[PATCH v3 2/2] drm/lima: driver for ARM Mali4xx GPUs

Qiang Yu yuq825 at gmail.com
Wed Mar 6 02:01:22 UTC 2019


On Wed, Mar 6, 2019 at 4:18 AM Eric Anholt <eric at anholt.net> wrote:
>
> Rob Herring <robh at kernel.org> writes:
>
> > On Fri, Mar 1, 2019 at 11:23 PM Qiang Yu <yuq825 at gmail.com> wrote:
> >>
> >> > > +static struct lima_fence *lima_fence_create(struct lima_sched_pipe *pipe)
> >> > > +{
> >> > > +       struct lima_fence *fence;
> >> > > +
> >> > > +       fence = kmem_cache_zalloc(lima_fence_slab, GFP_KERNEL);
> >> >
> >> > Out of curiosity, what is the benefit of using a separate slab here?
> >> > If this is beneficial, then other drivers should do this too and it
> >> > should be common. Otherwise, it adds some complexity.
> >>
> >> The fence struct is allocated and freed pretty frequently, so it gets its
> >> own slab. It's also used in a get/put pattern, so it may outlive the struct
> >> it would otherwise be embedded in. This is taken from the amdgpu driver.
> >>
> >> >
> >> > And maybe the slab should be initialzed in probe rather than module_init.
> >> >
> >> Either way is OK. But keeping it in module init makes it easier to avoid
> >> initializing it twice when there are two devices.
> >
> > True, but I was thinking more about initializing it for 0 devices
> > which can be common if built-in on a multi-platform kernel.

I think it's more common to build this driver as a loadable module.
But your concern makes sense for the built-in case; I'll move this to
probe.

> >
> >> > > +int lima_vm_bo_add(struct lima_vm *vm, struct lima_bo *bo, bool create)
> >> > > +{
> >> > > +       struct lima_bo_va *bo_va;
> >> > > +       int err;
> >> > > +
> >> > > +       mutex_lock(&bo->lock);
> >> > > +
> >> > > +       bo_va = lima_vm_bo_find(vm, bo);
> >> > > +       if (bo_va) {
> >> > > +               bo_va->ref_count++;
> >> > > +               mutex_unlock(&bo->lock);
> >> > > +               return 0;
> >> > > +       }
> >> > > +
> >> > > +       /* should not create new bo_va if not asked by caller */
> >> > > +       if (!create) {
> >> > > +               mutex_unlock(&bo->lock);
> >> > > +               return -ENOENT;
> >> > > +       }
> >> > > +
> >> > > +       bo_va = kzalloc(sizeof(*bo_va), GFP_KERNEL);
> >> > > +       if (!bo_va) {
> >> > > +               err = -ENOMEM;
> >> > > +               goto err_out0;
> >> > > +       }
> >> > > +
> >> > > +       bo_va->vm = vm;
> >> > > +       bo_va->ref_count = 1;
> >> > > +
> >> > > +       mutex_lock(&vm->lock);
> >> > > +
> >> > > +       err = drm_mm_insert_node(&vm->mm, &bo_va->node, bo->gem.size);
> >> > > +       if (err)
> >> > > +               goto err_out1;
> >> > > +
> >> > > +       err = lima_vm_map_page_table(vm, bo->pages_dma_addr, bo_va->node.start,
> >> > > +                                    bo_va->node.start + bo_va->node.size - 1);
> >> > > +       if (err)
> >> > > +               goto err_out2;
> >> > > +
> >> > > +       mutex_unlock(&vm->lock);
> >> > > +
> >> > > +       list_add_tail(&bo_va->list, &bo->va);
> >> >
> >> > So you can have 1 BO at multiple VAs? Is that really needed?
> >> >
> >> Actually 1 BO can't have multiple VAs in a single VM, but it has one VA in
> >> each VM. When a BO is exported/imported between two processes, e.g. xserver
> >> and a client, the two processes have different VMs, so we can't guarantee
> >> it can be mapped at the same address in both.
> >
> > Right, but when you import a BO, a new BO struct is created and
> > therefore a new list. If there's only 1 VA, then you don't need a
> > list. Just move 'node' into lima_bo. (It is possible I missed some
> > detail though.)
>
> You only make a new GEM BO struct on importing a new dmabuf into the
> driver -- export/imports between process share the same GEM BO struct
> (unless I've misread what you're saying).

To be more clear, a new GEM BO struct gets created only when importing
a dmabuf from a different device.

Regards,
Qiang
