[PATCH v3 2/2] drm/lima: driver for ARM Mali4xx GPUs

Qiang Yu yuq825 at gmail.com
Sat Mar 2 05:22:53 UTC 2019


> > +static struct lima_fence *lima_fence_create(struct lima_sched_pipe *pipe)
> > +{
> > +       struct lima_fence *fence;
> > +
> > +       fence = kmem_cache_zalloc(lima_fence_slab, GFP_KERNEL);
>
> Out of curiosity, what is the benefit of using a separate slab here?
> If this is beneficial, then other drivers should do this too and it
> should be common. Otherwise, it adds some complexity.

A fence is allocated and freed very frequently, so it gets its own slab. It is
also used in a get/put pattern, so it may live longer than the struct it would
otherwise be embedded in. This approach is taken from the amdgpu driver.

>
> And maybe the slab should be initialzed in probe rather than module_init.
>
Either way is OK. But keeping it in module init makes it easy to avoid
initializing the slab twice when there are two devices.

> > +int lima_vm_bo_add(struct lima_vm *vm, struct lima_bo *bo, bool create)
> > +{
> > +       struct lima_bo_va *bo_va;
> > +       int err;
> > +
> > +       mutex_lock(&bo->lock);
> > +
> > +       bo_va = lima_vm_bo_find(vm, bo);
> > +       if (bo_va) {
> > +               bo_va->ref_count++;
> > +               mutex_unlock(&bo->lock);
> > +               return 0;
> > +       }
> > +
> > +       /* should not create new bo_va if not asked by caller */
> > +       if (!create) {
> > +               mutex_unlock(&bo->lock);
> > +               return -ENOENT;
> > +       }
> > +
> > +       bo_va = kzalloc(sizeof(*bo_va), GFP_KERNEL);
> > +       if (!bo_va) {
> > +               err = -ENOMEM;
> > +               goto err_out0;
> > +       }
> > +
> > +       bo_va->vm = vm;
> > +       bo_va->ref_count = 1;
> > +
> > +       mutex_lock(&vm->lock);
> > +
> > +       err = drm_mm_insert_node(&vm->mm, &bo_va->node, bo->gem.size);
> > +       if (err)
> > +               goto err_out1;
> > +
> > +       err = lima_vm_map_page_table(vm, bo->pages_dma_addr, bo_va->node.start,
> > +                                    bo_va->node.start + bo_va->node.size - 1);
> > +       if (err)
> > +               goto err_out2;
> > +
> > +       mutex_unlock(&vm->lock);
> > +
> > +       list_add_tail(&bo_va->list, &bo->va);
>
> So you can have 1 BO at multiple VAs? Is that really needed?
>
Actually one BO can't have multiple VAs within a single VM, but it can have one
VA in each VM. When a BO is exported/imported between two processes, e.g. the
X server and a client, the two processes have different VMs, so we can't
guarantee it gets mapped at the same address in both.

Regards,
Qiang

