[Lima] [PATCH v3 2/2] drm/lima: driver for ARM Mali4xx GPUs

Eric Anholt eric at anholt.net
Tue Mar 5 20:18:47 UTC 2019


Rob Herring <robh at kernel.org> writes:

> On Fri, Mar 1, 2019 at 11:23 PM Qiang Yu <yuq825 at gmail.com> wrote:
>>
>> > > +static struct lima_fence *lima_fence_create(struct lima_sched_pipe *pipe)
>> > > +{
>> > > +       struct lima_fence *fence;
>> > > +
>> > > +       fence = kmem_cache_zalloc(lima_fence_slab, GFP_KERNEL);
>> >
>> > Out of curiosity, what is the benefit of using a separate slab here?
>> > If this is beneficial, then other drivers should do this too and it
>> > should be common. Otherwise, it adds some complexity.
>>
>> The fence is a frequently allocated and freed struct, so it gets its own
>> slab. It is also used in a get/put pattern, so it may outlive a struct it
>> would otherwise be embedded in. This follows what the amdgpu driver does.
>>
>> >
>> > And maybe the slab should be initialized in probe rather than module_init.
>> >
>> Either way is OK, but keeping it in module init makes it easier to avoid
>> initializing it twice when there are two devices.
>
> True, but I was thinking more about initializing it for 0 devices
> which can be common if built-in on a multi-platform kernel.
>
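(For anyone skimming: the slab pattern being discussed looks roughly like
the sketch below. This is only illustrative -- the lima_fence and
lima_fence_slab names come from the patch, but the module hooks, function
names and cache flags here are my assumptions, not necessarily what the
patch does.)

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/dma-fence.h>

struct lima_fence {
	struct dma_fence base;
	/* driver-specific members elided */
};

/* One cache shared by all devices, created once at module load. */
static struct kmem_cache *lima_fence_slab;

static int __init lima_slab_init(void)
{
	lima_fence_slab = kmem_cache_create("lima_fence",
					    sizeof(struct lima_fence), 0,
					    SLAB_HWCACHE_ALIGN, NULL);
	return lima_fence_slab ? 0 : -ENOMEM;
}

static void __exit lima_slab_fini(void)
{
	kmem_cache_destroy(lima_fence_slab);
}

module_init(lima_slab_init);
module_exit(lima_slab_fini);
MODULE_LICENSE("GPL");
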
>> > > +int lima_vm_bo_add(struct lima_vm *vm, struct lima_bo *bo, bool create)
>> > > +{
>> > > +       struct lima_bo_va *bo_va;
>> > > +       int err;
>> > > +
>> > > +       mutex_lock(&bo->lock);
>> > > +
>> > > +       bo_va = lima_vm_bo_find(vm, bo);
>> > > +       if (bo_va) {
>> > > +               bo_va->ref_count++;
>> > > +               mutex_unlock(&bo->lock);
>> > > +               return 0;
>> > > +       }
>> > > +
>> > > +       /* should not create new bo_va if not asked by caller */
>> > > +       if (!create) {
>> > > +               mutex_unlock(&bo->lock);
>> > > +               return -ENOENT;
>> > > +       }
>> > > +
>> > > +       bo_va = kzalloc(sizeof(*bo_va), GFP_KERNEL);
>> > > +       if (!bo_va) {
>> > > +               err = -ENOMEM;
>> > > +               goto err_out0;
>> > > +       }
>> > > +
>> > > +       bo_va->vm = vm;
>> > > +       bo_va->ref_count = 1;
>> > > +
>> > > +       mutex_lock(&vm->lock);
>> > > +
>> > > +       err = drm_mm_insert_node(&vm->mm, &bo_va->node, bo->gem.size);
>> > > +       if (err)
>> > > +               goto err_out1;
>> > > +
>> > > +       err = lima_vm_map_page_table(vm, bo->pages_dma_addr, bo_va->node.start,
>> > > +                                    bo_va->node.start + bo_va->node.size - 1);
>> > > +       if (err)
>> > > +               goto err_out2;
>> > > +
>> > > +       mutex_unlock(&vm->lock);
>> > > +
>> > > +       list_add_tail(&bo_va->list, &bo->va);
>> >
>> > So you can have 1 BO at multiple VAs? Is that really needed?
>> >
>> Actually a BO can't have multiple VAs within a single VM, but it can have
>> one VA in each VM. When a BO is exported/imported between two processes,
>> e.g. the X server and a client, the two processes have different VMs, so
>> we can't guarantee it is mapped at the same address in both.
>
> Right, but when you import a BO, a new BO struct is created and
> therefore a new list. If there's only 1 VA, then you don't need a
> list. Just move 'node' into lima_bo. (It is possible I missed some
> detail though.)

You only make a new GEM BO struct when importing a new dmabuf into the
driver -- exports/imports between processes share the same GEM BO struct
(unless I've misread what you're saying).
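
To make the data structure we're talking about concrete, here is a rough
sketch (only the names come from the patch; the exact member layout is my
guess from the quoted hunks): each lima_bo carries a list of lima_bo_va
entries, at most one per lima_vm, so the same BO can end up at a different
GPU VA in each process's VM.

#include <linux/list.h>
#include <linux/mutex.h>
#include <drm/drm_gem.h>
#include <drm/drm_mm.h>

struct lima_vm;

struct lima_bo_va {
	struct list_head list;		/* link in lima_bo.va */
	struct lima_vm *vm;		/* the VM this mapping lives in */
	struct drm_mm_node node;	/* VA range allocated in that VM */
	unsigned int ref_count;
};

struct lima_bo {
	struct drm_gem_object gem;
	struct mutex lock;		/* protects the va list */
	struct list_head va;		/* one lima_bo_va per VM */
};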