[Lima] [PATCH 2/2] drm/lima: driver for ARM Mali4xx GPUs

Eric Anholt eric at anholt.net
Fri Mar 15 16:05:38 UTC 2019


"Koenig, Christian" <Christian.Koenig at amd.com> writes:

> Am 14.03.19 um 23:28 schrieb Eric Anholt:
>> Rob Herring <robh at kernel.org> writes:
>>
>>> On Thu, Mar 14, 2019 at 3:45 PM Dave Airlie <airlied at gmail.com> wrote:
>>>> On Thu, 14 Feb 2019 at 19:12, Christian König via dri-devel
>>>> <dri-devel at lists.freedesktop.org> wrote:
>>>>> Am 14.02.19 um 03:52 schrieb Alex Deucher via dri-devel:
>>>>>> [SNIP]
>>>>>>>>>> +static int lima_ioctl_gem_va(struct drm_device *dev, void *data, struct drm_file *file)
>>>>>>>>>> +{
>>>>>>>>>> +       struct drm_lima_gem_va *args = data;
>>>>>>>>>> +
>>>>>>>>>> +       switch (args->op) {
>>>>>>>>>> +       case LIMA_VA_OP_MAP:
>>>>>>>>>> +               return lima_gem_va_map(file, args->handle, args->flags, args->va);
>>>>>>>>>> +       case LIMA_VA_OP_UNMAP:
>>>>>>>>>> +               return lima_gem_va_unmap(file, args->handle, args->va);
>>>>>>>>> These are mapping to a GPU VA. Why not do that on GEM object creation
>>>>>>>>> or import, or when the objects are submitted with the cmd queue, as
>>>>>>>>> other drivers do?
>>>>>>>>>
>>>>>>>>> To put it another way, these ioctls look different from what other
>>>>>>>>> drivers do. Why do you need to do things differently? My understanding
>>>>>>>>> is best practice is to map and return the GPU offset when the GEM
>>>>>>>>> object is created. This is what v3d does. I think Intel is moving to
>>>>>>>>> that. And panfrost will do that.
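
(For reference, a minimal sketch of the map-at-create scheme Rob
describes: drm_mm is the real DRM range manager, but the demo_* names
here are made up for illustration.)

#include <drm/drm_gem.h>
#include <drm/drm_mm.h>

struct demo_gem_object {
        struct drm_gem_object base;
        struct drm_mm_node vma;         /* GPU VA range owned by this BO */
};

static int demo_gem_assign_va(struct drm_mm *mm, struct demo_gem_object *bo,
                              u64 size)
{
        /* The kernel picks a free range at create time; userspace only
         * ever learns the resulting offset and never chooses one. */
        return drm_mm_insert_node(mm, &bo->vma, size);
}

bo->vma.start is then what the create ioctl would hand back to
userspace as the GPU offset, v3d-style.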
>>>>>>>> I think it would be a good idea to look at the amdgpu driver.  This
>>>>>>>> driver is heavily modeled after it.  Basically the GEM VA ioctl allows
>>>>>>>> userspace to manage per process (per fd really) virtual addresses.
>>>>>>> Why do you want userspace to manage assigning VAs versus the kernel to
>>>>>>> do so? Exposing that detail to userspace means the driver must support
>>>>>>> a per process address space. Letting the kernel assign addresses means
>>>>>>> it can either be a single address space or be a per process address
>>>>>>> space. It seems to me more flexible to allow the kernel driver to
>>>>>>> evolve without that ABI.
>>>>>> Having it in userspace provides a lot more flexibility and makes it
>>>>>> easier to support things like a unified address space between CPU and
>>>>>> GPU. I guess it depends on the hw which choice is right.
>>>>> To summarize: we actually tried this approach with the radeon driver,
>>>>> and it turned out to be a really bad mistake.
>>>>>
>>>>> To implement features like partially resident textures and a shared
>>>>> virtual address space, you absolutely need userspace to be in charge
>>>>> of allocating virtual addresses.
>>>>>
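
(Side note to make Christian's point concrete: in this model the
userspace driver owns the address space and runs something like the
toy allocator below, and the kernel only maps BOs at the addresses it
is handed. Hypothetical names, not amdgpu's actual code.)

#include <stdint.h>

struct va_heap {
        uint64_t cursor;        /* next free GPU address */
        uint64_t end;           /* end of the VA range */
};

/* Trivial bump allocator standing in for a real userspace VA manager;
 * align must be a power of two. */
static uint64_t va_alloc(struct va_heap *heap, uint64_t size, uint64_t align)
{
        uint64_t va = (heap->cursor + align - 1) & ~(align - 1);

        if (va + size > heap->end)
                return 0;       /* out of address space */
        heap->cursor = va + size;
        return va;
}

For a shared CPU/GPU address space the UMD skips the allocator entirely
and asks for va == the CPU pointer, which is precisely the placement
decision the kernel can't make on userspace's behalf.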
>>>> I think not having this is fine for lima, but panfrost really
>>>> should have it.
>>>>
>>>> If you can implement Vulkan you probably want this; nouveau doesn't
>>>> have a Vulkan driver precisely because of this problem in its uAPI,
>>>> so maybe adjust panfrost to do userspace-managed VMAs.
>>> Wouldn't this just require an allocation flag to not map the BO up
>>> front and then new ioctls like above to map and unmap at specified
>>> VAs? Seems like we could add that when we get there.
>> Sounds pretty reasonable to me.
>
> I can only advise to NOT do this.
>
> An address space manager in userspace is rather easy to do, but fixing
> up uAPI without breaking existing applications isn't.

Can you expand on what goes wrong with Rob's idea?  The only thing I can
come up with is that maybe dmabuf imports don't have a flag for
don't-automatically-map?
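
If we do end up needing it on top of map-at-create, my mental model of
Rob's suggestion is roughly the following uAPI (all names hypothetical,
not from this patch):

#define DEMO_BO_CREATE_NO_VA    (1 << 0) /* don't assign a GPU VA at create */

struct drm_demo_gem_create {
        __u32 size;
        __u32 flags;            /* DEMO_BO_CREATE_NO_VA */
        __u32 handle;           /* out: GEM handle */
        __u32 pad;
};

struct drm_demo_gem_va {
        __u32 handle;
        __u32 op;               /* DEMO_VA_OP_MAP or DEMO_VA_OP_UNMAP */
        __u64 va;               /* userspace-chosen GPU address */
};

A dmabuf import path would need the same opt-out, which is the part I'm
not sure we have a story for.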