[PATCH 2/2] drm/lima: driver for ARM Mali4xx GPUs

Koenig, Christian Christian.Koenig at amd.com
Fri Mar 15 08:06:08 UTC 2019


On 14.03.19 at 23:28, Eric Anholt wrote:
> Rob Herring <robh at kernel.org> writes:
>
>> On Thu, Mar 14, 2019 at 3:45 PM Dave Airlie <airlied at gmail.com> wrote:
>>> On Thu, 14 Feb 2019 at 19:12, Christian König via dri-devel
>>> <dri-devel at lists.freedesktop.org> wrote:
>>>> On 14.02.19 at 03:52, Alex Deucher via dri-devel wrote:
>>>>> [SNIP]
>>>>>>>>> +static int lima_ioctl_gem_va(struct drm_device *dev, void *data, struct drm_file *file)
>>>>>>>>> +{
>>>>>>>>> +       struct drm_lima_gem_va *args = data;
>>>>>>>>> +
>>>>>>>>> +       switch (args->op) {
>>>>>>>>> +       case LIMA_VA_OP_MAP:
>>>>>>>>> +               return lima_gem_va_map(file, args->handle, args->flags, args->va);
>>>>>>>>> +       case LIMA_VA_OP_UNMAP:
>>>>>>>>> +               return lima_gem_va_unmap(file, args->handle, args->va);
>>>>>>>> These map to GPU VAs. Why not do that on GEM object creation or
>>>>>>>> import, or when the objects are submitted with the cmd queue, as
>>>>>>>> other drivers do?
>>>>>>>>
>>>>>>>> To put it another way, these ioctls look different from what other
>>>>>>>> drivers do. Why do you need to do things differently? My understanding
>>>>>>>> is that best practice is to map and return the GPU offset when the GEM
>>>>>>>> object is created. This is what v3d does, I think Intel is moving to
>>>>>>>> that, and panfrost will do the same.
>>>>>>> I think it would be a good idea to look at the amdgpu driver.  This
>>>>>>> driver is heavily modeled after it.  Basically the GEM VA ioctl allows
>>>>>>> userspace to manage per-process (per-fd, really) virtual addresses.
>>>>>> Why do you want userspace to manage assigning VAs rather than having
>>>>>> the kernel do so? Exposing that detail to userspace means the driver
>>>>>> must support a per-process address space. Letting the kernel assign
>>>>>> addresses means it can be either a single address space or a
>>>>>> per-process address space. It seems more flexible to me to let the
>>>>>> kernel driver evolve without that ABI.
>>>>> Having it in userspace provides a lot more flexibility and makes it
>>>>> easier to support things like a unified address space between CPU and
>>>>> GPU. I guess it depends on the hw as to what the right choice is.
>>>> To summarize: we actually tried this approach with radeon, and it
>>>> turned out to be a really bad mistake.
>>>>
>>>> To implement features like partially resident textures and a shared
>>>> virtual address space, you absolutely need userspace to be in charge
>>>> of allocating virtual addresses.
>>>>
>>> I think not having this is fine for lima, but panfrost really should
>>> have it.
>>>
>>> If you can implement Vulkan you probably want this; nouveau doesn't
>>> have a Vulkan driver because of exactly this problem in its uAPI, so
>>> maybe adjust panfrost to do userspace-managed VMAs.
>> Wouldn't this just require an allocation flag to not map the BO up
>> front, plus new ioctls like the ones above to map and unmap at
>> specified VAs? Seems like we could add that when we get there.
> Sounds pretty reasonable to me.

I can only advise NOT to do this.

An address space manager in userspace is rather easy to implement, but
fixing up a UAPI without breaking existing applications isn't.
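
A minimal sketch of what such a userspace manager can look like, assuming
a first-fit free list over the GPU VA range (all names here are
illustrative, not taken from the lima patch):

#include <stdint.h>

/* Hypothetical sketch: a trivial first-fit VA range allocator kept
 * entirely in userspace.  The kernel only validates and installs the
 * mappings it is handed through VA map/unmap ioctls. */
struct va_hole {
	uint64_t offset;
	uint64_t size;
	struct va_hole *next;
};

struct va_manager {
	struct va_hole *holes;	/* sorted list of unused VA ranges */
};

/* Returns 'size' bytes of GPU VA at the given power-of-two alignment,
 * or 0 on failure. */
static uint64_t va_alloc(struct va_manager *mgr, uint64_t size,
			 uint64_t align)
{
	struct va_hole *h;

	for (h = mgr->holes; h; h = h->next) {
		uint64_t start = (h->offset + align - 1) & ~(align - 1);

		if (start + size <= h->offset + h->size) {
			/* Shrink the hole from the front.  A real
			 * allocator would split the hole when 'start' is
			 * not at its beginning and merge holes on free. */
			uint64_t taken = start + size - h->offset;

			h->offset += taken;
			h->size -= taken;
			return start;
		}
	}
	return 0;	/* address space exhausted */
}

The userspace driver then simply passes the returned VA to the kernel's
map ioctl, which only has to validate the range instead of owning the
allocation policy.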

This is also one of the reasons why we don't support Vulkan with radeon
on the SI hardware generation.

So you've got at least two drivers that went down this route and regret
it now, and I can perfectly understand if Dave says that any new DRM
driver should not make the same mistake again.
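
For reference, the amdgpu interface that the lima ioctl above is modeled
after looks roughly like this (paraphrased from
include/uapi/drm/amdgpu_drm.h; see the header for the authoritative
layout):

#include <linux/types.h>

#define AMDGPU_VA_OP_MAP	1
#define AMDGPU_VA_OP_UNMAP	2

struct drm_amdgpu_gem_va {
	__u32 handle;		/* GEM object handle */
	__u32 _pad;
	__u32 operation;	/* AMDGPU_VA_OP_* */
	__u32 flags;		/* AMDGPU_VM_PAGE_* access flags */
	__u64 va_address;	/* VA chosen by userspace, correctly aligned */
	__u64 offset_in_bo;	/* offset inside the BO to map */
	__u64 map_size;		/* size of the mapping */
};

The key point is that va_address is chosen by userspace, so the kernel
validates the requested range rather than dictating where a BO lives in
the GPU address space.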

Regards,
Christian.

