xe vs amdgpu userptr handling

Christian König christian.koenig at amd.com
Thu Feb 8 06:30:21 UTC 2024


On 08.02.24 at 01:36, Dave Airlie wrote:
> Just cc'ing some folks. I've also added another question.
>
> On Wed, 7 Feb 2024 at 21:08, Maíra Canal <mcanal at igalia.com> wrote:
>> Adding another point to this discussion: would it make sense to somehow
>> create a generic structure that all drivers, including shmem drivers,
>> could use?
>>
>> Best Regards,
>> - Maíra
>>
>> On 2/7/24 03:56, Dave Airlie wrote:
>>> I'm just looking over the userptr handling in both drivers, and of
>>> course they've chosen different ways to represent things. Again, this
>>> is a divergence that is just going to get more annoying over time, and
>>> eventually I'd like to make hmm/userptr handling as driver-independent
>>> as possible, so we get consistent semantics in userspace.
>>>
>>> AFAICS the main difference is that amdgpu builds the userptr handling
>>> inside a GEM object in the kernel, whereas xe doesn't bother creating
>>> a holding object and just handles things directly in the VM binding
>>> code.
>>>
>>> Is this just different thinking at different times here?
>>> Like, since we have VM BIND in xe, it made sense not to bother creating
>>> a GEM object for userptrs?
>>> Or are there other advantages to going one way or the other?
>>>
> So the current AMD code uses hmm to do userptr work, but xe doesn't.
> Again, why isn't xe using hmm here? I thought I remembered scenarios
> where plain mmu_notifiers weren't sufficient.

Well, amdgpu is using hmm_range_fault because that was made mandatory for
userptr handling.

And yes, pure mmu_notifiers are not sufficient; you need the sequence
number + retry mechanism that hmm_range_fault provides.

Otherwise you open up security holes you can push an elephant through.
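
For anyone who hasn't used the interface, the sequence-number + retry flow
described above looks roughly like the sketch below. This is a minimal
illustration pieced together from the hmm_range_fault / mmu_interval_notifier
kernel documentation, not code from amdgpu or xe; the struct userptr, its
mutex and the userptr_populate() helper are hypothetical placeholders.

#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/sched/mm.h>

/* Hypothetical per-userptr state; the field names are illustrative only. */
struct userptr {
        struct mmu_interval_notifier notifier;  /* registered on the user VA range */
        struct mutex lock;                      /* serializes device page table updates */
        unsigned long start, end;
        unsigned long *pfns;                    /* HMM_PFN_* encoded results */
};

static int userptr_populate(struct userptr *up)
{
        struct hmm_range range = {
                .notifier       = &up->notifier,
                .start          = up->start,
                .end            = up->end,
                .hmm_pfns       = up->pfns,
                .default_flags  = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
        };
        struct mm_struct *mm = up->notifier.mm;
        int ret;

        if (!mmget_not_zero(mm))
                return -EFAULT;         /* the owning process is exiting */

again:
        /* Snapshot the notifier sequence before walking the CPU page tables. */
        range.notifier_seq = mmu_interval_read_begin(&up->notifier);

        mmap_read_lock(mm);
        ret = hmm_range_fault(&range);
        mmap_read_unlock(mm);
        if (ret) {
                if (ret == -EBUSY)
                        goto again;     /* an invalidation raced with the walk */
                goto out_put;
        }

        mutex_lock(&up->lock);
        /*
         * If an mmu notifier invalidation ran after read_begin, the snapshot
         * in pfns[] is stale and must not reach the GPU page tables. This is
         * the check that plain mmu_notifiers alone do not give you.
         */
        if (mmu_interval_read_retry(&up->notifier, range.notifier_seq)) {
                mutex_unlock(&up->lock);
                goto again;
        }

        /* Safe to program the device page tables from pfns[] here. */

        mutex_unlock(&up->lock);
        ret = 0;
out_put:
        mmput(mm);
        return ret;
}

The important part is that the retry check and the device page table update
happen under the same driver lock that the notifier's invalidate callback
takes, so a stale snapshot can never be committed.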

Regards,
Christian.

>
> Dave.


