New KFD ioctls: taking the skeletons out of the closet

Christian König christian.koenig at amd.com
Wed Mar 7 08:41:20 UTC 2018


Am 07.03.2018 um 00:34 schrieb Jerome Glisse:
> On Tue, Mar 06, 2018 at 05:44:41PM -0500, Felix Kuehling wrote:
>> Hi all,
>>
>> Christian raised two potential issues in a recent KFD upstreaming code
>> review that are related to the KFD ioctl APIs:
>>
>>   1. behaviour of -ERESTARTSYS
>>   2. transactional nature of KFD ioctl definitions, or lack thereof
>>
>> I appreciate constructive feedback, but I also want to encourage an
>> open-minded rather than a dogmatic approach to API definitions. So let
>> me take all the skeletons out of my closet and get these APIs reviewed
>> in the appropriate forum before we commit to them upstream. See the end
>> of this email for reference.
>>
>> The controversial part at this point is kfd_ioctl_map_memory_to_gpu. If
>> any of the other APIs raise concerns or questions, please ask.
>>
>> Because of the HSA programming model, KFD memory management APIs are
>> synchronous. There is no pipelining. Command submission to GPUs through
>> user mode queues does not involve KFD. This means KFD doesn't know what
>> memory is used by the GPUs and when it's used. That means, when the
>> map_memory_to_gpu ioctl returns to user mode, all memory mapping
>> operations are complete and the memory can be used by the CPUs or GPUs
>> immediately.
>>
>> HSA also uses a shared virtual memory model, so typically memory gets
>> mapped on multiple GPUs and CPUs at the same virtual address.
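For illustration, user mode would drive that ioctl roughly like the sketch
below, assuming the uapi header from the patch series (<linux/kfd_ioctl.h>).
It is only a sketch: the buffer handle would come from an earlier
AMDKFD_IOC_ALLOC_MEMORY_OF_GPU call and the GPU ids from the topology, both
placeholders here.

#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kfd_ioctl.h>

/* kfd_fd is the open /dev/kfd file descriptor; handle and gpu_ids are
 * placeholders for values obtained from earlier alloc/topology calls. */
static int map_to_gpus(int kfd_fd, uint64_t handle,
                       uint32_t *gpu_ids, uint32_t n_gpus)
{
        struct kfd_ioctl_map_memory_to_gpu_args args = {
                .handle = handle,
                .device_ids_array_ptr = (uintptr_t)gpu_ids,
                .n_devices = n_gpus,
        };

        /* Synchronous: when this returns 0, the page tables of all
         * requested GPUs are up to date and the memory is usable
         * immediately, from CPU and GPU, at the same virtual address. */
        if (ioctl(kfd_fd, AMDKFD_IOC_MAP_MEMORY_TO_GPU, &args)) {
                fprintf(stderr, "mapped on only %u of %u devices\n",
                        args.n_success, args.n_devices);
                return -1;
        }
        return 0;
}

There is nothing to flush or wait for afterwards; the return of the ioctl is
the completion.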
> Does this mean that GPU memory gets pinned? Or system memory, for that matter?
> This was discussed previously, but it really goes against the kernel mantra:
> the kernel no longer manages resources, and userspace can hog GPU memory or
> even system memory. This is bad!

Fortunately this time it is not about pinning.

All BOs which are part of the VM get a fence object attached when a user 
space queue is created.

Now, when TTM needs to evict one of those buffer objects, it will try to wait 
for this fence object, which in turn unmaps the user space queue from the 
hardware and waits for running work to finish.

After that TTM can move the BO around just like any normal GFX BO.
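The idea looks roughly like the kernel-side sketch below. This is not the
actual implementation; evict_process_queues() stands in for the real queue
preemption, and struct eviction_fence is just an illustrative name.

#include <linux/dma-fence.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

/* One eviction fence per process/VM; it is attached to all BOs of the
 * VM when a user space queue is created. */
struct eviction_fence {
        struct dma_fence base;
        spinlock_t lock;
        struct work_struct evict_work;
        void *process;          /* owning KFD process, placeholder */
};

static const char *ef_driver_name(struct dma_fence *f)
{
        return "kfd-eviction-sketch";
}

static const char *ef_timeline_name(struct dma_fence *f)
{
        return "eviction";
}

static void ef_evict_worker(struct work_struct *work)
{
        struct eviction_fence *ef =
                container_of(work, struct eviction_fence, evict_work);

        /* Unmap the user space queues from the hardware and wait for
         * running work to drain (placeholder for the real thing). */
        /* evict_process_queues(ef->process); */

        /* Only now does TTM's wait return and the BO may be moved. */
        dma_fence_signal(&ef->base);
}

static bool ef_enable_signaling(struct dma_fence *f)
{
        struct eviction_fence *ef =
                container_of(f, struct eviction_fence, base);

        /* Somebody (typically TTM during eviction) started waiting on
         * this fence: trigger the eviction instead of blocking forever. */
        schedule_work(&ef->evict_work);
        return true;
}

static const struct dma_fence_ops eviction_fence_ops = {
        .get_driver_name   = ef_driver_name,
        .get_timeline_name = ef_timeline_name,
        .enable_signaling  = ef_enable_signaling,
        .wait              = dma_fence_default_wait,
};

static struct eviction_fence *eviction_fence_create(void *process)
{
        struct eviction_fence *ef = kzalloc(sizeof(*ef), GFP_KERNEL);

        if (!ef)
                return NULL;
        ef->process = process;
        spin_lock_init(&ef->lock);
        INIT_WORK(&ef->evict_work, ef_evict_worker);
        dma_fence_init(&ef->base, &eviction_fence_ops, &ef->lock,
                       dma_fence_context_alloc(1), 1);
        return ef;
}

In practice the fence is added to the reservation objects of the BOs, so any
eviction attempt ends up waiting on it and thereby triggers the unmap.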

Regards,
Christian.

>
> Cheers,
> Jérôme


