Implement svm without BO concept in xe driver

Ruhl, Michael J michael.j.ruhl at intel.com
Tue Aug 22 11:43:33 UTC 2023


>-----Original Message-----
>From: Felix Kuehling <felix.kuehling at amd.com>
>Sent: Monday, August 21, 2023 4:57 PM
>To: Zeng, Oak <oak.zeng at intel.com>; Dave Airlie <airlied at gmail.com>
>Cc: Brost, Matthew <matthew.brost at intel.com>; Thomas Hellström
><thomas.hellstrom at linux.intel.com>; Philip Yang <Philip.Yang at amd.com>;
>Welty, Brian <brian.welty at intel.com>; dri-devel at lists.freedesktop.org;
>Christian König <christian.koenig at amd.com>; Vishwanathapura, Niranjana
><niranjana.vishwanathapura at intel.com>; intel-xe at lists.freedesktop.org;
>Ruhl, Michael J <michael.j.ruhl at intel.com>
>Subject: Re: Implement svm without BO concept in xe driver
>
>
>On 2023-08-21 15:41, Zeng, Oak wrote:
>>> I have thought about emulating BO allocation APIs on top of system SVM.
>>> This was in the context of KFD where memory management is not tied into
>>> command submissions APIs, which would add a whole other layer of
>>> complexity. The main unsolved (unsolvable?) problem I ran into was, that
>>> there is no way to share SVM memory as DMABufs. So there is no good
>way
>>> to support applications that expect to share memory in that way.
>> Great point. I also discussed the dmabuf topic with Mike (cc'ed). dmabuf is
>> a technology created specifically for BO drivers (and other drivers) to
>> share buffers between devices. HMM/system SVM doesn't need this technology:
>> malloc'ed memory is by nature already shared between different devices (in
>> one process) and the CPU. We can simply submit GPU kernels to all devices
>> with malloc'ed memory and let the KMD decide the memory placement (such as
>> map-in-place or migrate). There is no need for buffer export/import in the
>> HMM/system SVM world.
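
To make the model in Oak's paragraph above concrete, here is a minimal
userspace sketch: the GPU consumes the malloc'ed pointer directly and the
KMD decides placement. The drv_svm_exec payload and DRV_IOCTL_SVM_EXEC
ioctl are hypothetical placeholders, not an existing uAPI; only malloc()
and the overall flow are real.

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>

struct drv_svm_exec {                   /* hypothetical uAPI payload */
        uint64_t kernel_addr;           /* GPU kernel entry point */
        uint64_t data_addr;             /* plain CPU pointer; same VA on GPU */
        uint64_t data_size;
};

/* hypothetical ioctl number, for illustration only */
#define DRV_IOCTL_SVM_EXEC _IOW('d', 0x40, struct drv_svm_exec)

int submit_to_all_devices(const int *dev_fds, int ndev, uint64_t kernel_addr)
{
        size_t size = 1 << 20;
        void *buf = malloc(size);       /* ordinary malloc'ed memory */

        if (!buf)
                return -1;

        for (int i = 0; i < ndev; i++) {
                struct drv_svm_exec exec = {
                        .kernel_addr = kernel_addr,
                        .data_addr = (uintptr_t)buf, /* no export/import step */
                        .data_size = size,
                };

                /* KMD faults/migrates pages on device access (HMM) */
                if (ioctl(dev_fds[i], DRV_IOCTL_SVM_EXEC, &exec))
                        return -1;
        }

        free(buf);                      /* completion wait elided for brevity */
        return 0;
}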
>
>I disagree. DMABuf can be used for sharing memory between processes, and
>it can be used for sharing memory with third-party devices via PCIe P2P
>(e.g. a Mellanox NIC). You cannot easily do that with malloc'ed memory:
>POSIX IPC requires that you know at allocation time that you'll be sharing
>the memory, it adds overhead, and because it's file-backed, it's currently
>incompatible with migration. And HMM currently doesn't have a solution for
>P2P; any access by a different device causes a migration to system memory.
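
By contrast, this is roughly the export step that only a handle-based (BO)
allocation can provide. The libdrm calls below are real; GEM allocation,
error handling, and the fd passing are trimmed in this sketch:

#include <stdint.h>
#include <xf86drm.h>

int export_bo(int drm_fd, uint32_t gem_handle)
{
        int dmabuf_fd = -1;

        /* Export the BO as a dma-buf file descriptor. */
        if (drmPrimeHandleToFD(drm_fd, gem_handle,
                               DRM_CLOEXEC | DRM_RDWR, &dmabuf_fd))
                return -1;

        /*
         * dmabuf_fd can now be sent to another process over a Unix
         * socket (SCM_RIGHTS) and imported there with
         * drmPrimeFDToHandle(), or handed to a P2P-capable importer
         * such as an RDMA NIC driver. A raw malloc'ed SVM pointer has
         * no handle to export this way.
         */
        return dmabuf_fd;
}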

Hey Oak,

I think we were discussing this solution in the context of using the P2P_DMA
feature. It has an allocation path and device-to-device capabilities.
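
Roughly, the allocation side looks like this (kernel-side sketch; the
pci_p2pdma_* calls are the in-tree API as I recall it, while the wrapper
function is illustrative only):

#include <linux/pci.h>
#include <linux/pci-p2pdma.h>

/*
 * At probe time the provider publishes part of a BAR as p2p memory:
 *
 *     pci_p2pdma_add_resource(provider, bar, size, 0);
 */
static void *example_p2p_alloc(struct pci_dev *provider,
                               struct device *client, size_t size)
{
        /* Verify the client can reach the provider's memory through
         * the PCIe fabric (e.g. a common upstream switch). */
        if (pci_p2pdma_distance(provider, client, true) < 0)
                return NULL;

        /* Device memory the client can DMA to/from directly, without
         * a bounce through system memory. */
        return pci_alloc_p2pmem(provider, size);
}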

Mike


>Regards,
>   Felix
>
>
>>
>> So yes, from a buffer-sharing perspective, the design philosophy is also
>> very different.
>>
>> Thanks,
>> Oak
>>

