Implement svm without BO concept in xe driver

Zeng, Oak oak.zeng at intel.com
Tue Aug 22 17:50:39 UTC 2023


> -----Original Message-----
> From: Ruhl, Michael J <michael.j.ruhl at intel.com>
> Sent: August 22, 2023 7:44 AM
> To: Felix Kuehling <felix.kuehling at amd.com>; Zeng, Oak <oak.zeng at intel.com>;
> Dave Airlie <airlied at gmail.com>
> Cc: Brost, Matthew <matthew.brost at intel.com>; Thomas Hellström
> <thomas.hellstrom at linux.intel.com>; Philip Yang <Philip.Yang at amd.com>;
> Welty, Brian <brian.welty at intel.com>; dri-devel at lists.freedesktop.org; Christian
> König <christian.koenig at amd.com>; Vishwanathapura, Niranjana
> <niranjana.vishwanathapura at intel.com>; intel-xe at lists.freedesktop.org
> Subject: RE: Implement svm without BO concept in xe driver
> 
> >-----Original Message-----
> >From: Felix Kuehling <felix.kuehling at amd.com>
> >Sent: Monday, August 21, 2023 4:57 PM
> >To: Zeng, Oak <oak.zeng at intel.com>; Dave Airlie <airlied at gmail.com>
> >Cc: Brost, Matthew <matthew.brost at intel.com>; Thomas Hellström
> ><thomas.hellstrom at linux.intel.com>; Philip Yang <Philip.Yang at amd.com>;
> >Welty, Brian <brian.welty at intel.com>; dri-devel at lists.freedesktop.org;
> >Christian König <christian.koenig at amd.com>; Vishwanathapura, Niranjana
> ><niranjana.vishwanathapura at intel.com>; intel-xe at lists.freedesktop.org;
> >Ruhl, Michael J <michael.j.ruhl at intel.com>
> >Subject: Re: Implement svm without BO concept in xe driver
> >
> >
> >On 2023-08-21 15:41, Zeng, Oak wrote:
> >>> I have thought about emulating BO allocation APIs on top of system SVM.
> >>> This was in the context of KFD where memory management is not tied into
> >>> command submission APIs, which would add a whole other layer of
> >>> complexity. The main unsolved (unsolvable?) problem I ran into was that
> >>> there is no way to share SVM memory as DMABufs. So there is no good way
> >>> to support applications that expect to share memory in that way.
> >> Great point. I also discussed the dmabuf topic with Mike (cc'ed). dmabuf is
> >> a technology created specifically for BO drivers (and other drivers) to
> >> share buffers between devices. HMM/system SVM doesn't need this technology:
> >> malloc'ed memory is by nature already shared between different devices
> >> (within one process) and the CPU. We can simply submit GPU kernels to all
> >> devices with malloc'ed memory and let the KMD decide the memory placement
> >> (such as map in place or migrate). There is no need for buffer
> >> export/import in the HMM/system SVM world.
> >
> >I disagree. DMABuf can be used for sharing memory between processes. And
> >it can be used for sharing memory with 3rd-party devices via PCIe P2P
> >(e.g. a Mellanox NIC). You cannot easily do that with malloc'ed memory.
> >POSIX IPC requires that you know that you'll be sharing the memory at
> >allocation time. It adds overhead. And because it's file-backed, it's
> >currently incompatible with migration. And HMM currently doesn't have a
> >solution for P2P. Any access by a different device causes a migration to
> >system memory.
> 
> Hey Oak,
> 
> I think we were discussing this solution in the context of using the P2P_DMA
> feature.  This has an allocation path and device-to-device capabilities.


I was thinking of sharing malloc'ed memory between the CPU and multiple devices inside one process. I thought this should work, but prompted by Felix's words above I looked into the details. I now agree with Felix that this doesn't work with HMM.
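
To make concrete what I had in mind, below is a minimal userspace sketch of the in-process model. The gpu_exec() call and its argument struct are hypothetical stand-ins, not the real xe uAPI (stubbed out so the sketch compiles); the only point is that the same malloc'ed pointer is handed to every device with no export/import step:

#include <stdlib.h>
#include <string.h>

struct gpu_exec_args {
	void   *arg_ptr;	/* plain CPU pointer, no BO handle */
	size_t  size;
};

/* Hypothetical stand-in for a driver exec ioctl -- NOT the real xe uAPI. */
static int gpu_exec(int device_fd, struct gpu_exec_args *args)
{
	(void)device_fd;
	(void)args;
	return 0;		/* stub so the sketch compiles */
}

int main(void)
{
	size_t size = 1 << 20;
	char *buf = malloc(size);	/* ordinary anonymous memory */

	if (!buf)
		return 1;
	memset(buf, 0, size);

	struct gpu_exec_args args = { .arg_ptr = buf, .size = size };

	/*
	 * The same pointer is valid on every device in the process;
	 * the KMD decides placement (map in place or migrate) when a
	 * device faults on the address.  No dmabuf export/import.
	 */
	gpu_exec(/* dev0 fd */ 3, &args);
	gpu_exec(/* dev1 fd */ 4, &args);

	free(buf);
	return 0;
}

The problem Felix points out shows up at the second gpu_exec(): HMM has no P2P solution, so the second device's access migrates the pages back to system memory instead of reaching the first device's VRAM.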

And as Felix pointed out, POSIX IPC also doesn't work with HMM. Theoretically a driver could do a similar migration between device memory and file-backed memory, just as we do with anonymous memory, but I am not sure whether people want to do that.
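
For reference, this is why the POSIX IPC case is file-backed and therefore runs into the migration limitation Felix mentioned: the sharing decision is made at allocation time through a named shared-memory object, so the pages come from tmpfs rather than anonymous memory. A minimal sketch using the standard POSIX calls:

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	size_t size = 1 << 20;

	/* The sharing decision is made here, at allocation time. */
	int fd = shm_open("/svm_demo", O_CREAT | O_RDWR, 0600);
	if (fd < 0)
		return 1;
	if (ftruncate(fd, (off_t)size) < 0)
		return 1;

	/*
	 * MAP_SHARED with an fd means the pages are file-backed (tmpfs),
	 * not anonymous -- the case HMM migration currently can't handle.
	 */
	void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
	if (buf == MAP_FAILED)
		return 1;

	/* ... another process can shm_open("/svm_demo") and mmap it ... */

	munmap(buf, size);
	close(fd);
	shm_unlink("/svm_demo");
	return 0;
}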

Anyway, buffer sharing with HMM/system SVM seems to be a big open question. I will not try to solve this problem for now.
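
For contrast, this is what the BO world gives us that system SVM currently has no answer to: a BO handle can be exported as a dmabuf fd and handed to another process or a third-party device. A short sketch using the generic libdrm PRIME helpers; the BO handle is assumed to come from a driver-specific GEM create ioctl (e.g. an xe or amdgpu allocation):

#include <stdint.h>
#include <xf86drm.h>

/* Export a GEM BO as a dmabuf fd that another process can import. */
int export_bo(int drm_fd, uint32_t bo_handle, int *dmabuf_fd)
{
	return drmPrimeHandleToFD(drm_fd, bo_handle,
				  DRM_CLOEXEC | DRM_RDWR, dmabuf_fd);
}

/* Import a dmabuf fd back as a GEM handle on (another) device. */
int import_bo(int drm_fd, int dmabuf_fd, uint32_t *bo_handle)
{
	return drmPrimeFDToHandle(drm_fd, dmabuf_fd, bo_handle);
}

There is no equivalent of that dmabuf fd for a bare malloc'ed pointer, which is exactly the open question above.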

Cheers,
Oak

> 
> Mike
> 
> 
> >Regards,
> >   Felix
> >
> >
> >>
> >> So yes from buffer sharing perspective, the design philosophy is also very
> >different.
> >>
> >> Thanks,
> >> Oak
> >>

