Making drm_gpuvm work across gpu devices
Christian König
christian.koenig at amd.com
Wed Jan 24 08:33:12 UTC 2024
On 23.01.24 at 20:37, Zeng, Oak wrote:
> [SNIP]
> Yes, most APIs are per-device based.
>
> One exception I know of is actually the kfd SVM API. If you look at the svm_ioctl function, it is per-process based. Each kfd_process represents a process across N GPU devices.
Yeah, and that was a big mistake in my opinion. We should really not do
that ever again.
> It needs to be said that kfd SVM represents a shared virtual address space across the CPU and all GPU devices on the system. This follows from the definition of SVM (shared virtual memory). This is very different from our legacy GPU *device* drivers, which work with only one device (i.e., if you want one device to access another device's memory, you have to use dma-buf export/import etc).
Exactly that thinking is what we have currently found to be a blocker
for virtualization projects. Having SVM as a device-independent feature
which somehow ties into the process address space turned out to be an
extremely bad idea.
The background is that this only works for some use cases but not all of
them.
What works much better is to just have a mirror functionality which says
that a range A..B of the process address space is mapped into a range
C..D of the GPU address space.
Those ranges can then be used to implement the SVM feature required by
higher level APIs; it is not something you need at the UAPI level or even
inside the low-level kernel memory management.
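To make that concrete, here is a minimal sketch in plain C (made-up
names, not the actual drm_gpuvm or KFD interfaces): every device keeps
its own list of mirror ranges, and translating a CPU address into a GPU
address is a per-device lookup rather than a property of the process
address space.

/*
 * Hypothetical sketch only, not the drm_gpuvm API: one mirror range
 * ties a CPU VA interval [cpu_start, cpu_start + size) to a GPU VA
 * interval [gpu_start, gpu_start + size) on a single device.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct mirror_range {
        uint64_t cpu_start;   /* A: start of the process VA range */
        uint64_t gpu_start;   /* C: start of the GPU VA range on this device */
        uint64_t size;        /* length of both ranges */
};

/* Per-device state: each GPU keeps its own set of mirror ranges. */
struct gpu_mirror {
        struct mirror_range *ranges;
        size_t nr_ranges;
};

/*
 * Translate one CPU VA into the GPU VA of this particular device, if
 * the address falls inside one of the mirrored ranges.
 */
static bool mirror_translate(const struct gpu_mirror *m, uint64_t cpu_addr,
                             uint64_t *gpu_addr)
{
        for (size_t i = 0; i < m->nr_ranges; i++) {
                const struct mirror_range *r = &m->ranges[i];

                if (cpu_addr >= r->cpu_start &&
                    cpu_addr < r->cpu_start + r->size) {
                        *gpu_addr = r->gpu_start + (cpu_addr - r->cpu_start);
                        return true;
                }
        }
        return false;
}

A higher level runtime (OpenCL, ROCm, Cuda etc.) can build its SVM
semantics on top of this by keeping A..B equal to C..D on every device
it cares about, while a virtualization use case can pick C..D
independently of the guest's addresses.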
When you talk about migrating memory to a device you also do this on a
per-device basis and *not* tied to the process address space. If you
then get crappy performance because userspace gave contradicting
information about where to migrate memory, then that's a bug in
userspace and not something the kernel should try to prevent somehow.
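Roughly the same applies to migration; sketched below with hypothetical
names, not an existing ioctl: the request names an explicit device and
an explicit range, and the policy of which device to pick stays
entirely in userspace.

#include <stdint.h>

/*
 * Hypothetical sketch: a migration request is always per device and
 * per range. Picking the target device is userspace policy.
 */
struct migrate_request {
        int      gpu_fd;      /* render node fd of the target device */
        uint64_t cpu_start;   /* start of the CPU VA range to migrate */
        uint64_t size;        /* length of the range */
};

static int migrate_to_device(const struct migrate_request *req)
{
        /*
         * A real driver would issue an ioctl on req->gpu_fd here that
         * moves the backing pages into that device's memory. If two
         * devices are told to migrate the same range back and forth,
         * the resulting ping-pong is a userspace bug; the kernel just
         * executes what it was asked to do.
         */
        (void)req;
        return 0;
}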
[SNIP]
>> I think if you start using the same drm_gpuvm for multiple devices you
>> will sooner or later start to run into the same mess we have seen with
>> KFD, where we moved more and more functionality from the KFD to the DRM
>> render node because we found that a lot of the stuff simply doesn't work
>> correctly with a single object to maintain the state.
> As I understand it, KFD is designed to work across devices. A single pseudo /dev/kfd device represents all hardware GPU devices. That is why during kfd open, multiple pdds (process device data) are created, one for each hardware device used by this process.
Yes, I'm perfectly aware of that. And I can only repeat myself that I
see this design as a rather extreme failure. And I think it's one of the
reasons why NVidia is so dominant with Cuda.
This whole approach KFD takes was designed with the idea of extending
the CPU process into the GPUs, but this idea only works for a few use
cases and is not something we should apply to drivers in general.
A very good example is the virtualization use case, where you end up
with CPU address != GPU address because the VAs actually come from the
guest VM and not the host process.
SVM is a high-level concept of OpenCL, Cuda, ROCm etc. This should not
have any influence on the design of the kernel UAPI.
If you want to do something similar as KFD for Xe I think you need to
get explicit permission to do this from Dave and Daniel and maybe even
Linus.
Regards,
Christian.