<!DOCTYPE html><html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>
On 26.01.24 09:21, Thomas Hellström wrote:<br>
<blockquote type="cite" cite="mid:7834e2fbe8052717a4e0fa44feafa544b1fedaa0.camel@linux.intel.com">
<pre class="moz-quote-pre" wrap="">Hi, all
On Thu, 2024-01-25 at 19:32 +0100, Daniel Vetter wrote:
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">On Wed, Jan 24, 2024 at 09:33:12AM +0100, Christian König wrote:
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">Am 23.01.24 um 20:37 schrieb Zeng, Oak:
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">[SNIP]
Yes, most APIs are per-device based.
One exception I know of is the kfd SVM API. If you look at the svm_ioctl
function, it is per-process based. Each kfd_process represents a process
across N GPU devices.
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">
Yeah, and that was a big mistake in my opinion. We should really not do
that ever again.
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">Need to say, kfd SVM represent a shared virtual address space
across CPU and all GPU devices on the system. This is by the
definition of SVM (shared virtual memory). This is very different
from our legacy gpu *device* driver which works for only one
device (i.e., if you want one device to access another device's
memory, you will have to use dma-buf export/import etc).
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">
Exactly that thinking is what we have currently found to be a blocker for
virtualization projects. Having SVM as a device-independent feature which
somehow ties into the process address space turned out to be an extremely
bad idea.

The background is that this only works for some use cases, but not all of
them.

What works much better is to just have a mirror functionality which says
that a range A..B of the process address space is mapped into a range C..D
of the GPU address space.

Those ranges can then be used to implement the SVM feature required by
higher-level APIs, and are not something you need at the UAPI level or
even inside the low-level kernel memory management.

When you talk about migrating memory to a device you also do this on a
per-device basis and *not* tied to the process address space. If you then
get crappy performance because userspace gave contradictory information
about where to migrate memory, then that's a bug in userspace and not
something the kernel should try to prevent somehow.
[SNIP]
</pre>
<blockquote type="cite">
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">I think if you start using the same drm_gpuvm for multiple
devices you
will sooner or later start to run into the same mess we have
seen with
KFD, where we moved more and more functionality from the KFD to
the DRM
render node because we found that a lot of the stuff simply
doesn't work
correctly with a single object to maintain the state.
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">As I understand it, KFD is designed to work across devices. A
single pseudo /dev/kfd device represent all hardware gpu devices.
That is why during kfd open, many pdd (process device data) is
created, each for one hardware device for this process.
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">
Yes, I'm perfectly aware of that. And I can only repeat myself that I see
this design as a rather extreme failure. And I think it's one of the
reasons why NVidia is so dominant with Cuda.

This whole approach KFD takes was designed with the idea of extending the
CPU process into the GPUs, but this idea only works for a few use cases
and is not something we should apply to drivers in general.

A very good example is virtualization use cases, where you end up with
CPU address != GPU address because the VAs are actually coming from the
guest VM and not the host process.

SVM is a high-level concept of OpenCL, Cuda, ROCm etc. This should not
have any influence on the design of the kernel UAPI.

If you want to do something similar to KFD for Xe, I think you need to
get explicit permission to do this from Dave and Daniel and maybe even
Linus.
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">
I think the one and only exception where an SVM uapi like in kfd makes
sense is if the _hardware_ itself, not the software-stack-defined
semantics that you've happened to build on top of that hw, enforces a 1:1
mapping with the cpu process address space.

Which means your hardware is using PASID, IOMMU-based translation, PCI-ATS
(address translation services) or whatever your hw calls it, and has _no_
device-side pagetables on top. Which, from what I've seen, all devices
with device-memory have, simply because they need some place to store
whether that memory is currently in device memory or should be translated
using PASID. Currently there's no gpu that works with PASID only, but
there are some on-cpu-die accelerator things that do work like that.

Maybe in the future there will be some accelerators that are fully cpu
cache coherent (including atomics) with something like CXL, where the
on-device memory is managed as normal system memory with struct page as
ZONE_DEVICE and accelerator va -> physical address translation is only
done with PASID ... but for now I haven't seen that, definitely not in
upstream drivers.

And the moment you have some per-device pagetables or per-device memory
management of some sort (like using gpuva mgr) then I'm 100% agreeing with
Christian that the kfd SVM model is too strict and not a great idea.

Cheers, Sima
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">
I'm trying to digest all the comments here. The end goal is to be able to
support something similar to this:
<a class="moz-txt-link-freetext" href="https://developer.nvidia.com/blog/simplifying-gpu-application-development-with-heterogeneous-memory-management/">https://developer.nvidia.com/blog/simplifying-gpu-application-development-with-heterogeneous-memory-management/</a>

Christian, if I understand you correctly, you're strongly suggesting not
to try to manage a common virtual address space across different devices
in the kernel, but merely to provide building blocks to do so, like for
example a generalized userptr with migration support using HMM. That way
each "mirror" of the CPU mm would be per device and inserted into the
gpu_vm just like any other gpu_vma, and user-space would dictate the
A..B -> C..D mapping by choosing the GPU_VA for the vma.</pre>
</blockquote>
<br>
Exactly that, yes.<br>
<br>
<span style="white-space: pre-wrap">
</span>
<blockquote type="cite" cite="mid:7834e2fbe8052717a4e0fa44feafa544b1fedaa0.camel@linux.intel.com">
<pre class="moz-quote-pre" wrap="">
Sima, it sounds like you're suggesting we shy away from hmm and not even
attempt to support this, except where it can be done using IOMMU SVA on
selected hardware?</pre>
</blockquote>
<br>
I think that comment goes more in the direction of: if you have
ATS/ATC/PRI-capable hardware which exposes the functionality to make
memory reads and writes directly into the address space of the CPU, then
yes, an SVM-only interface is ok because the hardware can't do anything
else. But as long as you have something like GPUVM, then please don't
restrict yourself.<br>
<br>
Which I totally agree with as well. The ATS/ATC/PRI combination doesn't
allow using separate page tables for the device and the CPU, and so also
no separate VAs.<br>
<br>
This was one of the reasons why we stopped using this approach for
AMD GPUs.<br>
<br>
Regards,<br>
Christian.<br>
<br>
<blockquote type="cite" cite="mid:7834e2fbe8052717a4e0fa44feafa544b1fedaa0.camel@linux.intel.com">
<pre class="moz-quote-pre" wrap="">Could you clarify a bit?
Thanks,
Thomas
</pre>
</blockquote>
<br>
</body>
</html>