Design session notes: GPU acceleration in Xen
Christian König
christian.koenig at amd.com
Tue Jun 18 06:33:38 UTC 2024
On 18.06.24 at 02:57, Demi Marie Obenour wrote:
> On Mon, Jun 17, 2024 at 10:46:13PM +0200, Marek Marczykowski-Górecki
> wrote:
> > On Mon, Jun 17, 2024 at 09:46:29AM +0200, Roger Pau Monné wrote:
> >> On Sun, Jun 16, 2024 at 08:38:19PM -0400, Demi Marie Obenour wrote:
> >>> In both cases, the device physical
> >>> addresses are identical to dom0’s physical addresses.
> >>
> >> Yes, but a PV dom0 physical address space can be very scattered.
> >>
> >> IIRC there's a hypercall to request physically contiguous memory for
> >> PV, but you don't want to be using that every time you allocate a
> >> buffer (not sure it would support the sizes needed by the GPU
> >> anyway).
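
[The hypercall Roger refers to is presumably XENMEM_exchange. Below is a
minimal, hypothetical sketch of how a Linux PV driver might use it through
the kernel's xen_create_contiguous_region() wrapper; the helper name and
exact signature are assumptions from my reading of the Linux/Xen code and
may differ across kernel versions.]

    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <linux/types.h>
    #include <xen/xen-ops.h>

    static void *alloc_machine_contiguous(unsigned int order,
                                          dma_addr_t *bus_addr)
    {
            struct page *pages;
            int rc;

            /* Allocate guest-physically contiguous pages first. */
            pages = alloc_pages(GFP_KERNEL, order);
            if (!pages)
                    return NULL;

            /*
             * Ask Xen to swap the backing machine frames for a contiguous
             * extent (XENMEM_exchange under the hood).  This is the step
             * that starts failing once host memory gets fragmented.
             */
            rc = xen_create_contiguous_region(page_to_phys(pages), order,
                                              64 /* address_bits */,
                                              bus_addr);
            if (rc) {
                    __free_pages(pages, order);
                    return NULL;
            }

            return page_address(pages);
    }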
>
> > Indeed that isn't going to fly. In older Qubes versions we had PV
> > sys-net with PCI passthrough for a network card. After some uptime it
> > was basically impossible to restart it and still have enough contiguous
> > memory for the network driver, and that was for _much_ smaller buffers,
> > like 2M or 4M. At least not without shutting down a lot of other
> > things to free more memory.
>
> Ouch! That makes me wonder if all GPU drivers actually need physically
> contiguous buffers, or if it is (as I suspect) driver-specific. CCing
> Christian König who has mentioned issues in this area.
Well, GPUs don't need physically contiguous memory to function, but if they
only get 4k pages to work with, it means quite a large (up to 30%)
performance penalty.
So scattering memory like you described is probably a very bad idea if
you want any halfway decent performance.
Regards,
Christian.
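
[To make the page-size point concrete, here is an illustrative sketch, not
the actual amdgpu/TTM code, of the common pattern: try high-order
allocations first so the GPU can map large pages, and fall back to 4k pages
only when memory is too fragmented. The order constant is an assumption for
this sketch.]

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* 2 MiB chunks on x86-64; an assumption for this sketch. */
    #define MAX_BUF_ORDER 9

    static struct page *alloc_buffer_chunk(unsigned int *chunk_order)
    {
            unsigned int order = MAX_BUF_ORDER;

            for (;;) {
                    /*
                     * Prefer big chunks: fewer GPU page table entries and
                     * less TLB pressure than a sea of scattered 4k pages.
                     */
                    struct page *p = alloc_pages(GFP_KERNEL | __GFP_NOWARN,
                                                 order);

                    if (p) {
                            *chunk_order = order;
                            return p;
                    }
                    if (order == 0)
                            return NULL;    /* even 4k pages are exhausted */
                    order--;                /* fall back to smaller chunks */
            }
    }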
>
> Given the recent progress on PVH dom0, is it reasonable to assume that
> PVH dom0 will be ready in time for R4.3, and that therefore Qubes OS
> doesn't need to worry about this problem on x86?