Design session notes: GPU acceleration in Xen

Demi Marie Obenour demi at invisiblethingslab.com
Fri Jun 14 16:35:56 UTC 2024


On Fri, Jun 14, 2024 at 08:38:51AM +0200, Jan Beulich wrote:
> On 13.06.2024 20:43, Demi Marie Obenour wrote:
> > GPU acceleration requires that it be possible to map pageable host
> > memory into a guest.
> 
> I'm sure it was explained in the session, which sadly I couldn't attend.
> I've been asking Ray and Xenia the same before, but I'm afraid it still
> hasn't become clear to me why this is a _requirement_. After all that's
> against what we're doing elsewhere (i.e. so far it has always been
> guest memory that's mapped in the host). I can appreciate that it might
> be more difficult to implement, but avoiding a violation of this
> fundamental (kind of) rule might be worth the price (and would avoid
> other complexities, of which more may be lurking than what you
> enumerate below).

My understanding is:

- Discrete GPUs require the memory to be VRAM, rather than system RAM.
- Various APIs require dmabufs.  Xen's support for dmabufs doesn't work
  with PV dom0.
- The existing virtio-GPU protocol (which is not Xen-specific and so
  gets more testing and has broader support than anything that _is_
  Xen-specific) requires backend allocation for native contexts.
- There might be other issues (caching?  memory management?) involved.

I'm CCing dri-devel in hopes of getting a better response.

> >  This requires changes to all of the Xen hypervisor, Linux
> > kernel, and userspace device model.
> > 
> > ### Goals
> > 
> >  - Allow any userspace pages to be mapped into a guest.
> >  - Support deprivileged operation: this API must not be usable for privilege escalation.
> >  - Use MMU notifiers to ensure safety with respect to use-after-free.
> > 
> > ### Hypervisor changes
> > 
> > There are at least two Xen changes required:
> > 
> > 1. Add a new flag to IOREQ that means "retry this instruction".
> > 
> >    An IOREQ server can set this flag after having successfully handled a
> >    page fault.  It is expected that the IOREQ server has successfully
> >    mapped a page into the guest at the location of the fault.
> >    Otherwise, the same fault will likely happen again.
> 
> Were there any thoughts on how to prevent this becoming an infinite loop?
> I.e. how to (a) guarantee forward progress in the guest and (b) deal with
> misbehaving IOREQ servers?

Guaranteeing forward progress is up to the IOREQ server.  If the IOREQ
server misbehaves, an infinite retry loop is possible, but the CPU time
the loop consumes should be charged to the IOREQ server, so this isn't
a vulnerability.
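
To make the intended control flow concrete, here is a minimal sketch of
the fault-handling path in an IOREQ server under this proposal.
IOREQ_FLAG_RETRY and map_page_at() are placeholders: the former is the
proposed flag (its exact ABI representation is still open), the latter
stands in for whatever mechanism the server uses to map a page at the
faulting address.

    /*
     * Sketch only; not existing Xen code.  Assumes the public ioreq
     * definitions (ioreq_t, STATE_IORESP_READY) are in scope.
     */
    static void handle_fault_ioreq(ioreq_t *req)
    {
        if ( map_page_at(req->addr) == 0 )
        {
            /*
             * A page now backs the faulting guest address, so set the
             * proposed flag asking Xen to re-execute the instruction
             * instead of completing the access through emulation.
             */
            req->flags |= IOREQ_FLAG_RETRY;
        }
        /*
         * If mapping failed, complete the request normally so the
         * guest does not spin forever on an address that cannot be
         * backed.
         */
        req->state = STATE_IORESP_READY;
    }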

> > 2. Add support for `XEN_DOMCTL_memory_mapping` to use system RAM, not
> >    just IOMEM.  Mappings made with `XEN_DOMCTL_memory_mapping` are
> >    guaranteed to be revocable with `XEN_DOMCTL_memory_mapping`, so
> >    all operations that would create extra references to the mapped
> >    memory must be forbidden.  These include, but may not be limited
> >    to:
> > 
> >    1. Granting the pages to the same or other domains.
> >    2. Mapping into another domain using `XEN_DOMCTL_memory_mapping`.
> >    3. Another domain accessing the pages using the foreign memory APIs,
> >       unless it is privileged over the domain that owns the pages.
> 
> All of which may call for actually converting the memory to kind-of-MMIO,
> with a means to later convert it back.

Would this support the case where the mapping domain is not fully
privileged, and where it might be a PV guest?
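
For reference, the toolstack-side wrapper around this DOMCTL today is
xc_domain_memory_mapping(), and it is limited to IOMEM ranges.  A rough
sketch of how the extended interface might be driven, on the assumption
(mine, not a settled design) that the same entry point simply starts
accepting RAM-backed MFNs:

    #include <xenctrl.h>

    /* Sketch: today this call fails for MFNs backing system RAM. */
    static int map_ram_into_guest(xc_interface *xch, uint32_t domid,
                                  unsigned long gfn, unsigned long mfn,
                                  unsigned long nr_pages)
    {
        return xc_domain_memory_mapping(xch, domid, gfn, mfn, nr_pages,
                                        DPCI_ADD_MAPPING);
    }

    /* Revocation must always succeed, per the constraint quoted above. */
    static int unmap_ram_from_guest(xc_interface *xch, uint32_t domid,
                                    unsigned long gfn, unsigned long mfn,
                                    unsigned long nr_pages)
    {
        return xc_domain_memory_mapping(xch, domid, gfn, mfn, nr_pages,
                                        DPCI_REMOVE_MAPPING);
    }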

> Jan
> 
> >    Open question: what if the other domain goes away?  Ideally,
> >    unmapping would (vacuously) succeed in this case.  Qubes OS doesn't
> >    care about domid reuse but others might.
> > 
> > ### Kernel changes
> > 
> > Linux will add support for mapping userspace memory into an emulated PCI
> > BAR.  This requires Linux to automatically revoke access when needed.
> > 
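As an aside on the revocation requirement: on the Linux side this
presumably hinges on MMU notifiers, per the goals above.  A kernel-side
sketch of roughly what that wiring could look like follows; struct
xen_reg and revoke_guest_mapping() are hypothetical names, not existing
code.

    #include <linux/types.h>
    #include <linux/mmu_notifier.h>

    /* Hypothetical per-registration state. */
    struct xen_reg {
        struct mmu_interval_notifier notifier;
        u64 gpa;
        u64 size;
        u32 guest_domid;
    };

    static bool xen_reg_invalidate(struct mmu_interval_notifier *mni,
                                   const struct mmu_notifier_range *range,
                                   unsigned long cur_seq)
    {
        struct xen_reg *reg = container_of(mni, struct xen_reg, notifier);

        mmu_interval_set_seq(mni, cur_seq);
        /*
         * revoke_guest_mapping() is a placeholder for whatever tears
         * down the guest's view of these pages (e.g. the RAM-capable
         * XEN_DOMCTL_memory_mapping removal described above) before
         * the host pages are allowed to move or be freed.
         */
        revoke_guest_mapping(reg->guest_domid, reg->gpa, reg->size);
        return true;
    }

    static const struct mmu_interval_notifier_ops xen_reg_ops = {
        .invalidate = xen_reg_invalidate,
    };
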
> > There will be an IOREQ server that handles page faults.  The discussion
> > assumed that this handling will happen in kernel mode, but if handling
> > in user mode is simpler, that is also an option.
> > 
> > There is no async #PF in Xen (yet), so the entire vCPU will be blocked
> > while the fault is handled.  This is not great for performance, but
> > correctness comes first.
> > 
> > There will be a new kernel ioctl to perform the mapping.  A possible C
> > prototype (presented at the design session, but not discussed there):
> > 
> >     struct xen_linux_register_memory {
> >         uint64_t pointer;
> >         uint64_t size;
> >         uint64_t gpa;
> >         uint32_t id;
> >         uint32_t guest_domid;
> >     };
> 
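To illustrate how a device model might use the proposed ioctl, here is
a rough userspace sketch.  The request number XEN_LINUX_REGISTER_MEMORY
and the per-field comments are my assumptions; only the structure
layout itself comes from the session notes.

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/ioctl.h>

    struct xen_linux_register_memory {
        uint64_t pointer;      /* userspace VA of the pages to expose */
        uint64_t size;         /* length of the region in bytes */
        uint64_t gpa;          /* guest physical address within the BAR */
        uint32_t id;           /* handle for later revocation/unmap */
        uint32_t guest_domid;  /* domain the pages are mapped into */
    };

    /* Placeholder request number; the real one does not exist yet. */
    #define XEN_LINUX_REGISTER_MEMORY \
        _IOW('X', 0, struct xen_linux_register_memory)

    static int expose_buffer(int xen_fd, void *buf, size_t len,
                             uint64_t gpa, uint32_t id, uint32_t domid)
    {
        struct xen_linux_register_memory reg = {
            .pointer     = (uintptr_t)buf,
            .size        = len,
            .gpa         = gpa,
            .id          = id,
            .guest_domid = domid,
        };

        /*
         * The kernel side is expected to track these pages with MMU
         * notifiers and revoke the guest mapping if they are unmapped
         * or migrated, per the safety goal above.
         */
        return ioctl(xen_fd, XEN_LINUX_REGISTER_MEMORY, &reg);
    }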

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
