[virglrenderer-devel] coherent memory access for virgl

Gerd Hoffmann kraxel at redhat.com
Thu Oct 11 11:04:36 UTC 2018


   Hi,

> * Presentation protocol data is passed between the guest and host with VSOCK
> (which doesn't support FD passing).
> 
> * There's a Wayland proxy in the guest and another in the host (probably in
> the same process that implements the virtio-gpu device).

Should work, yes.

> * wl_shm doesn't _need_ to be bridged across domains. Instead, the proxy in
> the guest can intercept wl_shm messages and implement it in terms of
> wl_dmabuf. Data would be copied from the SHM buffer to the dmabuf at the
> appropriate times. In many scenarios this is a worthy trade-off.

using udmabuf instead of copying might work too.  shmem file handles and
memfds are almost identical.  I'm not sure whether the memfd seal ioctls
work on shmem file handles too, though.
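
For reference, a minimal (untested) sketch of what that could look like:
create a memfd, seal it against shrinking, then ask /dev/udmabuf to wrap
the pages in a dma-buf.  The buffer name and size handling here are just
placeholders.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

/* size must be a multiple of the page size */
static int memfd_to_dmabuf(size_t size)
{
    int memfd = memfd_create("wl_shm-backing", MFD_ALLOW_SEALING);
    if (memfd < 0)
        return -1;
    ftruncate(memfd, size);

    /* udmabuf refuses memfds that can still shrink */
    fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

    int devfd = open("/dev/udmabuf", O_RDWR);
    if (devfd < 0)
        return -1;

    struct udmabuf_create create = {
        .memfd  = (uint32_t)memfd,
        .flags  = 0,
        .offset = 0,
        .size   = size,
    };
    int dmabuf = ioctl(devfd, UDMABUF_CREATE, &create);
    close(devfd);
    return dmabuf;   /* dma-buf fd on success, -1 on error */
}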

> * dmabufs for presentation refer to virtio-gpu resources in the guest, and
> in the host to BOs created by whatever graphics driver is used there (i915,
> amdgpu, etc).

Without udmabuf that would be the way to go, and we'd have to copy the data.

With udmabuf qemu can create dmabufs backed by the virtio-gpu resource,
and pass them on to the host compositor, which in turn imports those
dmabufs into the host gpu driver.
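
On the compositor side the import could be done via GBM, along these lines
(rough sketch; the buffer metadata is made up here and would really come
from the wl_dmabuf parameters):

#include <stdint.h>
#include <gbm.h>

static struct gbm_bo *import_guest_buffer(struct gbm_device *gbm,
                                          int dmabuf_fd,
                                          uint32_t width, uint32_t height,
                                          uint32_t stride, uint32_t format)
{
    struct gbm_import_fd_data data = {
        .fd     = dmabuf_fd,
        .width  = width,
        .height = height,
        .stride = stride,
        .format = format,   /* e.g. GBM_FORMAT_XRGB8888 */
    };
    /* hand the dma-buf to the host gpu driver */
    return gbm_bo_import(gbm, GBM_BO_IMPORT_FD, &data,
                         GBM_BO_USE_RENDERING);
}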

> * virtio-gpu resource IDs are placed by the guest proxy in the wl_dmabuf
> protocol stream instead of (guest) dmabuf FDs. The proxy in the host
> replaces those to the corresponding (host) dmabuf FD.

Yes.

> * When the guest maps one of such dmabufs (with gbm_bo_map), the VMM will
> map the host dmabuf (with gbm_bo_map again) and make that buffer available
> to the guest via a PCI BAR. virtio-gpu will be able to relate the new PCI
> BAR to an existing resource. Guest userspace gets an address where the PCI
> BAR was mapped and also a stride, as that's sometimes only known after the
> actual mapping has happened.

Not clear yet where we are going here, details are still being hashed out.
We'll need this for coherent mappings, but it is not clear whether it is
of much use beyond that, due to the mapping overhead.
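
For the record, the mapping step on the host side would be something like
this (sketch only; bo and size are placeholders, and the stride really is
only known once the mapping exists):

#include <stdint.h>
#include <gbm.h>

static void *map_host_bo(struct gbm_bo *bo, uint32_t width, uint32_t height,
                         uint32_t *stride, void **map_data)
{
    /* map the whole buffer; *stride is filled in by the driver */
    return gbm_bo_map(bo, 0, 0, width, height,
                      GBM_BO_TRANSFER_READ_WRITE, stride, map_data);
    /* later: gbm_bo_unmap(bo, *map_data); */
}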

> * There's a tradeoff to be made between allocating a PCI BAR per
> presentation buffer, or allocating big ones upfront and managing individual
> buffers within it.

Well, there will be one big pci bar.  virtio-gpu kms will manage it.
Coherent gem objects will be allocated from it.  Userspace can allocate
big gem objects and use them as a memory pool.

> * Some extension to the wl_dmabuf protocol may be needed to allow guests
> know what's the best way of allocating a buffer that the whole pipeline
> supports. An event will be probably needed because this information changes
> over time.

Not sure we need that in wl_dmabuf; isn't that negotiation only needed
between the guest and host proxies?  Maybe not even that.  The virtio-gpu
kms driver has an ioctl (DRM_VIRTGPU_GETPARAM) to let userspace query
capabilities.
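
For example, querying the 3d feature bit works like this today; a
coherent-memory capability would presumably get its own param value
(sketch, not tested):

#include <stdint.h>
#include <xf86drm.h>
#include <virtgpu_drm.h>

static int virtgpu_has_3d(int drm_fd)
{
    int value = 0;
    struct drm_virtgpu_getparam gp = {
        .param = VIRTGPU_PARAM_3D_FEATURES,
        /* the kernel writes the result through this user pointer */
        .value = (uint64_t)(uintptr_t)&value,
    };
    if (drmIoctl(drm_fd, DRM_IOCTL_VIRTGPU_GETPARAM, &gp) != 0)
        return 0;
    return value;
}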

cheers,
  Gerd


