[virglrenderer-devel] coherent memory access for virgl

Gerd Hoffmann kraxel at redhat.com
Tue Sep 25 09:09:58 UTC 2018


  Hi,

> > Who will do the actual allocations?  I expect we need new virglrenderer
> > functions for that?
> 
> The decision to back memory via iovecs or host memory is up to the
> VMM.

What exactly do you mean by "via iovecs"?  The current way to allocate
resources?  They are guest-allocated and the iovecs passed to
virglrenderer point into guest memory.  So that clearly is *not* in the
hands of the VMM.  Or do you mean something else?


To make sure we are all on the same page wrt. resource allocation, the
workflow we have now looks like this (the command structs involved in
steps 2 and 4 are quoted after the list):

  (1) guest virtio-gpu driver allocates resource.  Uses normal (guest) ram.
      Resources can be scattered.
  (2) guest driver creates resources (RESOURCE_CREATE_*).
  (3) qemu (virgl=off) or virglrenderer (virgl=on) creates host resource.
      virglrenderer might use a different format (tiling, ...).
  (4) guest sets up backing storage (RESOURCE_ATTACH_BACKING).
  (5) qemu creates an iovec for the guest resource.
  (6) guest writes data to resource.
  (7) guest requests a transfer (TRANSFER_TO_HOST_*).
  (8) qemu or virglrenderer copies data from the guest resource to the
      host resource, possibly converting (again tiling, ...).
  (9) guest can use the resource now ...
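
For reference, these are the commands used in steps (2) and (4), as
defined in include/uapi/linux/virtio_gpu.h (2D variant shown, the 3D
create command carries more parameters):

  struct virtio_gpu_resource_create_2d {
          struct virtio_gpu_ctrl_hdr hdr;  /* VIRTIO_GPU_CMD_RESOURCE_CREATE_2D */
          __le32 resource_id;              /* guest-chosen id */
          __le32 format;
          __le32 width;
          __le32 height;
  };

  struct virtio_gpu_mem_entry {            /* one scatter entry */
          __le64 addr;                     /* guest physical address */
          __le32 length;
          __le32 padding;
  };

  struct virtio_gpu_resource_attach_backing {
          struct virtio_gpu_ctrl_hdr hdr;  /* VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING */
          __le32 resource_id;
          __le32 nr_entries;               /* mem_entry array follows */
  };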


One thing I'm prototyping right now is zerocopy resources.  The
workflow changes to look like this:

  (2) guest additionally sets a flag to request a zerocopy buffer.
  (3) not needed (well, the bookkeeping part of it is still needed, but
      it would *not* allocate a host resource).
  (5) qemu additionally creates a host dma-buf for the guest resource
      using the udmabuf driver (see the sketch after the list).
  (7+8) not needed.
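
A minimal sketch of the new step (5), assuming guest ram is backed by
a sealed memfd (memory-backend-memfd in qemu) and the resource happens
to be contiguous; a scattered resource would use UDMABUF_CREATE_LIST
instead.  The helper name is mine, error handling is omitted:

  #include <fcntl.h>
  #include <unistd.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/udmabuf.h>

  static int create_host_dmabuf(int ram_memfd, uint64_t offset,
                                uint64_t size)
  {
      struct udmabuf_create create = {
          .memfd  = ram_memfd,
          .flags  = UDMABUF_FLAGS_CLOEXEC,
          .offset = offset,                /* page aligned */
          .size   = size,                  /* page aligned */
      };
      int devfd = open("/dev/udmabuf", O_RDWR);
      int buffd = ioctl(devfd, UDMABUF_CREATE, &create);

      close(devfd);
      return buffd;                        /* the dma-buf fd */
  }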

Right now I have (not yet tested) code to handle dumb buffers.
Interfacing with guest userspace (virtio-gpu driver ioctls) is not
there yet, and neither is interfacing with virglrenderer.

I expect that doesn't solve the coherent mapping issue though.  The
host gpu could import the dma-buf of the resource (see the sketch
below), but as it has no control over the allocation it might not be
able to use it without copying.
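
For illustration, the usual way for the host side to import such a
dma-buf is EGL_EXT_image_dma_buf_import.  In this sketch dpy,
dmabuf_fd, width, height and stride are assumed to be in scope, a
linear XRGB8888 layout is assumed (which is exactly the problem: the
host driver never got to choose the layout), and the EGL/eglext.h,
GLES2/gl2ext.h and drm_fourcc.h headers are needed:

  EGLint attrs[] = {
      EGL_WIDTH,                     width,
      EGL_HEIGHT,                    height,
      EGL_LINUX_DRM_FOURCC_EXT,      DRM_FORMAT_XRGB8888,
      EGL_DMA_BUF_PLANE0_FD_EXT,     dmabuf_fd,
      EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
      EGL_DMA_BUF_PLANE0_PITCH_EXT,  stride,
      EGL_NONE
  };
  EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                        EGL_LINUX_DMA_BUF_EXT,
                                        (EGLClientBuffer)NULL, attrs);

  /* Bind the image to a texture.  Whether this ends up zerocopy
     is up to the driver -- it may have to blit the linear buffer
     into its preferred tiling format. */
  glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);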


I'm not sure what the API for coherent resources should look like.
One option I see is yet another resource flag, so the workflow would
change like this (with virgl=on only ...):

  (2) guest additionally sets a flag to request a coherent resource.
  (3) virglrenderer would create a coherent host resource.
  (4) guest finds some address space in the (new) pci bar and asks
      for the resource to be mapped there (new command needed for
      this, sketched after the list).
  (5) qemu maps the coherent resource into the pci bar.
  (7+8) not needed.
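
A purely hypothetical sketch of that new map command; nothing like it
exists yet, name and layout are made up for illustration:

  struct virtio_gpu_resource_map {         /* hypothetical */
          struct virtio_gpu_ctrl_hdr hdr;  /* e.g. VIRTIO_GPU_CMD_RESOURCE_MAP */
          __le32 resource_id;
          __le32 padding;
          __le64 offset;                   /* guest-chosen offset into
                                              the coherent pci bar */
  };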

That probably works for the GL_MAP_COHERENT_BIT use case.  Dunno about
vulkan.
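
For context, the use case in question is a persistently mapped buffer
(standard GL 4.4 / ARB_buffer_storage; size assumed in scope):

  GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                     GL_MAP_COHERENT_BIT;
  GLuint buf;

  glGenBuffers(1, &buf);
  glBindBuffer(GL_ARRAY_BUFFER, buf);
  glBufferStorage(GL_ARRAY_BUFFER, size, NULL, flags);
  void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);

  /* Writes through ptr must become visible to the gpu without an
     explicit flush -- which only works if the guest mapping really
     is coherent with the host resource. */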

Interfaces to guest userspace and virglrenderer likewise need updates
to support this.


> A related question: are we going to also expose host memory to the
> guest for the non-{GL_MAP_COHERENT_BIT,
> VK_MEMORY_PROPERTY_HOST_COHERENT_BIT} cases?

The guest should be able to do that, yes.  In case both coherent and
zerocopy resources are supported by the host it can even pick between
the two.

Coherent resources will be limited though (pci bar size, also because
we don't want to allow guests to allocate unlimited host memory for
security reasons), so using them for everything is probably not a good
idea.

cheers,
  Gerd


