[virglrenderer-devel] coherent memory access for virgl

Dave Airlie airlied at gmail.com
Tue Sep 25 23:27:33 UTC 2018


On Tue, 25 Sep 2018 at 19:10, Gerd Hoffmann <kraxel at redhat.com> wrote:
>
>   Hi,
>
> > > Who will do the actual allocations?  I expect we need new virglrenderer
> > > functions for that?
> >
> > The decision to back memory via iovecs or host memory is up to the
> > VMM.
>
> What exactly do you mean with "via iovecs"?  The current way to allocate
> resources?  They are guest-allocated and the iovecs passed to
> virglrenderer point into guest memory.  So that clearly is *not* in the
> hands of the VMM.  Or do you mean something else?
>
>
> To make sure we all are on the same page wrt. resource allocation, the
> workflow we have now looks like this:
>
>   (1) guest virtio-gpu driver allocates resource.  Uses normal (guest) ram.
>       Resources can be scattered.
>   (2) guest driver creates resources (RESOURCE_CREATE_*).
>   (3) qemu (virgl=off) or virglrenderer (virgl=on) creates host resource.
>       virglrenderer might use a different format (tiling, ...).
>   (4) guest sets up backing storage (RESOURCE_ATTACH_BACKING).
>   (5) qemu creates an iovec for the guest resource.
>   (6) guest writes data to resource.
>   (7) guest requests a transfer (TRANSFER_TO_HOST_*).
>   (8) qemu or virglrenderer copy data from guest resource to
>       host resource, possibly converting (again tiling, ...).
>   (9) guest can use the resource now ...
>
>
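To keep the steps concrete: from the guest driver's side, (2), (4) and (7)
roughly map onto the uapi structs like below.  submit_ctrl() is just a
made-up stand-in for "put this command on the control virtqueue and wait",
and the cpu_to_le32() conversions are dropped to keep it short.

#include <stddef.h>
#include <stdint.h>
#include <linux/virtio_gpu.h>

/* Placeholder only: queue a command on the control virtqueue and wait
 * for the response.  Not a real function, just glue for the example. */
void submit_ctrl(const void *cmd, size_t len);

static void create_fill_and_transfer(uint32_t res_id,
                                     uint64_t backing_gpa,
                                     uint32_t backing_len)
{
    /* (2) RESOURCE_CREATE_2D: host side does the bookkeeping (virgl=off)
     *     or creates a host resource via virglrenderer (virgl=on). */
    struct virtio_gpu_resource_create_2d create = {
        .hdr.type    = VIRTIO_GPU_CMD_RESOURCE_CREATE_2D,
        .resource_id = res_id,
        .format      = VIRTIO_GPU_FORMAT_B8G8R8A8_UNORM,
        .width       = 1024,
        .height      = 768,
    };
    submit_ctrl(&create, sizeof(create));

    /* (4) RESOURCE_ATTACH_BACKING: hand the (possibly scattered) guest
     *     pages to the host; qemu builds the iovec of step (5) from
     *     these entries.  One entry shown for simplicity. */
    struct {
        struct virtio_gpu_resource_attach_backing ab;
        struct virtio_gpu_mem_entry entry;
    } backing = {
        .ab.hdr.type    = VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING,
        .ab.resource_id = res_id,
        .ab.nr_entries  = 1,
        .entry.addr     = backing_gpa,
        .entry.length   = backing_len,
    };
    submit_ctrl(&backing, sizeof(backing));

    /* (7) TRANSFER_TO_HOST_2D: ask the host to copy (and possibly
     *     convert/tile) from the guest pages into the host resource. */
    struct virtio_gpu_transfer_to_host_2d xfer = {
        .hdr.type    = VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D,
        .r           = { .width = 1024, .height = 768 },
        .resource_id = res_id,
    };
    submit_ctrl(&xfer, sizeof(xfer));
}
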
> One thing I'm prototyping right now is zerocopy resources, the workflow
> changes to look like this:
>
>   (2) guest additionally sets a flag to request a zerocopy buffer.
>   (3) not needed (well, the bookkeeping part of it is still needed, but
>       it would *not* allocate a host resource).
>   (5) qemu additionally creates a host dma-buf for the guest resource
>       using the udmabuf driver.
>   (7+8) not needed.
>
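The new step (5) boils down to a UDMABUF_CREATE ioctl on the host,
something along these lines (sketch, error handling trimmed):

#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/udmabuf.h>

/* guest_memfd/offset/size stand in for whatever qemu derives from the
 * ATTACH_BACKING entries; offset and size must be page aligned, and the
 * guest RAM has to be memfd-backed for this to work at all. */
static int create_host_dmabuf(int guest_memfd, uint64_t offset, uint64_t size)
{
    struct udmabuf_create create = {
        .memfd  = guest_memfd,
        .flags  = UDMABUF_FLAGS_CLOEXEC,
        .offset = offset,
        .size   = size,
    };

    int devfd = open("/dev/udmabuf", O_RDWR);
    if (devfd < 0)
        return -1;

    /* On success this returns a dma-buf fd which the host GPU driver
     * can import like any other dma-buf. */
    int dmabuf_fd = ioctl(devfd, UDMABUF_CREATE, &create);
    close(devfd);
    return dmabuf_fd;
}
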
> Right now I have (not tested yet) code to handle dumb buffers.
> Interfacing to guest userspace (virtio-gpu driver ioctls) is not
> there yet.  Interfacing with virglrenderer isn't there yet either.
>
> I expect that doesn't solve the coherent mapping issue.  The host gpu
> could import the dma-buf of the resource, but as it has no control over
> the allocation it might not be able to use it without copying.
>
>
> I'm not sure what the API for coherent resources should look like.
> One option I see is yet another resource flag, so the workflow would
> change like this (with virgl=on only ...):
>
>   (2) guest additionally sets a flag to request a coherent resource.
>   (3) virglrenderer would create a coherent host resource.
>   (4) guest finds some address space in the (new) pci bar and asks
>       for the resource being mapped there (new command needed for
>       this).
>   (5) qemu maps the coherent resource into the pci bar.
>   (7+8) not needed.
>
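Just to make the "new command" in step (4) concrete, I'd imagine something
vaguely like this (name and layout completely made up, nothing like this
exists in the protocol today):

/* Hypothetical command: map an already-created coherent resource at a
 * guest-chosen offset in the new pci bar. */
struct virtio_gpu_resource_map_coherent {
    struct virtio_gpu_ctrl_hdr hdr;   /* existing control header */
    __le32 resource_id;               /* coherent resource to map */
    __le32 padding;
    __le64 bar_offset;                /* guest-chosen offset into the new pci bar */
};
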
> Probably works for the GL_MAP_COHERENT_BIT use case. Dunno about Vulkan.

Yeah I think this is how I see it working as well.
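
For the GL_MAP_COHERENT_BIT case the guest app side would just be the
standard persistent+coherent mapping, e.g. (fragment, assumes a current
GL 4.4 context with the usual loader in place; nothing virgl specific):

GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);

GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                   GL_MAP_COHERENT_BIT;
glBufferStorage(GL_ARRAY_BUFFER, 16 * 1024 * 1024, NULL, flags);

/* With the scheme above this mapping would end up pointing into the new
 * pci bar, so writes land in the host allocation directly and no
 * TRANSFER_TO_HOST is needed while the mapping is live. */
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, 16 * 1024 * 1024, flags);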

> > A related question: are we going to also expose host memory to the
> > guest for the non-{GL_MAP_COHERENT_BIT,
> > VK_MEMORY_PROPERTY_HOST_COHERENT_BIT} cases?

Probably want to avoid it in the common case, just due to the PCI BAR
sizing, I suppose.

Like if we can get an 8GB 64-bit BAR then maybe we can get away with it.

From a Vulkan point of view, apps are meant to manage their own memory,
so they will likely allocate coherent memory in large chunks (like 16MB),
but an app could in theory ask for a single maximal allocation as well
(though that can fail).
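
Roughly, the Vulkan app side of that looks like this (sketch; device and
the memory type index come from the usual enumeration):

#include <vulkan/vulkan.h>

/* coherent_type_index would come from scanning
 * vkGetPhysicalDeviceMemoryProperties() for a type with
 * HOST_VISIBLE | HOST_COHERENT set. */
static void *map_coherent_chunk(VkDevice device, uint32_t coherent_type_index,
                                VkDeviceMemory *out_mem)
{
    VkMemoryAllocateInfo info = {
        .sType           = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
        .allocationSize  = 16 * 1024 * 1024,  /* one 16MB chunk, sub-allocated by the app */
        .memoryTypeIndex = coherent_type_index,
    };

    if (vkAllocateMemory(device, &info, NULL, out_mem) != VK_SUCCESS)
        return NULL;

    void *ptr = NULL;
    vkMapMemory(device, *out_mem, 0, VK_WHOLE_SIZE, 0, &ptr);
    /* HOST_COHERENT means writes through ptr are visible to the device
     * without vkFlushMappedMemoryRanges(). */
    return ptr;
}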

Dave.

