[virglrenderer-devel] multiprocess model and GL
Gurchetan Singh
gurchetansingh at chromium.org
Fri Jan 17 18:13:17 UTC 2020
On Thu, Jan 16, 2020 at 4:59 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
>
> On Mon, Jan 13, 2020 at 01:03:22PM -0800, Chia-I Wu wrote:
> > Sorry I missed this email.
> >
> > On Thu, Jan 9, 2020 at 12:54 PM Dave Airlie <airlied at gmail.com> wrote:
> > >
> > > This is just an aside to the issue discussion, but I was wondering
> > > before we heavily consider a vulkan multi-process model, could
> > > we/should we prove a GL multi-process model first?
> >
> > I think there will just be the qemu process at first, with many GL
> > contexts and one VkInstance for each guest VkInstance. Then there
> > will be a switch to run different VkInstances in different processes
> > (unlikely, but I wonder if this can be a vk layer). I did not plan
> > for a multi-process GL model. Do you have anything you want to prove
> > from it?
>
> Right now we have two models:
> - Everything in qemu.
> - Separate virgl process (see contrib/vhost-user-gpu/ in qemu),
> but still all gl contexts in a single process.
>
> We could try to switch vhost-user-gpu to a one-process-per-context
> model. I think it makes sense to at least think about the resource
> management implications this would have (it would make virgl work
> similarly to vulkan):
>
> - We would need a master process. It runs the virtqueues and manages
> the resources.
> - We would need a per-context process.
> - VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE would make the master export a
>   resource as a dma-buf and the per-context process import it. Sharing
>   resources works by calling VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE multiple
>   times for different contexts, with the master passing the dma-buf to
>   multiple per-context processes.
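The handoff here would presumably be plain SCM_RIGHTS fd passing over the
master / per-context socket. A minimal sketch of the master side, with the
message framing made up:

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

/* Master side of CTX_ATTACH_RESOURCE: pass the dma-buf fd backing
 * res_id to one per-context process.  Calling this once per context is
 * what fans a shared resource out to multiple processes. */
static int send_resource_fd(int ctx_sock, uint32_t res_id, int dmabuf_fd)
{
    union {
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } u;
    struct iovec iov = { .iov_base = &res_id, .iov_len = sizeof(res_id) };
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
    };
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;          /* kernel dups the fd */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &dmabuf_fd, sizeof(int));

    return sendmsg(ctx_sock, &msg, 0) < 0 ? -1 : 0;
}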
>
> With vulkan, resources would be created by the per-context process
> instead, then exported and imported by the master process. The master
> process can then allow guest access (TRANSFER virtio commands, map into
> the guest address space, ...). It could also pass the resource on to
> other per-context processes, so they can import it too, for sharing.
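On the vulkan side, the export in the per-context process and the import
in the master would presumably go through VK_KHR_external_memory_fd.
Roughly (the helper names are made up, and allocationSize /
memoryTypeIndex must come from the exporter's creation-time metadata):

#include <vulkan/vulkan.h>

/* Per-context process: export the allocation backing a resource as an
 * opaque fd the master can import and forward. */
static int export_memory_fd(VkDevice dev, VkDeviceMemory mem)
{
    PFN_vkGetMemoryFdKHR get_fd =
        (PFN_vkGetMemoryFdKHR)vkGetDeviceProcAddr(dev, "vkGetMemoryFdKHR");
    VkMemoryGetFdInfoKHR info = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_GET_FD_INFO_KHR,
        .memory = mem,
        .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
    };
    int fd = -1;

    if (!get_fd)
        return -1;
    return get_fd(dev, &info, &fd) == VK_SUCCESS ? fd : -1;
}

/* Master process: import the fd.  size and memory_index must match the
 * exporter's allocation, which is exactly the creation-time metadata
 * problem discussed below. */
static VkDeviceMemory import_memory_fd(VkDevice dev, int fd,
                                       VkDeviceSize size,
                                       uint32_t memory_index)
{
    VkImportMemoryFdInfoKHR import = {
        .sType = VK_STRUCTURE_TYPE_IMPORT_MEMORY_FD_INFO_KHR,
        .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
        .fd = fd,
    };
    VkMemoryAllocateInfo alloc = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
        .pNext = &import,
        .allocationSize = size,
        .memoryTypeIndex = memory_index,
    };
    VkDeviceMemory mem = VK_NULL_HANDLE;

    vkAllocateMemory(dev, &alloc, NULL, &mem);
    return mem;
}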
>
> This also makes it pretty obvious how sharing between virgl and vulkan
> would work: Just use VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE + dma-bufs ...
tl;dr: VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE + opaque_fds should work; it
really depends on the guest/host protocol we decide on.

For multi-process virgl, the import/export problem isn't very tough: we
just need to know the creation-time metadata.
  virgl_renderer_export(int32_t res_id, int32_t *out_fd,
                        void **metadata, uint32_t *metadata_size)

  virgl_renderer_import(int32_t res_id, int32_t fd,
                        void *metadata, uint32_t metadata_size)
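So, hypothetically, sharing between two renderer processes becomes (the
entry points are the proposal above; send_to_peer() stands in for
whatever fd/metadata transport the processes use, e.g. SCM_RIGHTS):

/* Exporting process: the fd and the creation-time metadata blob travel
 * together to the importer. */
static void share_resource(int32_t res_id)
{
    int32_t fd;
    void *metadata;
    uint32_t metadata_size;

    virgl_renderer_export(res_id, &fd, &metadata, &metadata_size);
    send_to_peer(fd, metadata, metadata_size);
}

/* Importing process: the metadata is enough to rebuild an equivalent
 * resource around the incoming fd. */
static void adopt_resource(int32_t res_id, int32_t fd,
                           void *metadata, uint32_t metadata_size)
{
    virgl_renderer_import(res_id, fd, metadata, metadata_size);
}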
Even when Gallium imports a dma-buf as three R8 images, the creation-time
metadata, plus the fact that we control guest userspace, lets us recover
the right EGL images:
https://gitlab.freedesktop.org/virgl/virglrenderer/blob/master/src/virgl_egl_context.c#L462
For Vulkan, we can track the metadata in the guest with
DMA_BUF_SET_NAME / DMA_BUF_GET_NAME. If, on import, we detect a change to
the creation-time metadata in the guest (i.e., an import into a different
memory index), we can just fire off an EXECBUFFER command (something like
MODIFY_INCOMING_IMPORT). Similarly, an EXECBUFFER could carry a
MODIFY_INCOMING_EXPORT.

I suspect, though, that the metadata will be relatively constant (as in
GL) ...
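For reference, the guest-side tagging could be as simple as this
(DMA_BUF_SET_NAME is a real ioctl from linux/dma-buf.h; packing the
memory index into the name is just an illustration):

#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Guest side: tag the dma-buf with its creation-time memory index, so a
 * later importer can detect a mismatch and fire MODIFY_INCOMING_IMPORT. */
static int tag_dmabuf(int dmabuf_fd, uint32_t memory_index)
{
    char name[32];

    snprintf(name, sizeof(name), "virtgpu:memidx=%u", memory_index);
    return ioctl(dmabuf_fd, DMA_BUF_SET_NAME, name);
}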
We will have to send the metadata to a "virtio-gpu" aware mapping
library running in the KVM process so it can call vkMapMemory. And if we
allow import into a different memory index in the guest, will we also
allow mappings with different caching attributes in the guest?
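The vkMapMemory call itself would look roughly like this (the library
entry point is made up; vkMapMemory and its signature are not):

#include <vulkan/vulkan.h>

/* Made-up entry point in the "virtio-gpu" aware mapping library: given
 * the imported allocation plus guest-supplied offset/size metadata,
 * produce a host pointer the KVM process can expose to the guest. */
static void *virtgpu_map_resource(VkDevice dev, VkDeviceMemory mem,
                                  VkDeviceSize offset, VkDeviceSize size)
{
    void *ptr = NULL;

    if (vkMapMemory(dev, mem, offset, size, 0, &ptr) != VK_SUCCESS)
        return NULL;
    return ptr;  /* then mapped into the guest address space */
}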
> cheers,
> Gerd
>