[virglrenderer-devel] multiprocess model and GL

Chia-I Wu olvaffe at gmail.com
Wed Jan 22 19:02:08 UTC 2020


On Wed, Jan 22, 2020 at 2:15 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
>
>   Hi,
>
> > > Yep, that would be good, we would not need a dma-buf for each and every
> > > resource then.  Problem here is backward compatibility.  We simply can't
> > > do that without changing the virtio protocol.
> > >
> > > So, I guess the options we have are:
> > >  (1) keep virgl mostly as-is and live with the downsides (which should
> > >      not be that much of a problem as long as one process manages all
> > >      GL contexts), or
> > >  (2) create virgl_v2, where resource management works very similarly to
> > >      the vulkan way of doing things, require the guest using that to
> > >      run gl+vk side-by-side.  Old guests without vk support could
> > >      continue to use virgl_v1
> >
> > (1) still requires defining interop with vk.  (2) seems like a
> > reasonable requirement given that both drivers will be built from
> > mesa.  But there are also APIs that prefer a simple interface like
> > virgl_v1 for allocating resources yet require interop with vk.  I guess
> > both sound fine to me.
>
> I'd tend to go for (1) if we can find a reasonable model for gl/vk
> sharing.
>
> > The three resource models currently on the table are
> >
> > (A) A resource in the guest is a global driver object in the host.
> > The global driver object is usable by all contexts and qemu.
> > (B) A resource in the guest is a local driver object in the main
> > renderer process in the host.  VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE
> > creates attachments and each attachment is a local object in a
> > per-context process.  VIRTIO_GPU_CMD_SET_SCANOUT creates a local
> > object in qemu process.
> > (C) A resource in the guest is an fd in the main renderer process in
> > the host.  The fd may be created locally by the main renderer process
> > (e.g., udmabuf) or received from a per-context process.
> > VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE sends the fd to another per-context
> > process.  VIRTIO_GPU_CMD_SET_SCANOUT works similarly to how it does in (B).
>
> Can we mix (A) and (C)?  i.e. use (A) for gl objects (also dumb
> objects), effectively continuing doing what we do today.  Lazily create
> an fd if needed.  Use (C) for vk objects.  Maybe allow (C) for gl
> objects too.
>
> > (C) is the Vulkan model, but it is unclear how
> > VIRTIO_GPU_CMD_RESOURCE_CREATE_3D works.  I think we can think of the
> > main process as a simple allocator as well.
> > VIRTIO_GPU_CMD_RESOURCE_CREATE_3D makes the main process allocate
> > (from GBM or GL) and create an fd, just as the main process can
> > allocate a udmabuf.  This way the model can work with option (1).
>
> Ah, you have basically the same idea ;)
Right.  As you suggested, if we allow the fd in (C) to be initialized
lazily, we can support the classic VIRTIO_GPU_CMD_RESOURCE_CREATE_3D
in (C) without requiring an fd for every allocation.  It is also
reasonable to allow the fd to be optional in (C), which is useful when
we cannot export an fd from a GL object.  We would then miss GL->VK
sharing for such objects, which IMHO is acceptable.

To give a more concrete example, I think the main renderer process can have

  struct vrend_resource_v2 {
    int fd; // can be -1
    struct iovec *iov; // can be NULL
    struct vrend_resource *v1; // can be NULL
  };

The classic VIRTIO_GPU_CMD_RESOURCE_CREATE_3D creates a
vrend_resource_v2 with only v1!=NULL.  fd can be initialized lazily
from v1.
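
Roughly, the lazy path could look like the sketch below, where
vrend_resource_export_fd() is a made-up helper standing in for a
gbm_bo_get_fd()/EGL dma-buf export that may fail:

  /* Hypothetical; vrend_resource_export_fd() may return -1 when the
   * underlying GL object cannot be exported. */
  int vrend_resource_export_fd(struct vrend_resource *res);

  static int vrend_resource_v2_get_fd(struct vrend_resource_v2 *res)
  {
     if (res->fd >= 0)
        return res->fd;                              /* already exported */
     if (res->v1)
        res->fd = vrend_resource_export_fd(res->v1); /* lazy export */
     return res->fd;                                 /* may still be -1 */
  }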

For vk, there should be a new command to create a vrend_resource_v2
with fd==-1, iov==NULL, and v1==NULL.  fd will be initialized after a
per-context process exports a local object to the main process.
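
On the main process side, that new command could be as simple as this
sketch (the function names here are made up for illustration):

  #include <assert.h>
  #include <stdlib.h>

  /* Hypothetical handler for the new "create an empty resource" command:
   * no fd, no iov, no classic v1 object yet. */
  static struct vrend_resource_v2 *vrend_resource_v2_create_empty(void)
  {
     struct vrend_resource_v2 *res = calloc(1, sizeof(*res));
     if (res)
        res->fd = -1;    /* iov and v1 stay NULL */
     return res;
  }

  /* Called once a per-context process exports a local vk object and the
   * main process receives its fd. */
  static void vrend_resource_v2_set_fd(struct vrend_resource_v2 *res, int fd)
  {
     assert(res->fd < 0);
     res->fd = fd;
  }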

VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING sets iov.
VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE sends fd and iov to a per-context
process.
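
The fd part of that is plain SCM_RIGHTS passing over a unix socket.  A
minimal sketch, with error handling omitted and the message framing
made up:

  #include <stdint.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  /* Send one fd to a per-context process, tagged with the resource id.
   * The iov only describes guest pages, so it would be passed separately
   * (e.g. as offsets into shared guest memory). */
  static int send_resource_fd(int sock, uint32_t res_id, int fd)
  {
     struct iovec iov = { .iov_base = &res_id, .iov_len = sizeof(res_id) };
     union {
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
     } u;
     struct msghdr msg = {
        .msg_iov = &iov,
        .msg_iovlen = 1,
        .msg_control = u.buf,
        .msg_controllen = sizeof(u.buf),
     };
     struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

     cmsg->cmsg_level = SOL_SOCKET;
     cmsg->cmsg_type = SCM_RIGHTS;
     cmsg->cmsg_len = CMSG_LEN(sizeof(int));
     memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

     return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
  }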

It is worth noting that the main renderer process does not care how a
per-context process exports or imports fds.  vk uses explicit
EXECBUFFER for both and generates object ids, but they are merely a
contract between the per-context process and the guest userspace.
That contract can change without involving the main renderer process,
the guest kernel, or the virtio protocol.  gl can likewise choose to
do things its own way; a gl per-context process can, for example,
export/import implicitly and use resource ids directly.

Other kinds of allocations can follow vk's convention.  For
example, after an empty vrend_resource_v2 is created and an iov is
attached, there can be a command to create a udmabuf and initialize
fd.  But that takes three commands.  It might make sense to have a
more sophisticated command that creates vrend_resource_v2, initializes
iov, and creates udmabuf.
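
For reference, the udmabuf step itself would boil down to something
like the following, assuming the guest pages behind the iov live in a
sealed memfd (as with qemu's memfd-backed guest memory); the
iov_to_memfd_offset() lookup below is made up:

  #include <fcntl.h>
  #include <stdint.h>
  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <sys/uio.h>
  #include <unistd.h>
  #include <linux/udmabuf.h>

  /* Hypothetical: translate a guest address from the iov into an offset
   * into the memfd backing guest memory. */
  uint64_t iov_to_memfd_offset(const void *iov_base);

  /* Build one udmabuf (a dma-buf fd) over the guest pages described by
   * the iov.  The memfd must carry F_SEAL_SHRINK and the entries must be
   * page-aligned. */
  static int create_udmabuf_for_iov(int memfd, const struct iovec *iov,
                                    unsigned int count)
  {
     struct udmabuf_create_list *list;
     int devfd, buffd;
     unsigned int i;

     list = calloc(1, sizeof(*list) + count * sizeof(list->list[0]));
     if (!list)
        return -1;
     list->count = count;
     for (i = 0; i < count; i++) {
        list->list[i].memfd = memfd;
        list->list[i].offset = iov_to_memfd_offset(iov[i].iov_base);
        list->list[i].size = iov[i].iov_len;
     }

     devfd = open("/dev/udmabuf", O_RDWR);
     buffd = devfd >= 0 ? ioctl(devfd, UDMABUF_CREATE_LIST, list) : -1;
     if (devfd >= 0)
        close(devfd);
     free(list);
     return buffd;    /* dma-buf fd on success, -1 on error */
  }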

> > > Does GL have object IDs too?
> > No.  A resource in the guest is already a global GL object in the
> > host.  VIRTIO_GPU_CMD_SUBMIT_3D can use the resource ids directly.
>
> So we need to handle vk and gl resources in different ways, or figure out
> something which works for both.
>
> For vk the idea is to first create the object (via execbuffer), then
> create (if needed) a resource for it.
>
> For gl we can't do that because we don't have object handles.  Most
> cases can probably be handled by using a 1:1 mapping between
> object-id and resource-id for gl objects.  Except id allocation.
> object-id is unique per context and can be allocated by userspace,
> whereas resource-id is global and must be allocated by the kernel.
>
> Hmm ...
>
> cheers,
>   Gerd
>

