[virglrenderer-devel] multiprocess model and GL

Gurchetan Singh gurchetansingh at chromium.org
Thu Jan 30 01:26:54 UTC 2020


On Wed, Jan 29, 2020 at 12:46 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
>
> > > > > Not fully sure about that.  I had the impression that Gurchetan Singh
> > > > > wants gbm allocations in gl/vk contexts too.
> > > > We have execbuffer to allocate and export gl or vk objects in the
> > > > host.  The guest gbm can use them.
> > > >
> > > > I think he is looking for a simpler path that the guest gbm can use,
> > > > and the host can support using any (gl, vk, or gbm) driver.  Having a
> > > > third kind of context with a super simple wire format for execbuffer
> > > > satisfies the needs.
> > >
> > > Gurchetan Singh, care to clarify?
> >
> > Yes, I want a generalized allocation ioctl which is versioned.  I'm
> > fine with ALLOC_BLOB allocating host VK or GL resources, and I think
> > it's weird if it supported TYPED/DUMB but not that.
>
> Question is do you need the generalized allocation ioctl *for vk/gl
> contexts*?  Or can that be a separate context?
>
> We'll have to introduce different kinds of contexts when adding vulkan
> support.  We could add another generic gpu context, then pick the
> allocator by context.  virgl contexts would allocate via virglrenderer,
> vulkan contexts would allocate via virvkrenderer, generic gpu contexts
> would allocate via gbm/gralloc.  If you want to use those resources for
> gl/vk you would have to import them into another context first.

For a purely GL context, we'll need a zero-copy ioctl for
GL_ARB_buffer_storage and possibly GL_EXT_memory_object_fd.  For those
cases, it makes sense to stick to a resource-create flow rather than
an execbuffer-based one.
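
To illustrate: with GL_ARB_buffer_storage the app keeps a buffer
persistently mapped, so for zero-copy the guest mapping has to alias
the actual host allocation.  Standard GL 4.4, nothing virtio-specific
(size is whatever the app asked for):

  /* persistently mapped buffer (GL_ARB_buffer_storage); the pointer
   * stays valid while the buffer is in use, so the pages behind it
   * must be the real allocation for zero-copy to work */
  GLuint buf;
  glGenBuffers(1, &buf);
  glBindBuffer(GL_ARRAY_BUFFER, buf);
  glBufferStorage(GL_ARRAY_BUFFER, size, NULL,
                  GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                  GL_MAP_COHERENT_BIT);
  void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                               GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                               GL_MAP_COHERENT_BIT);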

You can allocate VK resources in a GL context -- in fact, that's the
way to do multi-process GL and it's important for VK inter-op.  You
can allocate GBM resources in GL contexts (we do this already).
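
On the host side, the VK-in-GL import boils down to
GL_EXT_memory_object_fd.  A minimal sketch, assuming the fd was
exported from VK via VK_KHR_external_memory_fd as an opaque fd:

  GLuint mem, tex;
  glCreateMemoryObjectsEXT(1, &mem);
  /* note: the import takes ownership of fd */
  glImportMemoryFdEXT(mem, alloc_size, GL_HANDLE_TYPE_OPAQUE_FD_EXT, fd);
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_2D, tex);
  glTexStorageMem2DEXT(GL_TEXTURE_2D, 1, GL_RGBA8, width, height, mem, 0);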

We sometimes emulate allocations.  It's entirely possible to use a
dma-buf heap as the allocator and import the buffer into some 3D
userspace driver.
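
A minimal sketch of that, assuming the new dma-buf heaps uapi
(linux/dma-heap.h) and the system heap:

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/dma-heap.h>

  /* allocate size bytes from the system heap; the resulting dma-buf
   * fd can then be imported into a 3D driver the usual way */
  int heap = open("/dev/dma_heap/system", O_RDWR | O_CLOEXEC);
  struct dma_heap_allocation_data data = {
      .len = size,
      .fd_flags = O_RDWR | O_CLOEXEC,
  };
  ioctl(heap, DMA_HEAP_IOCTL_ALLOC, &data);
  int dmabuf_fd = data.fd;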

Userspace should decide which allocator to use (or whether to use
EXECBUFFER), regardless of context type.  Userspace sets up the
context, and therefore knows which virtualization strategy is accurate
for the situation.

>
> Would that work for your use case?
>
> > It sounds like you are fine with the args blob interface living in
> > virglrenderer
>
> execbuffer interface (but, yes, effectively not much of a difference to
> an args blob).

Perhaps a dumb question: why is the "execbuffer" protocol / "args"
protocol distinction important?  Using SUBMIT_CMD as a transport could
work, but I envisage the resource creation/sharing protocol as
distinct from the current execbuffer protocol (i.e., with a different
header):

https://github.com/freedesktop/virglrenderer/blob/master/src/virgl_protocol.h
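
For reference, virgl_protocol.h packs command, object type and length
into a single dword:

  #define VIRGL_CMD0(cmd, obj, len) ((cmd) | ((obj) << 8) | ((len) << 16))

whereas I'd expect the allocation protocol to start with something
versioned.  Purely a hypothetical shape, names made up:

  struct alloc_args_hdr {
      uint32_t version;    /* allocation protocol revision */
      uint32_t allocator;  /* which allocator: GL / VK / GBM / ... */
      uint32_t args_size;  /* bytes of allocator-specific args following */
  };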

Note: I think the VK renderer implementation doesn't modify
virgl_protocol.h either; it just uses EXECBUFFER as the transport.

>
> Yes, it's ok to define that in virglrenderer.
>
> > I'm not sure what we should do with dumb backends.  Would one ioctl,
> > two hypercalls (VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB for dumb,
> > VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 for 3D) satisfy the use case?
>
> I want to sort out the other resources first.
>
> Most likely it'll end up being a flag somewhere, or a separate virtio
> command just for dumb buffers which will be used instead of an
> execbuffer allocation.
>
> > > Workflow (A):
> > >
> > >   (1) request resource id (no size).
> > >   (2) alloc via execbuffer, host returns actual stride and size.
> > >   (3) init resource + setup gem object (using returned size).
> >
> > True, but I want to support SHARED_GUEST for things like the COQOS use case.
>
> Hmm, right, for shared guest we'll need a separate metadata query.
>
> > > Workflow (B)
> > >
> > >   (1) metadata query host to figure what stride and size will be.
> > >   (2) initialize resource (with size), setup gem object.
> > >   (3) alloc via execbuffer.
> > >
> > > IIRC Gurchetan Singh's plan for the metadata query is/was to have a TRY_ALLOC
> > > flag, i.e. allow a dry-run allocation just to figure size + stride (and
> > > possibly more for planar formats).
> > >
> > > So I think we should be able to skip the dry-run and go straight for the
> > > real allocation (workflow A).
> >
> > For resources with a guest backing store and a host backing store, the
> > workflow is actually:
> >
> >   (1) metadata query host to figure what stride and size will be.
> >   (2) alloc via execbuffer -> vmexit
>
> No need to vmexit here.
>
> >   (3) alloc guest storage to attach backing -> vmexit
> >
> > Workflow (C) takes less steps, and works for host-only resources,
> > guest-only resources, and guest+host resources.
> >
> > (1) metadata query host to figure what stride and size will be.
> > (2) Allocate via resource create v2 (this model doesn't have a separate
> > ATTACH_BACKING step for default resources -- for example with
> > SHARED_GUEST, it's good to have the sg-list and the metadata at the
> > same time)
>
> Note that we don't need a 1:1 relationship between ioctls and virtio
> commands.  So we could add an execbuffer to INIT_BLOB, thereby changing
> the workflow to:
>
>   (1) metadata query host to figure what stride and size will be (or
>       maybe get that info from a cache).
>   (2) get resource id

As Chia pointed out, (2) is unnecessary when the object id is in the
variadic arguments.
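
Hypothetically, reusing the args header sketched above, the object id
could simply ride along in the args blob:

  struct alloc_args {
      struct alloc_args_hdr hdr;
      uint64_t object_id;  /* chosen by the guest, no extra round trip */
      /* allocator-specific arguments follow */
  };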

>   (3) init resource (with execbuffer), which will:
>       (a) submit the execbuffer.
>       (b) init gem object for the resource.
>       (c1) attach-backing for guest resources, or
>       (c2) resource-map for host resources.

Batching hypercalls is easier than batching ioctls.  How easy is it to
batch 2-3 hypercalls from one kick into a single command submitted to
virglrenderer?  Otherwise, it'll be like:

virgl_renderer_submit_cmd (allocate and store away something)
virgl_renderer_resource_init (get that something back just to set up
the res_id->resource map)

For SHARED_GUEST, it's a bit trickier:

virgl_renderer_submit_cmd(..) --> store away/get metadata
virgl_renderer_resource_init --> associate res_id with the metadata
virgl_renderer_import_fd() --> actually import the resource into EGL,
using res_id and the stored metadata, after QEMU creates the udmabuf
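
That last import is a plain EGL_EXT_image_dma_buf_import.  A sketch,
with "meta" standing in for the hypothetically stored metadata
(single-plane case):

  EGLint attrs[] = {
      EGL_WIDTH,                     meta->width,
      EGL_HEIGHT,                    meta->height,
      EGL_LINUX_DRM_FOURCC_EXT,      meta->fourcc,
      EGL_DMA_BUF_PLANE0_FD_EXT,     fd,
      EGL_DMA_BUF_PLANE0_OFFSET_EXT, meta->offset,
      EGL_DMA_BUF_PLANE0_PITCH_EXT,  meta->stride,
      EGL_NONE,
  };
  EGLImageKHR image = eglCreateImageKHR(egl_display, EGL_NO_CONTEXT,
                                        EGL_LINUX_DMA_BUF_EXT,
                                        (EGLClientBuffer)NULL, attrs);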

I think the virglrenderer interface will look something like:

virgl_renderer_resource_create_blob(uint32_t res_id, uint32_t flags,
                                    void *args, size_t args_size,
                                    int fd, uint32_t num_iovecs,
                                    struct iovec *iovecs);

If there's no problem with batching multiple hypercalls into a single
virglrenderer command, that could work.  Otherwise, the hypercall
interface in virgl/virtio-gpu-next does the trick too:

https://gitlab.freedesktop.org/virgl/drm-misc-next/blob/virtio-gpu-next/include/uapi/linux/virtio_gpu.h#L330

>
> The separate execbuffer ioctl goes away.

Very nice, I like it!

> Batching the virtio commands
> (notify only once) will be trivial too.
>
> cheers,
>   Gerd
>

