[virglrenderer-devel] multiprocess model and GL
Chia-I Wu
olvaffe at gmail.com
Wed Jan 29 19:21:27 UTC 2020
On Wed, Jan 29, 2020 at 12:46 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
>
> > > > > Not fully sure about that. I had the impression that Gurchetan Singh
> > > > > wants gbm allocations in gl/vk contexts too.
> > > > We have execbuffer to allocate and export gl or vk objects in the
> > > > host. The guest gbm can use them.
> > > >
> > > > I think he is looking for a simpler path that the guest gbm can use,
> > > > and that the host can support using any (gl, vk, or gbm)
> > > > driver.  Having a third kind of context with a super simple wire
> > > > format for execbuffer satisfies those needs.
> > >
> > > Gurchetan Singh, care to clarify?
> >
> > Yes, I want a generalized allocation ioctl which is versioned. I'm
> > fine with ALLOC_BLOB allocating host VK or GL resources, and I think
> > it's weird if it supported TYPED/DUMB but not that.
>
> Question is do you need the generalized allocation ioctl *for vk/gl
> contexts*? Or can that be a separate context?
>
> We'll have to introduce different kinds of contexts when adding vulkan
> support. We could add another generic gpu context, then pick the
> allocator by context. virgl contexts would allocate via virglrenderer,
> vulkan contexts would allocate via virvkrenderer, generic gpu contexts
> would allocate via gbm/gralloc. If you want to use those resources for
> gl/vk you would have to import them into another context first.
>
> Would that work for your use case?
I like the idea in general.  How about simple host allocations such as
dumb buffers (host storage that qemu can use as the scanout), udmabufs,
or allocations from a dedicated heap?  Do we make special cases for
them, or do we also require execbuffer to allocate them?
I think it is reasonable to provide a middle ground: require
execbuffer+init_blob for complex cases, but allow embedding the host
allocation args into init_blob for simpler cases.
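To make that middle ground concrete, here is a minimal sketch of what
the init_blob args could carry (every name and field below is
hypothetical; nothing like this exists in the UAPI yet): the simple
cases embed the host allocation args directly, while the complex cases
only reference an object that a prior execbuffer already allocated.

#include <linux/types.h>

/* Hypothetical layout, for illustration only. */
#define HYP_BLOB_SOURCE_EXECBUFFER 0x1  /* complex case: object came from execbuffer */
#define HYP_BLOB_SOURCE_INLINE     0x2  /* simple case: dumb/udmabuf/heap args embedded */

struct hyp_init_blob_args {
        __u32 source;               /* one of HYP_BLOB_SOURCE_* */
        __u32 alloc_type;           /* inline case: dumb, udmabuf, dedicated heap, ... */
        __u64 size;
        __u64 execbuffer_object_id; /* execbuffer case: previously allocated host object */
        __u32 width;                /* inline case: enough for a simple scanout alloc */
        __u32 height;
        __u32 format;
        __u32 pad;
};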
>
> > It sounds like you are fine with the args blob interface living in
> > virglrenderer
>
> execbuffer interface (but, yes, effectively not much different from
> an args blob).
>
> Yes, it's ok to define that in virglrenderer.
>
> > I'm not sure what we should do with dumb backends. Would one ioctl,
> > two hypercalls (VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB for dumb,
> > VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 for 3D) satisfy the use case?
>
> I want to sort out the other resources first.
>
> Most likely it'll end up being a flag somewhere, or a separate virtio
> command just for dumb buffers which will be used instead of an
> execbuffer allocation.
>
> > > Workflow (A):
> > >
> > > (1) request resource id (no size).
> > > (2) alloc via execbuffer, host returns actual stride and size.
> > > (3) init resource + setup gem object (using returned size).
> >
> > True, but I want to support SHARED_GUEST for things like the COQOS use case.
>
> Hmm, right, for shared guest we'll need a separate metadata query.
>
> > > Workflow (B)
> > >
> > > (1) metadata query host to figure what stride and size will be.
> > > (2) initialize resource (with size), setup gem object.
> > > (3) alloc via execbuffer.
> > >
> > > IIRC Gurchetan Singh's plan for the metadata query is/was to have a TRY_ALLOC
> > > flag, i.e. allow a dry-run allocation just to figure size + stride (and
> > > possibly more for planar formats).
> > >
> > > So I think we should be able to skip the dry-run and go straight for the
> > > real allocation (workflow A).
> >
> > For resources with a guest backing store and a host backing store, the
> > workflow is actually:
> >
> > (1) metadata query host to figure what stride and size will be.
> > (2) alloc via execbuffer -> vmexit
>
> No need to vmexit here.
>
> > (3) alloc guest storage to attach backing -> vmexit
> >
> > Workflow (C) takes fewer steps, and works for host-only resources,
> > guest-only resources, and guest+host resources.
> >
> > (1) metadata query host to figure what stride and size will be.
> > (2) Allocate via resource create v2 (this model doesn't have a separate
> > ATTACH_BACKING step for default resources -- for example with
> > SHARED_GUEST, it's good to have the sg-list and the metadata at the
> > same time)
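As a rough sketch of what such a combined create could carry on the
wire (all names below are made up for illustration and are not the
actual CREATE_V2 proposal), the single command would hold the
allocation metadata and be followed by the sg-list entries:

#include <linux/types.h>

/* Hypothetical wire layout for a combined create; illustration only. */
struct hyp_resource_create_v2 {
        __u32 resource_id;
        __u32 flags;        /* e.g. a hypothetical SHARED_GUEST bit */
        __u32 format;
        __u32 width;
        __u32 height;
        __u32 stride;       /* from the earlier metadata query */
        __u64 size;
        __u32 nr_entries;   /* number of sg-list entries following the command */
        __u32 pad;
        /* followed by nr_entries of { __u64 addr; __u32 length; __u32 pad; } */
};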
>
> Note that we don't need a 1:1 relationship between ioctls and virtio
> commands. So we could add an execbuffer to INIT_BLOB, thereby changing
> the workflow to:
>
> (1) metadata query host to figure what stride and size will be (or
> maybe get that info from a cache).
> (2) get resource id
The kernel needs to manage this resid space using drm_mm or something
more suitable.  Gurchetan pointed out to me that it might be simpler
for userspace to manage a "local resource cookie" space instead.
The flow would be
 (a) userspace generates a local object id and a local resource cookie
 (b) execbuffer to allocate host storage (a local object identified by
     objid) and to get a host fd (a local resource identified by cookie)
 (c) init resource, which will transfer and take ownership of the fd
     identified by (ctx_id, cookie)
This way the kernel can generate the globally unique resid like it
always does in init resource.  Userspace is not forced to get a resid
range and manage the range itself.
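A minimal sketch of that split (objid, cookie, and the struct names
below are all hypothetical): userspace picks the objid and the cookie,
the execbuffer payload references them, and only init resource hands
back the kernel-chosen resid.

#include <linux/types.h>

/* Hypothetical structures illustrating the objid/cookie split only. */

/* (b) carried inside the execbuffer payload: allocate a host object
 *     (objid) and export it as a host fd identified by cookie. */
struct hyp_execbuffer_alloc {
        __u64 object_id;    /* local to the context, chosen by userspace */
        __u64 cookie;       /* local handle for the exported host fd */
        __u64 size;
};

/* (c) init resource: the kernel takes ownership of the fd identified by
 *     (ctx_id, cookie) and returns a globally unique resid that it
 *     allocated itself, as it always does. */
struct hyp_init_resource_args {
        __u64 cookie;       /* in: which exported host fd to adopt */
        __u32 resource_id;  /* out: kernel-generated resid */
        __u32 pad;
};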
> (3) init resource (with execbuffer), which will:
> (a) submit the execbuffer.
> (b) init gem object for the resource.
> (c1) attach-backing for guest resources, or
> (c2) resource-map for host resources.
>
> The separate execbuffer ioctl goes away. Batching the virtio commands
> (notify only once) will be trivial too.
Is supporting a NO_KICK flag for execbuffer complex, or does it have
undesirable implications?
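For what it is worth, the flag itself looks small on the ioctl side.
A sketch with made-up names (nothing below is existing UAPI) would just
be a bit that tells the kernel to queue the commands without notifying
the device, leaving the kick to a later submission:

#include <linux/types.h>

/* Hypothetical flag and args; illustration only. */
#define HYP_EXECBUFFER_FLAG_NO_KICK 0x1 /* queue commands, skip the virtqueue notify */

struct hyp_execbuffer_args {
        __u32 flags;        /* HYP_EXECBUFFER_FLAG_NO_KICK to batch with a later submit */
        __u32 size;         /* command buffer size in bytes */
        __u64 command;      /* userspace pointer to the command buffer */
};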
>
> cheers,
> Gerd
>