[virglrenderer-devel] multiprocess model and GL

Chia-I Wu olvaffe at gmail.com
Fri Feb 14 18:26:47 UTC 2020


On Thu, Feb 13, 2020 at 7:47 PM Gurchetan Singh
<gurchetansingh at chromium.org> wrote:
>
> On Thu, Feb 13, 2020 at 11:10 AM Chia-I Wu <olvaffe at gmail.com> wrote:
> >
> > On Wed, Feb 12, 2020 at 6:39 PM Gurchetan Singh
> > <gurchetansingh at chromium.org> wrote:
> > >
> > > On Wed, Feb 12, 2020 at 3:37 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
> > > >
> > > > > > > - allocate and set up the [res_id] --> [struct resource] mapping
> > > > > > >   only via (1)
> > > > > > > - allocate and set up both the [res_id] --> [struct resource] and
> > > > > > >   [obj_id] --> [struct resource] mappings via (1) and (2)
> > > > > > >
> > > > > > > All user space has to do is specify the right command buffer (which
> > > > > > > should be less than 64 bytes).
> > > > > >
> > > > > > How does userspace figure out the res_id?
> > > > >
> > > > > For allocation-based flows, it'll be very similar to what we have today:
> > > > >
> > > > > virgl_renderer_resource_create_blob(resid, size, flags, void
> > > > > *resource_create_3d, int ndw, int fd, ..)
> > > > >
> > > > > The kernel will tell virglrenderer the resource id.
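
For concreteness, that proposal could be declared roughly as below;
the exact types and the trailing parameters are guesses, not settled:

  /* res_id is chosen by the guest kernel; resource_create_3d points
   * at an execbuffer-style command buffer describing the allocation */
  int virgl_renderer_resource_create_blob(uint32_t res_id,
                                          uint64_t size,
                                          uint32_t flags,
                                          const void *resource_create_3d,
                                          int ndw,
                                          int fd);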
> > > >
> > > > Which implies the resource_create_3d isn't an execbuffer but something
> > > > else.  It is not self-contained but depends on the given context,
> > > > specifically the resid generated by the kernel.  You can't do the same
> > > > via SUBMIT_3D.
> > >
> > > Typically, virgl_renderer_submit_cmd will take in
> > > VIRGL_CMD(ALLOCATE_WITH_OBJ_ID, size) since it needs to establish the
> > > [obj_id] --> [struct resource] mapping.
> > >
> > > virgl_renderer_resource_create_blob(resid, *resource_create_3d, ..)
> > > can take in both VIRGL_CMD(ALLOCATE_WITH_OBJ_ID, size) and
> > > VIRGL_CMD(ALLOCATE, size) [OpenGL/generic allocation contexts].
> > > Internally, they'll go to the same allocation functions.  User space
> > > will typically choose one or the other.
> > >
> > > Both VIRGL_CMD(ALLOCATE_WITH_OBJ_ID, size) and VIRGL_CMD(ALLOCATE,
> > > size) can be defined in the same header and same protocol namespace,
> > > if that's desired.  Does that address your concern about having two
> > > different protocols?
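
If the two commands do share a namespace, the encoding could follow
the usual virgl convention of a single header dword; the opcode
values here are made up purely for illustration:

  /* hypothetical opcodes in one shared namespace, encoded like the
   * existing commands in virgl_protocol.h: dword length in the high
   * 16 bits, opcode in the low bits */
  #define VIRGL_CCMD_ALLOCATE              0x50
  #define VIRGL_CCMD_ALLOCATE_WITH_OBJ_ID  0x51
  #define VIRGL_CMD(cmd, len)  ((cmd) | ((len) << 16))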
> > >
> > > >
> > > > > This flow is actually better for udmabuf compared to
> > > > > virgl_renderer_submit_cmd(..) followed by
> > > > > virgl_renderer_resource_create_blob(obj_id, ..), since accurate
> > > > > resource creation requires importing the fd into EGL/VK (which
> > > > > needs import metadata), not allocating.
> > > >
> > > > I don't think virgl_renderer_resource_create_blob should be involved
> > > > in import at all.
> > >
> > > We'll need to generate the [res_id] --> [struct resource] --> fd
> > > mapping at the very least.
> > >
> > > virgl_renderer_resource_create_blob(resid, obj_id, fd, ..)
> >
> > udmabuf does not use the obj_id path.  There is no host allocation,
> > and thus neither virgl_renderer_submit_cmd nor an obj_id.
>
> I was under the impression virglrenderer uses object IDs in subsequent
> 3D operations (similar to how virglrenderer uses resource IDs)?  So
> every host VkDeviceMemory (once imported for hypervisor-managed
> dmabufs) would have an object ID associated with it?
That is right.  But getting a udmabuf into Vulkan goes through the
import path, not the allocation path.

The allocation path creates a driver object using execbuffer.  The
exact commands are context-dependent; for Vulkan, it is a plain
vkAllocateMemory.

The import path likewise creates a driver object using execbuffer.
The exact commands are again context-dependent; for Vulkan, it is
vkAllocateMemory with VkImportMemoryFdInfoKHR chained in.
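
In Vulkan terms, the two paths differ only in whether
VkImportMemoryFdInfoKHR is chained in.  A minimal sketch, assuming
VK_KHR_external_memory_fd and VK_EXT_external_memory_dma_buf, with
dev, size, mem_type_index, and resource_fd coming from elsewhere:

  /* allocation path: plain vkAllocateMemory */
  VkMemoryAllocateInfo alloc_info = {
      .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
      .allocationSize = size,
      .memoryTypeIndex = mem_type_index,
  };
  VkDeviceMemory mem;
  vkAllocateMemory(dev, &alloc_info, NULL, &mem);

  /* import path: the same call, with the import struct chained in;
   * the fd refers to a resource shared by another context */
  VkImportMemoryFdInfoKHR import_info = {
      .sType = VK_STRUCTURE_TYPE_IMPORT_MEMORY_FD_INFO_KHR,
      .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT,
      .fd = resource_fd,
  };
  alloc_info.pNext = &import_info;
  vkAllocateMemory(dev, &alloc_info, NULL, &mem);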

The two paths alone, however, are not enough for one context to share
its driver object with another context.  A resource must be created
from the driver object first.  The "fd" field of
VkImportMemoryFdInfoKHR in the import path refers to this resource.

udmabuf is similar, except that you skip the allocation path because
the storage already exists.  The import path needs a resource, and it
does not care where the resource comes from.  All you need to support
udmabuf is thus a way to create a resource from the udmabuf.
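
All that calls for is something along these lines on the host side;
the entry point is hypothetical, just to illustrate the shape:

  /* hypothetical: wrap an existing udmabuf in a resource so that the
   * normal import path can consume it unchanged */
  int virgl_renderer_resource_create_from_fd(uint32_t res_id,
                                             int udmabuf_fd,
                                             uint64_t size);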

> A consistent definition of the object ID may be possible.  I haven't
> had a chance to go through the virglrenderer code and don't know where
> it is :-/ .  Can you share a link to it?  Looking at how resources are
> managed in the protocol could be helpful.
These branches

https://gitlab.freedesktop.org/olv/mesa/commits/venus-wsi
https://gitlab.freedesktop.org/olv/virglrenderer/commits/venus

can run unmodified vkcube on Intel.  I am not sure how useful they
are, though: they run over vtest rather than virtio-gpu, and the focus
was the execbuffer wire format, not resources.

> >
> > It may be added to the virglrenderer world via
> > virgl_renderer_resource_create_blob or another variation, but that is
> > not necessarily where import happens (depending on what the overloaded
> > "import" means).  For vk, import happens with execbuffer.  For gl,
> > import can be implicit.
> >
> > >
> > > > IMHO the workflow should be more like this:
> > > >
> > > >   (1) VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE
> > >
> > > Since resources will now be per-context, doesn't
> > > VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB imply
> > > VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE for the creation context (if it
> > > succeeds)?  Does it make sense to call
> > > VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE only when importing into another
> > > context?
> > >
> > > >   (2) VIRTIO_GPU_CMD_SUBMIT_3D, sending import metadata and
> > > >       (if needed) obj_id.
> > >
> > > Yeah, that flow works well for importing fds across multi-process
> > > contexts.  Creation may be a little different, since we typically
> > > have all the metadata we need, and virglrenderer already has
> > > facilities for import on creation (and it's single-process)
> > > [VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE comes after the import]:
> > >
> > > https://github.com/freedesktop/virglrenderer/blob/master/src/virglrenderer.c#L58
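
Putting the two messages together, the sequence under discussion is
roughly this, from the guest's point of view (command names as used in
this thread):

  /*
   * VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB   creates res_id (and, per the
   *                                       question above, may imply
   *                                       attach for the creating ctx)
   * VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE    attaches res_id to another ctx
   * VIRTIO_GPU_CMD_SUBMIT_3D              sends import metadata and,
   *                                       if needed, obj_id
   */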
> >
> > Resource creation from an external allocation (EGLImage in this case)
> > is cross-device memory sharing, with virglrenderer on the receiving
> > end.  I know crosvm has (had?) a use case for it.  But it should
> > consider moving to use virgl_renderer_export_query (i.e., cross-device
> > memory sharing with virglrenderer on the allocating end).
> >
> >
> > >
> > > >
> > > > cheers,
> > > >   Gerd
> > > >

