[virglrenderer-devel] multiprocess model and GL

Chia-I Wu olvaffe at gmail.com
Fri Jan 24 19:30:47 UTC 2020


I agree with everything Gerd said.  But from my prior discussions with
Gurchetan on gitlab, I think a major concern was the number of ioctls,
vbufs, and vq kicks per resource allocation, which strangely did not
get brought up.

I believe the RESOURCE_CREATE_V2 solution to the issue is basically

  struct drm_virtgpu_resource_create_v2 {
    __u32 shmem_flags; // whether to allocate a shmem
    // args that would otherwise go into ALLOC_x
    __u8 host_main_process_alloc_args[32];
    // args that would otherwise go into EXECBUFFER
    __u8 host_per_context_process_alloc_args[32];
    ...;
  };

It replaces both INIT_BLOB+ALLOC_SOMETHING and INIT_BLOB+EXECBUFFER with
a single RESOURCE_CREATE_V2.  But that is more a lazy answer than the
right answer, IMO.  Style-wise, I don't like it.  And for vk, I don't
like how it implies import/export.  I know one can claim that, when
userspace specifies zero args, it can still make follow-up ALLOC_x and
EXECBUFFER calls freely.  But that is still a lazy answer.
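
To make it concrete, the gl path in userspace would look roughly like
the sketch below.  This is only a sketch: the wrapper macro, the shmem
flag value, and the helper name are made up, and nothing here compiles
against today's headers since the struct is only a proposal.

  #include <string.h>       /* memcpy */
  #include <linux/types.h>  /* __u32, __u8 */
  #include <xf86drm.h>      /* drmIoctl */

  /* hypothetical: one ioctl replacing INIT_BLOB+ALLOC_x for gl */
  static int create_gl_resource_v2(int fd, const __u8 alloc_args[32])
  {
    struct drm_virtgpu_resource_create_v2 args = { 0 };

    args.shmem_flags = 1;  /* hypothetical "allocate a shmem" flag */
    memcpy(args.host_main_process_alloc_args, alloc_args,
           sizeof(args.host_main_process_alloc_args));
    /* host_per_context_process_alloc_args stays zeroed: no EXECBUFFER */
    return drmIoctl(fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE_V2, &args);
  }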

A variation that is acceptable to me is

  struct drm_virtgpu_resource_create_v2 {
    __u32 shmem_flags; // whether to allocate a shmem
    // args that would otherwise go into ALLOC_x
    __u8 host_main_process_alloc_args[32];
    // EXECBUFFER must be made separately
    ...;
  };

When the scenario does not require a separate execbuffer, there will
be one ioctl.  When it does, there will be two ioctls.  My main
complaint would be the style.  And it still does not deal with the
two-ioctl issue for vk.
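
Concretely, for vk it would still look something like the sketch below.
Only DRM_IOCTL_VIRTGPU_EXECBUFFER and struct drm_virtgpu_execbuffer
exist today; the create_v2 ioctl and the helper/argument names are the
proposal plus my own placeholders.

  #include <stdint.h>        /* uintptr_t */
  #include <string.h>        /* memcpy */
  #include <xf86drm.h>       /* drmIoctl */
  #include "virtgpu_drm.h"   /* drm_virtgpu_execbuffer */

  static int create_vk_memory(int fd, const __u8 mem_args[32],
                              const void *vk_cmds, __u32 vk_cmds_size)
  {
    struct drm_virtgpu_resource_create_v2 create = { 0 };
    struct drm_virtgpu_execbuffer exec = { 0 };
    int ret;

    /* ioctl #1: create the resource with the folded-in ALLOC-style args */
    memcpy(create.host_main_process_alloc_args, mem_args,
           sizeof(create.host_main_process_alloc_args));
    ret = drmIoctl(fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE_V2, &create);
    if (ret)
      return ret;

    /* ioctl #2: the per-context-process allocation still goes through
     * EXECBUFFER, so vk keeps paying for two ioctls */
    exec.command = (uintptr_t)vk_cmds;
    exec.size = vk_cmds_size;
    return drmIoctl(fd, DRM_IOCTL_VIRTGPU_EXECBUFFER, &exec);
  }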

A better answer to me (because of the style) is to have

  struct drm_virtgpu_resource_init_blob {
    __u32 shmem_flags; // whether to allocate a shmem
    // whether to make a simple host allocation that only needs size
    __u32 host_alloc_type;
    // no ALLOC_x
    // use EXECBUFFER for any host allocation that needs more than size
    ...;
  };
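
For the simple host allocation, that keeps things to one ioctl; roughly
as sketched below.  Again only a sketch: the host_alloc_type value, the
wrapper macro, and the size field standing in for the "..." are all
placeholders of mine.

  #include <linux/types.h>  /* __u32, __u64 */
  #include <xf86drm.h>      /* drmIoctl */

  static int init_size_only_blob(int fd, __u64 size)
  {
    struct drm_virtgpu_resource_init_blob blob = { 0 };

    blob.shmem_flags = 1;      /* hypothetical: allocate a shmem */
    blob.host_alloc_type = 1;  /* hypothetical: simple size-only host alloc */
    blob.size = size;          /* placeholder for whatever "..." carries */
    /* one ioctl; vk/gbm would follow up with EXECBUFFER (two in total) */
    return drmIoctl(fd, DRM_IOCTL_VIRTGPU_RESOURCE_INIT_BLOB, &blob);
  }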

gbm resources would also have the two-ioctl issue like vk does.  But
that means we can all focus on the real issue now.

To me, the overhead mainly comes from ioctl calls (i.e., context
switches) and vq kicks (vm exits, qemu/main process wakeups).  And I am
not really bothered by the # of ioctl calls when it is merely 1 or 2.
I am more concerned about the # of vq kicks.
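
To make that distinction concrete: if the guest driver queues both
commands before notifying the host, two ioctls can still cost a single
kick.  Below is a minimal sketch using the generic virtio API; it is my
assumption about what the driver could do, not what it does today, and
error unwinding of an already-queued command is omitted.

  #include <linux/gfp.h>
  #include <linux/scatterlist.h>
  #include <linux/virtio.h>

  /* queue two commands (e.g. init + alloc) and notify the host once */
  static int queue_two_cmds_one_kick(struct virtqueue *vq,
                                     struct scatterlist *init_sgs[],
                                     struct scatterlist *alloc_sgs[],
                                     void *init_token, void *alloc_token)
  {
    int ret;

    /* add both command buffers without kicking in between */
    ret = virtqueue_add_sgs(vq, init_sgs, 1, 0, init_token, GFP_KERNEL);
    if (ret)
      return ret;
    ret = virtqueue_add_sgs(vq, alloc_sgs, 1, 0, alloc_token, GFP_KERNEL);
    if (ret)
      return ret;

    /* one kick -> one vm exit / one host wakeup for both commands */
    virtqueue_kick(vq);
    return 0;
  }

Whether the second command comes from the same ioctl or from a second
one does not change the kick count, as long as the notification is
deferred like this.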



On Fri, Jan 24, 2020 at 12:34 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
>
>   Hi,
>
> > * Allocating via execbuffer doesn't really fit with the centralized
> > allocator (gralloc for Android, GBM for Linux/ChromeOS) model.  Two
> > steps + keeping track of object IDs is simply not needed for this
> > case.
>
> If we don't allocate via execbuffer, how will we handle driver-only
> memory (i.e. memory which neither qemu nor the kernel needs to know about)?
>
> > * Non-virtualized GEM drivers don't really have separate ioctls for
> > the creation of GEM handle and allocation.  In particular, if I call
> > VIRTGPU_MAP after calling drm_virtgpu_resource_init_blob, I'll have to
> > get the backing pages without calling drm_virtgpu_resource_alloc_blob
> > (or throw an error).
>
> Well, the INIT_BLOB call will basically make the object and some basic
> properties known to the host.  The actual allocation happens via
> execbuffer or ALLOC_SOMETHING call, so that would be the place where the
> host would create a gem object.
>
> Yes, VIRTGPU_MAP requests will fail if you try them on a resource after
> INIT but before ALLOC (or execbuffer allocation).
>
> > * Transitioning the Virgl Mesa driver to object IDs would be nice in
> > certain cases, but I suspect it'll take time if someone's inclined
> > (most people seem to think Zink + VK is the way to go).
>
> There is no need to transition to object ids.  Kernel and qemu don't
> know about object ids; they handle resources only.  If the virgl driver
> continues to work the way it does (allocate one resource per object), it
> can simply continue to use resource ids.
>
> vulkan needs object ids so it can manage driver objects which are not
> known to kernel/qemu and thus don't have a resource id.
>
> > * ALLOC_GBM won't make much sense since GBM isn't available on Nvidia
> > drivers for example -- it's a capability only host/guest user space
> > should worry about.
>
> We can attach a different name to the thing.
>
> > I don't like the idea of more {width, height} ioctls with different
> > format namespaces.
>
> So, what do you want/need for gralloc and (mini)gbm allocations?
>
> And, no, "just pass though an args blob" isn't an valid answer.  Right
> now we have a plan for dumb objects (just size and nothing else), gl
> objects (execbuffer commands, virglrenderer interprets what virgl driver
> sends) and vk objects (execbuffer too, virvkrenderer interprets, most
> likely it will be basically vulkan structs).
>
> cheers,
>   Gerd
>

