[virglrenderer-devel] multiprocess model and GL

Chia-I Wu olvaffe at gmail.com
Tue Feb 4 00:17:23 UTC 2020


On Mon, Feb 3, 2020 at 3:22 PM Chia-I Wu <olvaffe at gmail.com> wrote:
>
> On Mon, Feb 3, 2020 at 12:25 PM Dave Airlie <airlied at gmail.com> wrote:
> >
> > On Sat, 1 Feb 2020 at 13:04, Chia-I Wu <olvaffe at gmail.com> wrote:
> > >
> > > On Fri, Jan 31, 2020 at 4:11 PM Gurchetan Singh
> > > <gurchetansingh at chromium.org> wrote:
> > > >
> > > > On Fri, Jan 31, 2020 at 1:16 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
> > > > >
> > > > >   Hi,
> > > > >
> > > > > > Userspace should decide which allocator to use or use EXECBUFFER,
> > > > > > regardless of context.  It sets the context and therefore should know
> > > > > > what's accurate virtualization for the situation.
> > > > >
> > > > > Ok, so a separate generic-cpu context for the gbm allocator wouldn't
> > > > > work.
> > > > >
> > > > > > > execbuffer interface (but, yes, effectively not much of a difference to
> > > > > > > an args blob).
> > > > > >
> > > > > > Perhaps a dumb question: why is the "execbuffer" protocol/"args"
> > > > > > protocol distinction important?  Using SUBMIT_CMD as a transport could
> > > > > > work, but I envisage the resource creation/sharing protocol as
> > > > > > different from the current execbuffer protocol (i.e, different
> > > > > > header):
> > > > >
> > > > > Hmm?  Why?  What makes the resource creation fundamentally different
> > > > > from other commands for the virglrenderer?
> > > >
> > > > Because they're defined in different headers and used by different ioctls:
> > > >
> > > > execbuffer: https://github.com/freedesktop/virglrenderer/blob/master/src/virgl_protocol.h
> > > > resource create v1:
> > > > https://github.com/freedesktop/virglrenderer/blob/master/src/virgl_hw.h
> > > >
> > > > (note the formats and binding types -- virglrenderer wouldn't work without them)
> > > >
> > > > So I wouldn't call the resource create v2 protocol "execbuffer".
> > > I would say virgl_hw.h is part of virgl_protocol.h.  Together,
> > > they define the virgl protocol.  If you look at the various _FORMAT
> > > fields in virgl_protocol.h, they refer to the formats defined in
> > > virgl_hw.h.
> >
> > FYI:
> >
> > The way it is meant to be: virgl_protocol.h defines the execbuffer
> > contents that userspace drivers fill out for rendering, while
> > virgl_hw.h is the low-level hw interface for use by the kernel.
> >
> > The formats were just a bit messy and should not be seen as an
> > indicator of how to move forward.
> I feel the existing interface works best with a centralized allocator.
> The renderers (those that execute execbuffer contents) do not allocate.
> The allocator and the renderers have different protocols.
>
> It works well when there are only GL renderers.  It is possible to
> change the centralized allocator to use vulkan+export.  GL renderers
> will continue to work with the help of GL_EXT_memory_object_fd.  VK
> renderers can be added and they can import as well.
>
> The main thing that model misses is that VK renderers want to make
> local allocations when the memory is not to be mapped or shared.  If
> that model did allow local allocations, would VK renderers use objid
> to identify local allocations and resid to identify resources?  That
> model also encourages implicit exports/imports.  It does not want the
> guest to know that the centralized allocator might export and the
> renderers might import.  But when the renderers do not have enough
> info to import, or when the renderers want to associate an objid with
> a resid, they rely on the guest to explicitly import again.
>
> I think Gurchetan started from that model and fixed all the issues.
> As a result, the allocation parameters are in a new opaque args[]
> rather than in the existing opaque cmd[].  I like cmd[] more because
>
>  - I use execbuffer in VK renderers to make local allocations and
> import/export resources already, and
>  - I did not start from that model; instead I view execbuffer as the
> way to pass opaque parameters.
I should add that I am fine with a non-centralized allocator (i.e.,
one where renderers are allowed to allocate as well).  It won't be
very different from other renderers.  In fact, it can be treated as a
new type of renderer, and opaque allocation parameters can be passed
to it via execbuffer.

It is also possible to treat the non-centralized allocator specially,
as a global entity.  This allows the userspace to open the DRI device
once, but talk to both its corresponding renderer and the allocator at
the same time.  I am not sure if this is useful to the guest.  How to
pass the opaque allocation parameters also remains to be resolved.


>
>
> >
> > Dave.


More information about the virglrenderer-devel mailing list