[virglrenderer-devel] multiprocess model and GL

Chia-I Wu olvaffe at gmail.com
Wed Jan 29 18:15:21 UTC 2020


On Tue, Jan 28, 2020 at 11:40 PM Gerd Hoffmann <kraxel at redhat.com> wrote:
>
>   Hi,
>
> > But that flow can have the nice property where a gem bo always
> > references a fully initialized host resource struct.  Hmm... maybe
> > that flow wins?
>
> That certainly simplifies some things.
>
> > For blob allocations, (B) is a better fit.  You query the required size
> > for a texture first, and then make a blob allocation.
> >
> > (A) should be modified to
> >
> >   (0) request resource id (no size)
> >   (1) execbuffer to get the stride and size
> >   (2) execbuffer to allocate (using the returned size)
> >   (3) init resource + setup gem object (using the returned size)
>
> So (1) + (2) must be separate steps?
> Can (1) results be cached?
> Step (1) is the only one where we must wait for the host to respond.
Yes, (1) results can be cached.
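
To make the ordering argument concrete, here is a rough, self-contained
sketch of the (0)-(3) flow.  Every name in it is a made-up placeholder
(this is not the real virtio-gpu uapi nor a proposed protocol); it only
models why doing (2) before (3) means the gem bo never points at an
uninitialized host resource:

  /* toy model only; every identifier here is hypothetical */
  #include <stdint.h>
  #include <stdio.h>

  struct host_resource { uint32_t id; uint64_t size; int initialized; };
  struct gem_bo        { struct host_resource *res; };

  static uint32_t next_id = 1;

  /* (0) request a resource id; no size yet, can be batched/amortized */
  static uint32_t resource_id_get(void) { return next_id++; }

  /* (1) execbuffer asking the host for stride/size; the answer only
   *     depends on the layout parameters, so it can be cached */
  static uint64_t query_size(uint32_t w, uint32_t h, uint32_t cpp)
  {
      return (uint64_t)w * h * cpp;        /* stand-in for the host reply */
  }

  /* (2) execbuffer allocating host storage with the returned size */
  static void host_alloc(struct host_resource *r, uint32_t id, uint64_t sz)
  {
      r->id = id; r->size = sz; r->initialized = 1;
  }

  /* (3) init resource + set up the gem object with the same size; since
   *     (2) already ran, the bo always sees an initialized resource */
  static void gem_setup(struct gem_bo *bo, struct host_resource *r)
  {
      bo->res = r;
  }

  int main(void)
  {
      struct host_resource res = {0};
      struct gem_bo bo;
      uint32_t id = resource_id_get();             /* (0) */
      uint64_t sz = query_size(64, 64, 4);         /* (1) */
      host_alloc(&res, id, sz);                    /* (2) */
      gem_setup(&bo, &res);                        /* (3) */
      printf("bo -> res %u, %llu bytes, initialized=%d\n",
             bo.res->id, (unsigned long long)bo.res->size,
             bo.res->initialized);
      return 0;
  }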

>
> > (2) and (3) can happen in any order.  If we swap (2) and (3), we get (B).
> >
> > The benefit of the extra (but amortized) step (0) is that (2) and (3)
> > happen in exactly that order.  A gem bo then always references an
> > initialized host resource struct, unlike in (B).
>
> Yep.  Which simplifies things on the guest side, and it is also a good
> place to send attach-backing or map requests to the host.
>
> > > Ideally I'd love to make that a host-side implementation detail.  The
> > > guest should not need to know whether the host uses a udmabuf or not.
> > > Especially for small resources it is better to just copy instead of
> > > changing mappings.
> > >
> > > The problem is that guest userspace depends on the host not seeing
> > > resource changes until it explicitly calls a TRANSFER ioctl.  To be
> > > exact, that is what I've been told; I don't know the mesa code base.
> > > Because of that we can't simply hide this from userspace and have the
> > > kernel enable it unconditionally whenever possible.
> > Yeah, I don't think we can change the semantics from shadowed to
> > direct access.  Mesa virgl does not use dumb buffers and should not be
> > affected.  It probably will break some gl=off setups.
>
> dumb buffers can be switched over; that isn't a problem.  For them it is
> perfectly fine that changes are instantly visible.
>
> The problematic ones are the mesa-created virgl resources.  If the host
> (and possibly the host gpu hardware too) can see any resource updates
> before mesa explicitly calls the TRANSFER ioctl, would that break the
> driver?  Or would mesa set the "you can use udmabufs" flag anyway on all
> resources?
Oh, mesa would break.  It relies on the existence of shadow buffers
currently.  If there were a "you must use udmabuf" flag, mesa could
theoretically use it and allocate staging buffers itself when needed.
But if the flag were just a hint, it would not be enough.
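
For reference, the shadowed model looks roughly like this from guest
userspace today.  This is only a sketch: it assumes an already created
and mmap'ed virgl resource, uses the existing virtgpu_drm.h uapi header,
and leaves out all error handling:

  #include <stdint.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include "virtgpu_drm.h"   /* kernel uapi header, shipped with libdrm */

  static void upload(int drm_fd, uint32_t bo_handle, void *map,
                     const void *data, uint32_t size)
  {
      /* guest-side write; under the shadowed semantics the host must
       * not observe this store yet */
      memcpy(map, data, size);

      /* only after this explicit transfer may the host (and the host
       * gpu) look at the new contents */
      struct drm_virtgpu_3d_transfer_to_host xfer = {
          .bo_handle = bo_handle,
          .box = { .w = size, .h = 1, .d = 1 },
      };
      ioctl(drm_fd, DRM_IOCTL_VIRTGPU_TRANSFER_TO_HOST, &xfer);
  }

With direct (udmabuf-backed) access the memcpy alone would already be
visible to the host, which is exactly what mesa does not expect today.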

There are also GL_MAP_PERSISTENT_BIT buffer objects.  Mesa does not
support them now.  Mesa could use the "you can use udmabuf" flag to
support them.  But that bit is usually set together with
GL_MAP_COHERENT_BIT, so mesa would still need the "you must use
udmabuf" flag.


>
> > > > > > It is for the COQOS hypervisor, which cannot remap allocations to the
> > > > > > guest-requested addresses, as I was told.  The guests must get the
> > > > > > offsets (into this dedicated heap) from the host.
> > > > >
> > > > > Hmm.  I'm wondering how it'll manage dynamic gpu allocations at
> > > > > all ...
> > > > It does not support injecting gpu allocations into the guests.
> > >
> > > We can't support hostmem then I guess (use transfer ioctls instead).
> > Right.  Direct access can still be achieved with a dedicated heap
> > shared by all guests.  The dedicated heap appears as a "hostmem heap"
> > in the guests, but it is not hostmem.
>
> Hmm.  I'm not sure that is a good trade-off security-wise.  A shared heap
> implies guests can look at other guests' resources ...
Yeah, that could be an issue.  Further discussions or clarifications
of the requirements can happen in

  https://gitlab.freedesktop.org/virgl/virglrenderer/issues/159


>
> cheers,
>   Gerd
>

