[virglrenderer-devel] multiprocess model and GL

Chia-I Wu olvaffe at gmail.com
Thu Jan 23 21:56:08 UTC 2020


On Thu, Jan 23, 2020 at 5:26 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
>
> On Wed, Jan 22, 2020 at 12:19:39PM -0800, Chia-I Wu wrote:
> > On Wed, Jan 22, 2020 at 6:51 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
> > >
> > >   Hi,
> > >
> > > > So we need to handle vk and gl resources in different ways, or figure
> > > > something which works for both.
> > >
> > > Saw your update @ freedesktop gitlab meanwhile.
> > >
> > > So, yes, allocating a resource id first should work.  Started a new
> > > branch, with just the blob API patch for now:
> > >   https://gitlab.freedesktop.org/virgl/drm-misc-next/commits/kraxel/memory-v3
> > >
> > > New DRM_IOCTL_VIRTGPU_RESOURCE_INIT_BLOB ioctl:  Basically allocates
> > > IDs (resource id, bo handle).  Also passes size.
> > Will this generate a virtio command?
>
> I don't think we need that.
>
> > A per-context process will send (resid, fd) to the main renderer
> > process in response to a future execbuffer.  The main renderer process
> > needs to know that resid is valid and the per-context process has the
> > permission.  Some form of VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE seems
> > needed.
>
> Hmm, good point, we might need that for security reasons so qemu/virgl
> knows which context created which resource and can properly apply sanity
> checks.
>
> My idea was that we simply use the execbuffer context which was used to
> create the resource.
I am not sure that could work, because the main process would not
know that such a resource exists.

I think a virtio command needs to be generated.  It does not
necessarily need to result in a vq kick.  Or it can request a range of
resource ids, similar to glGenTextures, such that not every
DRM_IOCTL_VIRTGPU_RESOURCE_INIT_BLOB needs to generate the command.
But for the moment, I am fine with keeping things simple by letting
each DRM_IOCTL_VIRTGPU_RESOURCE_INIT_BLOB generate a
VIRTIO_GPU_CMD_RESOURCE_INIT_BLOB.  Unless we have a strong feeling
that requiring two virtio commands to create a vk resource is too
many, we can worry about it after we have a prototype and some
numbers.

>
> > > Actually creating the resource should happen via execbuffer (gl/vk) or
> > > via DRM_IOCTL_VIRTGPU_RESOURCE_ALLOC_BLOB (dumb buffer).
> >
> > As noted in another mail, the use of execbuffer to allocate, export,
> > and import is a contract between the guest vk driver and the host vk
> > per-context processes.  I think gl should adopt a similar practice,
> > but it is not a hard requirement.  For example, gl can keep using the
> > classic resource create ioctl for allocation and rely on lazy
> > export/import.  It can then use resid directly.  Or perhaps gl can use
> > execbuffer to allocate, but export/import implicitly.
>
> Yes.  And that way vk/gl driver and host vk/gl also can negotiate any
> allocation parameters needed via execbuffer.  Kernel/qemu don't have to
> worry much.
>
> > I am fine if DRM_IOCTL_VIRTGPU_RESOURCE_ALLOC_BLOB becomes a flag of
> > DRM_IOCTL_VIRTGPU_RESOURCE_INIT_BLOB.
>
> Good idea.
>
> > Resources can have many kinds
> > of backing storage
> >
> >  - vk resources: INIT_BLOB and then execbuffer
>
> Yes.
>
> >  - gl resources: likely INIT_BLOB and then execbuffer,
>
> Yes for anything not possible using classic resources.
>
> > but also
> > classic resource create or other ways?
>
> I'd like to support classic create too, even mixing classic and new
> allocation, if we can pull it off (I think we can).  It is easiest for
> backward compatibility.
Sure.  As long as we have a good feeling about INIT_BLOB+execbuffer
for gl resources, I would like mesa virgl to keep using the classic
method while we work things out for vulkan.

> >  - dumb resources: INIT_BLOB and then ALLOC_BLOB
>
> Yes (or flag as discussed above).
>
> >  - gbm resources: could be a renderer detail, but could also use an
> > "easy" resource create ioctl?
>
> Hmm, good question.  A separate ioctl for that might be useful, but I
> guess in that case I'd prefer to keep ALLOC as separate operation so we
> can have ALLOC_DUMB and ALLOC_GBM ...
If we keep separate operations, I would prefer
ALLOC_TYPED(w,h,format,usage,...) and ALLOC_BLOB(size,usage) instead.

Both per-context processes and the main process can allocate.  I think
we want execbuffer for per-context process allocations.  The gbm
resources here, along with dumb, udmabuf, and classic resources,
belong to main-process allocations.  It would be nice to have

 - ALLOC_BLOB: allocate a blob storage in host main process
 - ALLOC_TYPED: allocate a typed storage in host main process
 - ALLOC_IOVEC: allocate a udmabuf from guest iovec in host main process

as the ways to do main-process allocations.  Then we don't care which
host driver the main process uses.

> >  - iovec-only resources: INIT_BLOB w/ flag to create shmem and attach?
>
> What do you mean here exactly?  Classic resource with guest shadow &
> access via TRANSFER commands?
It is VREND_RESOURCE_STORAGE_GUEST in virglrenderer.  Such resources
have no host storage and do not support transfers.  The host must
access them through the iovec, and that access is CPU-only.

>
> >  - iovec+udmabuf resources: INIT_BLOB w/ yet another flag to create a
> > udmabuf from iovec?
>
> i.e. shared resource in guest ram?  Yes, I think we need to specify
> whenever we want a classic or shared resource in INIT_BLOB.

I think INIT_BLOB can have flags to control whether guest shmem (or
emulated coherency) is needed.  This way,

 - INIT_BLOB+shmem+ALLOC_TYPED is equivalent to the classic RESOURCE_CREATE
 - INIT_BLOB+shmem+ALLOC_IOVEC is equivalent to RESOURCE_CREATE_SHARED
 - INIT_BLOB+shmem is VREND_RESOURCE_STORAGE_GUEST
 - INIT_BLOB+shmem+execbuffer is a non-coherent vk resource
 - INIT_BLOB+execbuffer is a coherent vk resource

>
> >  - memfd+udmabuf resources: INIT_BLOB and then ALLOC_BLOB?
> >    (memfd here is a host-owned heap that appears as vram rather than
> > system ram to the guests)
>
> That is hostmem (i.e. something we allocate on the host, then map into
> guest address space).  Not sure why you would use memfd for allocation
> here instead of gbm or gl or vk.
Yes, this is similar to hostmem to the guest, except that the host
main process does not allocate from any driver but from the memfd heap:

  https://gitlab.freedesktop.org/virgl/virglrenderer/issues/159

It is useful in an environment where the "host driver" runs inside
another guest.  The driver guest and other application guests can
share this memfd heap.  It is also possible for the host to use a
special heap that calls set_memory_wc and tells the guests to map WC
to avoid conflicting memory types.

> > > Right now the ioctls and structs have the bare minimum, I suspect we'll
> > > need some more fields added.  So the question is:  What info can get
> > > passed to the host via execbuffer?  I guess pretty much anything needed
> > > for object creation?  How should the kernel get that metadata (the ones
> > > it needs for mapping etc)?  Expect guest userspace pass that to the
> > > kernel?  Or better ask the host?
> > I believe everything the host driver API needs can be passed via execbuffer.
> >
> > As for metadata the kernel needs for mapping, I think it is better to
> > ask the host.  The kernel needs (offset-in-pci-bar, memory-type) from
> > the host at minimum.
>
> Offset will most likely work the other way around, i.e. the guest
> kernel manages the address space and asks the host to map the resource
> at a specific place.  Question is how we handle memory type.  The host
> could send that in the map request reply.  Alternatively
> userspace could tell the kernel using a INIT_BLOB flag (it should know
> what kind of memory it asks for ...).
Yeah, it makes more sense for the guest kernel to manage the address
space.  Thanks.  I mixed this up with the shared memfd heap above,
where I think guests need to get offsets from the host (maybe I am
wrong there too?).

The userspace should know the memory type when INIT_BLOB is called.
But I don't know whether the kernel should trust the userspace, which
could otherwise end up creating memory types that conflict with the
host's.


> cheers,
>   Gerd
>

