[virglrenderer-devel] multiprocess model and GL

Gerd Hoffmann kraxel at redhat.com
Thu Jan 23 13:25:53 UTC 2020


On Wed, Jan 22, 2020 at 12:19:39PM -0800, Chia-I Wu wrote:
> On Wed, Jan 22, 2020 at 6:51 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
> >
> >   Hi,
> >
> > > So we need to handle vk and gl resources in different ways, or figure
> > > something which works for both.
> >
> > Saw your update @ freedesktop gitlab meanwhile.
> >
> > So, yes, allocating a resource id first should work.  Started a new
> > branch, with just the blob API patch for now:
> >   https://gitlab.freedesktop.org/virgl/drm-misc-next/commits/kraxel/memory-v3
> >
> > New DRM_IOCTL_VIRTGPU_RESOURCE_INIT_BLOB ioctl:  Basically allocates
> > IDs (resource id, bo handle).  Also passes size.
> Will this generate a virtio command?

I don't think we need that.
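
For reference, a rough sketch of what the INIT_BLOB ioctl args could
look like (field names purely illustrative, nothing final):

    struct drm_virtgpu_resource_init_blob {
            __u64 size;        /* in:  size of the resource */
            __u32 flags;       /* in:  resource kind */
            __u32 bo_handle;   /* out: gem handle */
            __u32 res_handle;  /* out: virtio resource id */
            __u32 pad;
    };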

> A per-context process will send (resid, fd) to the main renderer
> process in response to a future execbuffer.  The main renderer process
> needs to know that resid is valid and the per-context process has the
> permission.  Some form of VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE seems
> needed.

Hmm, good point, we might need that for security reasons so qemu/virgl
knows which context created which resource and can properly apply sanity
checks.

My idea was to simply use the execbuffer context that was used to
create the resource.
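
(For reference, the existing VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE payload
is already minimal, the ctx id lives in the ctrl header:

    struct virtio_gpu_ctx_resource {
            struct virtio_gpu_ctrl_hdr hdr;   /* hdr.ctx_id */
            __le32 resource_id;
            __le32 padding;
    };

so we could probably just reuse that shape.)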

> > Actually creating the resource should happen via execbuffer (gl/vk) or
> > via DRM_IOCTL_VIRTGPU_RESOURCE_ALLOC_BLOB (dumb buffer).
> 
> As noted in another mail, the use of execbuffer to allocate, export,
> and import is a contract between the guest vk driver and the host vk
> per-context processes.  I think gl should adopt a similar practice,
> but it is not a hard requirement.  For example, gl can keep using the
> classic resource create ioctl for allocation and rely on lazy
> export/import.  It can then use resid directly.  Or perhaps gl can use
> execbuffer to allocate, but export/import implicitly.

Yes.  And that way the guest vk/gl driver and the host vk/gl side can
also negotiate any allocation parameters needed via execbuffer.
Kernel/qemu don't have to worry much about them.
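
Just to illustrate, the driver pair could define something like this in
the execbuffer stream (made up here, the kernel never interprets it):

    /* hypothetical, driver-defined command inside the execbuffer stream */
    struct drv_cmd_alloc_blob {
            uint32_t opcode;    /* e.g. DRV_CMD_ALLOC_BLOB */
            uint32_t res_id;    /* resource id from INIT_BLOB */
            uint64_t size;
            uint32_t heap;      /* driver-specific heap / memory type */
            uint32_t usage;     /* driver-specific usage flags */
    };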

> I am fine if DRM_IOCTL_VIRTGPU_RESOURCE_ALLOC_BLOB becomes a flag of
> DRM_IOCTL_VIRTGPU_RESOURCE_INIT_BLOB.

Good idea.

> Resources can have many kinds
> of backing storage
> 
>  - vk resources: INIT_BLOB and then execbuffer

Yes.

>  - gl resources: likely INIT_BLOB and then execbuffer,

Yes for anything not possible using classic resources.

> but also
> classic resource create or other ways?

I'd like to support classic create too, even mixing classic and new
allocation, if we can pull it off (I think we can).  That is easiest
for backward compatibility.

>  - dumb resources: INIT_BLOB and then ALLOC_BLOB

Yes (or flag as discussed above).

>  - gbm resources: could be a renderer detail, but could also use an
> "easy" resource create ioctl?

Hmm, good question.  A separate ioctl for that might be useful, but I
guess in that case I'd prefer to keep ALLOC as a separate operation so
we can have ALLOC_DUMB and ALLOC_GBM ...

>  - iovec-only resources: INIT_BLOB w/ flag to create shmem and attach?

What do you mean here exactly?  Classic resource with guest shadow &
access via TRANSFER commands?

>  - iovec+udmabuf resources: INIT_BLOB w/ yet another flag to create a
> udmabuf from iovec?

i.e. a shared resource in guest ram?  Yes, I think we need to specify
in INIT_BLOB whether we want a classic or a shared resource.

>  - memfd+udmabuf resources: INIT_BLOB and then ALLOC_BLOB?
>    (memfd here is a host-owned heap that appears as vram rather than
> system ram to the guests)

That is hostmem (i.e. something we allocate on the host, then map into
guest address space).  Not sure why you would use memfd for allocation
here instead of gbm or gl or vk.
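
Putting the cases above into INIT_BLOB flags, just for the sake of
discussion (names made up):

    #define VIRTGPU_BLOB_FLAG_CLASSIC  0x0001  /* guest shadow + TRANSFER commands */
    #define VIRTGPU_BLOB_FLAG_SHARED   0x0002  /* guest ram, udmabuf on the host side */
    #define VIRTGPU_BLOB_FLAG_HOSTMEM  0x0004  /* host allocation, mapped into the guest */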

> > Right now the ioctls and structs have the bare minimum, I suspect we'll
> > need some more fields added.  So the question is:  What info can get
> > passed to the host via execbuffer?  I guess pretty much anything needed
> > for object creation?  How should the kernel get the metadata it needs
> > for mapping etc.?  Should we expect guest userspace to pass that to
> > the kernel?  Or better ask the host?
> I believe everything the host driver API needs can be passed via execbuffer.
> 
> As for metadata the kernel needs for mapping, I think it is better to
> ask the host.  The kernel needs (offset-in-pci-bar, memory-type) from
> the host at minimum.

The offset will most likely work the other way around, i.e. the guest
kernel manages the address space and asks the host to map the resource
at a specific place.  The question is how we handle the memory type.
The host could send that in the reply to the map request.  Alternatively
userspace could tell the kernel using an INIT_BLOB flag (it should know
what kind of memory it asks for ...).
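
A rough sketch of how such a map request/reply could look (made-up
names, just to show where the offset and the memory type would go):

    /* hypothetical map request: guest picks the offset into the hostmem bar */
    struct virtio_gpu_map_blob {
            struct virtio_gpu_ctrl_hdr hdr;
            __le32 resource_id;
            __le32 padding;
            __le64 offset;
    };

    /* hypothetical reply: host reports the memory type (cached/wc/uncached) */
    struct virtio_gpu_resp_map_blob {
            struct virtio_gpu_ctrl_hdr hdr;
            __le32 map_info;
            __le32 padding;
    };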

cheers,
  Gerd


