[virglrenderer-devel] multiprocess model and GL

Chia-I Wu olvaffe at gmail.com
Fri Jan 24 18:08:39 UTC 2020


On Fri, Jan 24, 2020 at 1:29 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
>
> > Unless we have a strong feeling
> > that two virtio commands to create a vk resource are too many, we can
> > worry about it after we have a prototype and some numbers.
>
> Well, performance-wise the number of virtio commands doesn't matter
> much.  The number of context switches does.
Yeah, two virtio commands here imply two ioctls and two vq kicks.

>
> So, for INIT_BLOB, we might consider using an *array*.  So you can
> initialize a bunch of resources with a single ioctl.  Then pack all
> allocation commands into a single execbuffer and run that.  Voila,
> you can initialize a bunch of resources with only two system calls.
Userspace does not usually allocate buffers in batches, though.
The array version can be useful if userspace can create a bunch of
size-less resources first and provide the sizes later.
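
For illustration, a batched variant could look roughly like this (a
sketch only; the struct and field names are hypothetical, not a
proposed UAPI):

#include <linux/types.h>

/* Hypothetical batched INIT_BLOB argument: one ioctl (and one vq
 * kick) covers `count' resources.  The per-entry size could be left
 * out and provided later if size-less creation proves useful. */
struct drm_virtgpu_init_blob_list {
        __u64 entries;      /* in: pointer to an array of per-resource
                             *     init structs */
        __u32 count;        /* in: number of entries */
        __u32 pad;
};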

>
> > > Hmm, good question.  A separate ioctl for that might be useful, but I
> > > guess in that case I'd prefer to keep ALLOC as separate operation so we
> > > can have ALLOC_DUMB and ALLOC_GBM ...
> > If we keep separate operations, I would prefer to have
> > ALLOC_TYPED(w,h,format,usage,...) and ALLOC_BLOB(size,usage) instead.
> > Both per-context processes and the main process can allocate.
>
> Yes, we'll need both global (main process) and per-context resources.
>
> > I think we want execbuffer for per-context process allocations.
>
> Agree.
>
> > gbm resources belong here, while dumb resources, udmabuf resources,
> > and classic resources belong to main process allocations.
>
> Not sure about gbm.  But, yes, dumb + classic are global.
Yeah, gbm objects can use the init_blob+execbuffer trick as well.  The
host does not necessarily need to run the gbm context in a per-context
process.

That implies yet another kind of context.  I guess the kernel can
provide a SET_FEATURE (or INIT_CONTEXT) ioctl with an opaque `__u32
kind' parameter.  The parameter decides the wire format of execbuffer,
which the kernel does not care about.
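
As a sketch (hypothetical ioctl and field names, just to make the idea
concrete):

#include <linux/types.h>

/* Hypothetical INIT_CONTEXT argument: `kind' selects the execbuffer
 * wire format (virgl, vk, gbm, ...).  The kernel forwards it to the
 * host verbatim and never interprets the execbuffer payload itself. */
struct drm_virtgpu_init_context {
        __u32 kind;    /* in: opaque to the kernel */
        __u32 pad;
};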

>
> >  It would be nice to have
> >
> >  - ALLOC_BLOB: allocate a blob storage in host main process
> >  - ALLOC_TYPED: allocate a typed storage in host main process
> >  - ALLOC_IOVEC: allocate a udmabuf from guest iovec in host main process
>
> size is needed for INIT.
> storage too (the kernel needs to know whether it should allocate guest
> memory pages or reserve some address space in the vram bar).
It can be very useful if the kernel provides CREATE_BLOB_LIST, to
create a list of id-only gem bos first, and INIT_BLOB, to initialize
an individual gem bo.  CREATE_BLOB_LIST generates a single virtio
command, so the overhead is amortized across the list.  INIT_BLOB
generates the ATTACH_BACKING command only when guest shmem is
requested.
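
As a rough sketch (hypothetical UAPI, only to make the split concrete):

#include <linux/types.h>

/* Hypothetical CREATE_BLOB_LIST argument: allocate `count' id-only
 * gem bos in one ioctl.  A single virtio command carries all the
 * resource ids, so the command overhead is amortized over the list. */
struct drm_virtgpu_create_blob_list {
        __u64 bo_handles;   /* in: pointer to __u32[count], filled
                             *     with the new bo handles */
        __u32 count;        /* in */
        __u32 pad;
};

/* Hypothetical INIT_BLOB argument: give one bo its size and storage;
 * ATTACH_BACKING is generated only when guest shmem is requested. */
struct drm_virtgpu_init_blob {
        __u64 size;         /* in */
        __u32 bo_handle;    /* in: from CREATE_BLOB_LIST */
        __u32 flags;        /* in: e.g. whether guest shmem is needed */
};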

> ALLOC will trigger *host* allocations.
Exactly.

If we are ok with using execbuffer for gbm objects, we might as well
say that whenever an ALLOC_FOO is needed and requires any argument,
execbuffer should be used instead.  Then ALLOC_BLOB and ALLOC_IOVEC can
be merged into INIT_BLOB:

struct drm_virtgpu_resource_init_blob {
        __u64 size;        /* in */
        __u32 flags;       /* in */
        __u32 host_flags;  /* in, no host flag defined yet */
        __u32 host_alloc;  /* in, NONE, ALLOC_BLOB, or ALLOC_IOVEC */
        __u32 bo_handle;   /* out */
        __u32 res_handle;  /* out */
        __u32 pad;
};
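
The host_alloc values could then be simple defines, e.g. (hypothetical
names, matching the enumeration in the comment above):

#define VIRTGPU_BLOB_HOST_ALLOC_NONE   0  /* host allocation via execbuffer */
#define VIRTGPU_BLOB_HOST_ALLOC_BLOB   1  /* blob storage in host main process */
#define VIRTGPU_BLOB_HOST_ALLOC_IOVEC  2  /* udmabuf from the guest iovec */

With NONE, userspace would follow up with an execbuffer that carries
the driver-specific allocation command.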

>
> > as ways for main process allocations.  Then we don't care which host
> > driver the main process uses.
> >
> > > >  - iovec-only resources: INIT_BLOB w/ flag to create shmem and attach?
> > >
> > > What do you mean here exactly?  Classic resource with guest shadow &
> > > access via TRANSFER commands?
> > It is VREND_RESOURCE_STORAGE_GUEST in virglrenderer.  Such resources
> > have no host storage and do not support transfers.  The host must
> > access them through the iovec, and that access is CPU-only.
>
> Oh.  Didn't know resources without host storage can exist.  Doesn't
> virgl_renderer_resource_create() allocate host storage for each
> resource?
That one is an exception.  IMO, it is a hack but is needed for
performance.  v2 should support the use case properly.


> > > >  - iovec+udmabuf resources: INIT_BLOB w/ yet another flag to create a
> > > > udmabuf from iovec?
> > >
> > > i.e. shared resource in guest ram?  Yes, I think we need to specify
> > > whether we want a classic or shared resource in INIT_BLOB.
> >
> > I think INIT_BLOB can have flags to control whether the guest shmem is
> > needed (or emulated coherency is needed).  This way,
> >
> >  - INIT_BLOB+shmem+ALLOC_TYPED is equivalent to the classic RESOURCE_CREATE
> >  - INIT_BLOB+shmem+ALLOC_IOVEC is equivalent to RESOURCE_CREATE_SHARED
>
> BTW: I'm not sure anymore that RESOURCE_CREATE_SHARED is such a great plan.
Not exactly RESOURCE_CREATE_SHARED, but the mechanism can be used to
achieve something similar to DRM_IOCTL_I915_GEM_USERPTR in the future.

>
> I think it's more useful to integrate that into the new INIT/ALLOC BLOB
> ioctls/virtio commands.
Sure, as long as the host can distinguish them (to create new host
storage for dumb, or to create a udmabuf from the iovec).


>
> >  - INIT_BLOB+shmem is VREND_RESOURCE_STORAGE_GUEST
> >  - INIT_BLOB+shmem+execbuffer is non-coherent vk resources
> >  - INIT_BLOB+execbuffer is coherent vk resources
> >
> > > >  - memfd+udmabuf resources: INIT_BLOB and then ALLOC_BLOB?
> > > >    (memfd here is a host-owned heap that appears as vram rather than
> > > >    system ram to the guests)
> > >
> > > That is hostmem (i.e. something we allocate on the host, then map into
> > > guest address space).  Not sure why you would use memfd for allocation
> > > here instead of gbm or gl or vk.
> > Yes, this is similar to hostmem to the guest, except that the host
> > main process does not allocate from any driver but from the memfd heap.
>
> That's a host implementation detail which doesn't matter much for the
> virtio protocol.
>
> > > Offset will most likely work the other way around, i.e. the guest
> > > kernel manages the address space and asks the host to map the resource
> > > at a specific place.  Question is how we handle memory type.  The host
> > > could send that in the map request reply.  Alternatively
> > > userspace could tell the kernel using an INIT_BLOB flag (it should know
> > > what kind of memory it asks for ...).
> > Yeah, it makes more sense for the guest kernel to manage the address
> > space.  Thanks.  I mixed this up with the shared memfd heap above, where
> > I think guests need to get offsets from the host (maybe I am wrong there
> > too?)
>
> It's the host's job to manage the heap and the offsets; the guest doesn't
> need to know.  The guest just asks for this (host) resource to be mapped
> at that place (in guest address space).
It is for the COQOS hypervisor, which, as I was told, cannot remap
allocations to guest-requested addresses.  The guests must get the
offsets (into this dedicated heap) from the host.

>
> > The userspace should know the memory type when INIT_BLOB is called.
> > But I don't know if the kernel should trust the userspace and
> > potentially create conflicting memory types with the host.
>
> Yes, I think it's better to return that from the host.  That way we can
> make sure guest kernel and hypervisor agree even in case the host has
> to apply some quirks which guest userspace might not know about ...
Agreed.
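
For what it's worth, the map request/reply could then look roughly
like this (hypothetical wire structs, virtio_gpu_ctrl_hdr omitted;
the offset covers both the usual case, where the guest proposes the
address, and the COQOS-style case, where the host picks it):

#include <linux/types.h>

/* Hypothetical MAP_BLOB command: the guest proposes an offset into
 * the hostmem bar; hosts that manage the heap themselves ignore it. */
struct virtio_gpu_map_blob {
        __le32 resource_id;
        __le32 padding;
        __le64 offset;
};

/* Hypothetical reply: the host confirms (or overrides) the offset and
 * returns the memory type, so guest kernel and hypervisor agree even
 * when the host applies quirks guest userspace does not know about. */
struct virtio_gpu_resp_map_blob {
        __le64 offset;
        __le32 map_info;    /* e.g. cached / write-combined / uncached */
        __le32 padding;
};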
>
> cheers,
>   Gerd
>

