[virglrenderer-devel] multiprocess model and GL
Gerd Hoffmann
kraxel at redhat.com
Tue Jan 28 12:32:44 UTC 2020
> > > The userspace does not usually allocate buffers in batches though.
> > > The array version can be useful if the userspace can create a bunch of
> > > size-less resources first and provide the sizes later.
> >
> > Hmm, then we would need a separate ioctl to initialize the gem object,
> > i.e. the workflow would be:
> >
> > GETID_BLOB
> > alloc via execbuffer
> > INIT_BLOB(resource-id,size)
> >
> > One more context switch. On the other hand we don't need to know the
> > size beforehand, so we should be able to skip the extra metadata query
> > step then. Also INIT_BLOB can be done lazily. Not sure how much of a
> > win that would actually be in practice; we would avoid the ioctl only
> > when userspace creates exportable memory without actually exporting it.
>
> Yeah, the flow will be
>
> IOCTL_GETID_BLOB -> CMD_GETID_BLOB -> vq kick
I don't think we need a kick here. The guest kernel manages resource
ids, so we don't need a response from the host. We should probably give
a different name to the virtio command to make that clear.
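For illustration, the guest-side id allocation needs no host round trip
at all; a minimal sketch along the lines of the ida-based resource-id
handling the virtio-gpu driver already has (names borrowed from the
driver, not a proposal for the final API):

  /* Resource ids are handed out by the guest kernel from an ida, so
   * there is no host response to wait for and thus no vq kick. */
  static int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
                                        uint32_t *resid)
  {
          int handle = ida_alloc(&vgdev->resource_ida, GFP_KERNEL);

          if (handle < 0)
                  return handle;
          *resid = handle + 1; /* resource id 0 is reserved */
          return 0;
  }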
> IOCTL_EXECBUFFER -> CMD_SUBMIT_3D -> vq kick
> IOCTL_INIT_BLOB -> no cmd nor vq kick, unless it needs to attach the
> guest shmem backing
Yes, attach-backing would need a command, otherwise this will only
initialize the gem object.
For hostmem objects we might request a mapping here, or do it lazily
when userspace actually calls mmap().
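A sketch of the lazy variant, hooking the map request into the mmap
path (all names except the drm helpers are hypothetical, error handling
omitted):

  /* Sketch: request the host-side mapping only when userspace
   * actually mmap()s the hostmem object.  virtio_gpu_cmd_map_blob()
   * and bo->host_mapped are made up for illustration. */
  static int virtio_gpu_hostmem_mmap(struct drm_gem_object *obj,
                                     struct vm_area_struct *vma)
  {
          struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);

          if (!bo->host_mapped) {
                  virtio_gpu_cmd_map_blob(bo);
                  bo->host_mapped = true;
          }
          return drm_gem_mmap_obj(obj, obj->size, vma);
  }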
> I am in favor of the flow below if it can work
>
> IOCTL_RESOURCE_INIT_BLOB -> CMD_RESOURCE_INIT_BLOB -> no kick
> IOCTL_EXECBUFFER -> CMD_SUBMIT_3D -> vq kick
>
> There will always be one vm exit per resource allocation.
That will work too, but in that case IOCTL_RESOURCE_INIT_BLOB will set
up the gem object, so it needs to know the size.
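i.e. the ioctl argument would have to carry the size up front, along
these lines (struct layout is hypothetical, just to illustrate the
dependency):

  /* Sketch of a possible ioctl argument.  Because the gem object is
   * created right here, userspace must fill in size beforehand,
   * e.g. from a prior metadata query. */
  struct drm_virtgpu_resource_init_blob {
          __u32 res_handle;   /* out: gem handle */
          __u32 flags;        /* e.g. guest shmem vs. hostmem backing */
          __u64 size;         /* in: must be known at init time */
  };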
> > Not fully sure about that. I had the impression that Gurchetan Singh
> > wants gbm allocations in gl/vk contexts too.
> We have execbuffer to allocate and export gl or vk objects in the
> host. The guest gbm can use them.
>
> I think he is looking for a simpler path that the guest gbm can use,
> and the host can support using any (gl, vk, or gbm) driver. Having a
> third kind of context with a super simple wire format for execbuffer
> satisfies those needs.
Gurchetan Singh, care to clarify?
> > Yep, for the simple case we can do it in a single ioctl.
> >
> > Execbuffer allocations still need three: GETID + execbuffer + INIT.
> > Where GETID can have a LIST variant and execbuffer can probably do
> > multiple allocations in one go too. INIT needs to wait for execbuffer
> > to finish to get the object size.
> The userspace must know the size before it can allocate with
> execbuffer.
Why?
On the host side the gpu driver might impose restrictions on resources,
like requiring the stride to be a multiple of 64. So let's assume the
guest wants to allocate a resource with an odd width.
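Just to make that concrete, a minimal sketch (the 64-byte alignment is
only the example restriction from above, ALIGN being the usual kernel
round-up macro):

  /* Illustration only: a host driver with a 64-byte stride
   * requirement rounds up an odd width.  e.g. width 1001 at
   * 4 bytes/pixel gives 4004 bytes, rounded up to 4032. */
  stride = ALIGN(width * bytes_per_pixel, 64);
  size   = stride * height;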
Workflow (A):
(1) request resource id (no size).
(2) alloc via execbuffer, host returns actual stride and size.
(3) init resource + set up gem object (using the returned size).
Workflow (B):
(1) metadata query to the host to figure out what stride and size will be.
(2) initialize resource (with size), set up gem object.
(3) alloc via execbuffer.
IIRC Gurchetan Singh's plan for the metadata query is/was to have a
TRY_ALLOC flag, i.e. allow a dry-run allocation just to figure out size
+ stride (and possibly more for planar formats).
So I think we should be able to skip the dry-run and go straight for the
real allocation (workflow A).
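Seen from guest userspace, workflow (A) would then look roughly like
this (ioctl and struct names are as hypothetical as above; only the
execbuffer step kicks the virtqueue):

  /* Sketch of workflow (A); all names illustrative. */
  ioctl(fd, IOCTL_GETID_BLOB, &getid);   /* (1) reserve id, no size */
  ioctl(fd, IOCTL_EXECBUFFER, &exec);    /* (2) host allocates      */
  /* ... wait for the execbuffer to finish, read back the actual
   * stride and size chosen by the host ... */
  init.res_id = getid.res_id;
  init.size   = returned_size;           /* (3) set up gem object   */
  ioctl(fd, IOCTL_INIT_BLOB, &init);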
> > Why does the host need to know? The host does need (a) the list of
> > pages (sent via attach_backing), and (b) a bool which tells whether it
> > is ok to directly access the backing storage without explicit transfer
> > commands sent by the guest.
> (b) distinguishes the two, no?
>
> When (b) is true, the host creates a udmabuf from the iovec for direct
> access. When (b) is false, the host allocates a dumb buffer and
> requires transfers.
Ideally I'd love to make that a host-side implementation detail. The
guest should not need to know whether the host uses a udmabuf or not.
Especially for small resources it is better to just copy instead of
changing mappings.
Problem is that guest userspace depends on the host not seeing resource
changes until it explicitly calls a TRANSFER ioctl. At least that is
what I've been told; I don't know the mesa code base. Because of that
we can't simply hide the distinction from userspace and have the kernel
enable direct access unconditionally whenever possible.
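For reference, that contract is what the existing non-blob path
implements with the transfer ioctls; roughly (using the existing
virtio-gpu uapi, details elided):

  /* The host must not see the guest's writes before the explicit
   * transfer; guest userspace relies on exactly that. */
  memcpy(map, pixels, size);                /* guest-side write */
  struct drm_virtgpu_3d_transfer_to_host xfer = {
          .bo_handle = handle,
          .box = { .w = width, .h = height, .d = 1 },
  };
  ioctl(fd, DRM_IOCTL_VIRTGPU_TRANSFER_TO_HOST, &xfer);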
> > > It is for the COQOS hypervisor, which cannot remap allocations to
> > > the guest-requested addresses, as I was told. The guests must get
> > > the offsets (into this dedicated heap) from the host.
> >
> > Hmm. I'm wondering how it'll manage dynamic gpu allocations at
> > all ...
> It does not support injecting gpu allocations into the guests.
We can't support hostmem then I guess (use transfer ioctls instead).
cheers,
Gerd