[virglrenderer-devel] multiprocess model and GL

Gurchetan Singh gurchetansingh at chromium.org
Tue Feb 4 03:19:55 UTC 2020


On Fri, Jan 31, 2020 at 7:04 PM Chia-I Wu <olvaffe at gmail.com> wrote:
> > So I wouldn't call the resource create v2 protocol "execbuffer".
> I would say virgl_hw.h is part of virgl_protocol.h.  Together,
> they define the virgl protocol.  If you look at the various _FORMAT
> fields in virgl_protocol.h, they refer to the formats defined in
> virgl_hw.h.
>

Disagree here -- virgl_hw.h is the source of truth.  It's possible to
create a resource without including virgl_protocol.h, and we do it all
the time.

For example, without VIRGL_BIND_STAGING or
virgl_supported_format_mask, the guest can't know which types of
resources can be created.

Without VIRGL_CAP_COMPUTE_SHADER, the guest can't know whether
something like VIRGL_SET_SHADER_BUFFER_* is supported.  All of these
are defined in virgl_hw.h.
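
To illustrate the point (a rough sketch only -- the struct and flag
names below come from virgl_hw.h, but the helper functions are mine):

#include <stdbool.h>
#include <stdint.h>
#include "virgl_hw.h"

/* One bit per VIRGL_FORMAT_*, packed into 32-bit words. */
static bool format_supported(const struct virgl_supported_format_mask *mask,
                             uint32_t format)
{
    return mask->bitmask[format / 32] & (1u << (format % 32));
}

/* VIRGL_CAP_COMPUTE_SHADER gates the compute-related commands. */
static bool compute_supported(const union virgl_caps *caps)
{
    return caps->v2.capability_bits & VIRGL_CAP_COMPUTE_SHADER;
}

None of this needs virgl_protocol.h.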

> I think a fairer way to put this is that some of the virgl protocol
> details are leaked by drm_virtgpu_resource_create.  IMO, a reasonable
> way to plug the leak is to make sure the virgl protocol details are
> contained in EXECBUFFER in resource create v2.

I would put it this way: both resource creation and the execbuffer
protocol are defined by the capabilities, and the virglrenderer
protocol is the superset of all capability-defined protocols.

Honestly, the existing interface isn't too bad.  We just need to extend it ...

[snip]
> I think that gives a very pure resource model and resource API, but
> for no practical benefit other than pureness.  Keeping
> VIRTGPU_RESOURCE_FLAG_ALLOC_MASK and adding
> VIRTGPU_RESOURCE_FLAG_ALLOC_CLASSIC (and adding
> drm_virtgpu_resource_create to the union) seem like a good compromise
> to me.  The guest userspace can use the classic alloc type to allocate
> a "classic" host storage.  The host can choose to allocate using gbm
> or other apis.

A pure resource model and API is a good thing!  Honestly, I'd prefer
we not bikeshed over style too much.  We have an implementation that
doesn't require something like ALLOC_CLASSIC but solves the same
problem, so why not go with that?

[snip]
> The main thing that model misses is that VK renderers want to make
> local allocations when they are not to be mapped nor shared.  If that
> model did allow local allocations, would VK renderers use objid to
> identify local allocations and use resid to identify resources?

Local allocations are fine.  Allocating via
DRM_IOCTL_VIRTGPU_EXECBUFFER and then, when needed, calling
DRM_VIRTGPU_RESOURCE_CREATE_BLOB with something like
VIRTGPU_STRUCTURE_TYPE_BIND_OBJECT_ID in the variadic arguments should
work, no?
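
Something like the following (a sketch: drm_virtgpu_execbuffer is
existing uAPI, while RESOURCE_CREATE_BLOB and
VIRTGPU_STRUCTURE_TYPE_BIND_OBJECT_ID are still proposal-stage):

#include <stdint.h>
#include <sys/ioctl.h>
#include <virtgpu_drm.h>  /* from libdrm */

static void alloc_local_then_bind(int drm_fd)
{
    /* Step 1: a context-local allocation via execbuffer.  The command
     * stream is opaque to the kernel; here it would encode a
     * context-specific "allocate object <objid>" command. */
    uint32_t cmd[8] = { 0 /* ... context-specific encoding ... */ };
    struct drm_virtgpu_execbuffer exec = {
        .command = (uintptr_t)cmd,
        .size = sizeof(cmd),
    };
    ioctl(drm_fd, DRM_IOCTL_VIRTGPU_EXECBUFFER, &exec);

    /* Step 2, only when a mapping or cross-context share is actually
     * needed: create a blob resource bound to the object above by
     * passing VIRTGPU_STRUCTURE_TYPE_BIND_OBJECT_ID (plus the object
     * id) in the variadic arguments of the proposed
     * DRM_VIRTGPU_RESOURCE_CREATE_BLOB ioctl.  That struct is not in
     * any shipped header yet, so it is left as a comment here. */
}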

[snip]
> That model also encourages implicit exports/imports.  It does not want the
> guest to know that the centralized allocator might export and the
> renderers might import.  But when the renderers do not have enough
> info to import, or when the renderers want to associate an objid with
> a resid, they rely on the guest to explicitly import again.

There are many ways to import/export.  I pointed out one way, using

VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE
DMA_BUF_SET_NAME / DMA_BUF_GET_NAME
MODIFY_INCOMING_IMPORT / MODIFY_INCOMING_EXPORT

earlier in this thread.  I think we should hold off on that discussion,
because the proposed model doesn't necessarily require implicit
export/import, though IMO it's not a huge issue either way.

[snip]
> I think Gurchetan started from that model and fixed all the issues.
> As a result, the allocation parameters are in a new opaque args[]
> rather than in the existing opaque cmd[].  I like cmd[] more because
>
>  - I use execbuffer in VK renderers to make local allocations and
> import/export resources already, and
>  - I did not start from that model; instead I view execbuffer as the
> way to pass opaque parameters

It makes sense to continue building on the current model, where the
execbuffer protocol is whatever is submitted through
DRM_IOCTL_VIRTGPU_EXECBUFFER, and execbuffer is not necessarily the
only way to submit opaque arguments.  In virglrenderer, the code paths
for resource creation and command buffer submission are a bit
different, so it's nice to keep that separation going forward.

Note: allocating memory via DRM_IOCTL_VIRTGPU_EXECBUFFER and creating
a virtio-gpu resource out of it later is fine.

[snip]
> It is also possible to treat the non-centralized allocator specially
> and treat it as a global entity.  This allows the userspace to open
> the DRI device once, but talk to its corresponding renderer as well as
> the allocator at the same time.  I am not sure if this is useful to
> the guest.  How to pass the opaque allocation parameters is also to be
> resolved.

Are you concerned about protocol collision?  That is, one protocol for
EXECBUFFER and one protocol for resource creation?

I think it's very solvable, using the capabilities.  Let's say capset
3 (due to the CAPSET_QUERY_FIX) defines the proposed resource creation
protocol.

capset 3, version 1 defines it for context type 0
capset 3, version 2 defines it for context type 1
capset 3, version 3 defines it for context type 2

Similarly, the EXECBUFFER protocol for a specific context type i can
be defined by capset 4 + i.  There are many ways to do it ... the
method a context uses to allocate via execbuffer can indeed be the
same method it uses to allocate via resource create, depending on the
context type.
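
For illustration, a hypothetical guest-side lookup under that
numbering (the capset ids are this thread's example, not an
established ABI):

#include <stdint.h>

/* Hypothetical mapping from context type to the capsets that define
 * its protocols, following the example above: capset 3 (versioned per
 * context type) for resource creation, capset 4 + i for context type
 * i's execbuffer protocol. */
struct protocol_capsets {
    uint32_t resource_create_capset;  /* always 3 in this example */
    uint32_t resource_create_version; /* context_type + 1 */
    uint32_t execbuffer_capset;       /* 4 + context_type */
};

static struct protocol_capsets
capsets_for_context(uint32_t context_type)
{
    return (struct protocol_capsets){
        .resource_create_capset  = 3,
        .resource_create_version = context_type + 1,
        .execbuffer_capset       = 4 + context_type,
    };
}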

Again, I want to stress that the host capabilities define any given
protocol, not the context type itself.  The context type only
determines which capabilities are fetched.

