[virglrenderer-devel] multiprocess model and GL
Chia-I Wu
olvaffe at gmail.com
Fri Jan 31 20:00:06 UTC 2020
On Fri, Jan 31, 2020 at 2:41 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
>
> Hi,
>
> memory-v4 branch pushed.
>
> Went with the single-ioctl approach. Renamed back to CREATE as we don't
> have separate "allocate resource id" and "initialize resource" steps any
> more.
>
> So, virgl/vulkan resources would be created via execbuffer, get an
> object id attached to them so they can be referenced, then we'll create
> a resource from that. The single ioctl will generate multiple
> virtio commands.
Does it support cmd_size==0 and object_id!=0? That is useful for
cases where execbuffer and resource_create happen at different times.
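For illustration, the two-step flow could look like this (the ioctl
and field names below are hypothetical, loosely following the
single-ioctl proposal):

    /* step 1: EXECBUFFER creates the virgl/vulkan object and tags it
     * with object id 42 in the command stream */

    /* step 2: later, wrap that object in a resource without
     * submitting any commands; fd is the opened drm device */
    struct drm_virtgpu_resource_create_v2 args = { 0 };
    args.flags = VIRTGPU_RESOURCE_FLAG_ALLOC_EXECBUFFER;
    args.object_id = 42; /* references the existing object */
    args.cmd = 0;
    args.cmd_size = 0;   /* no command payload this time */
    drmIoctl(fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE_V2, &args);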
>
> Dumb resources will be created with the same ioctl, just with the DUMB
> instead of the EXECBUFFER flag set. The three execbuffer fields will be
> unused.
I think the three execbuffer fields can be in a union:
    union {
            struct {
                    the-three-fields;
            } execbuffer;
            __u32 pads[16];
    };
The alloc type decides which of the fields, if any, is used. This
gives us some leeway when a future alloc type needs something else.
FWIW, pads[16] is chosen such that it can fit a
drm_virtgpu_resource_create in case we decide to provide a migration
path for resource create v1 users.
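A minimal sketch of the resulting layout (struct and field names are
illustrative, not taken from the branch):

    struct drm_virtgpu_resource_create_v2 {
            __u32 resource_id;
            __u32 flags;  /* alloc type: DUMB, EXECBUFFER, ... */
            __u64 size;
            union {
                    struct {
                            __u64 cmd;       /* userspace pointer */
                            __u32 cmd_size;
                            __u32 object_id;
                    } execbuffer;
                    __u32 pads[16]; /* 64 bytes, enough to hold a
                                     * drm_virtgpu_resource_create */
            };
    };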
>
> To be discussed:
>
> (1) Do we want/need both VIRTGPU_RESOURCE_FLAG_STORAGE_SHARED_ALLOW and
> VIRTGPU_RESOURCE_FLAG_STORAGE_SHARED_REQUIRE?
The host always has direct access to the guest shmem. I can see
three cases where the host accesses the shmem:
- transfers data into and out of the guest shmem
- direct access in CPU domain (CPU access or GPU access w/ userptr)
- direct access in device domain (GPU access w/ udmabuf)
I think the information passed to the host can be:
- VIRTGPU_RESOURCE_FLAG_STORAGE_SHADOW
- VIRTGPU_RESOURCE_FLAG_STORAGE_SHARED_CPU
- VIRTGPU_RESOURCE_FLAG_STORAGE_SHARED_DEVICE
VIRTGPU_RESOURCE_FLAG_STORAGE_SHADOW says the host can access the
shmem only in response to transfer commands. It is not very useful
and can probably be removed.
VIRTGPU_RESOURCE_FLAG_STORAGE_SHARED_CPU says the host can and must
access the shmem in the CPU domain. The kernel always maps the shmem
cached, and userspace knows it is coherent.
VIRTGPU_RESOURCE_FLAG_STORAGE_SHARED_DEVICE says the host can and must
access the shmem in the device domain. Userspace can ask the kernel
for a coherent mapping or an incoherent one. A coherent mapping can
be WC or WB depending on the platform. With an incoherent mapping,
userspace can use transfers to flush/invalidate the CPU cache.
I wonder if we have a solid use case for
VIRTGPU_RESOURCE_FLAG_STORAGE_SHARED_DEVICE. If not, we can leave it
out for now.
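Summarized as flag definitions (the values are placeholders):

    #define VIRTGPU_RESOURCE_FLAG_STORAGE_SHADOW        0x1
            /* host touches the shmem only for transfers */
    #define VIRTGPU_RESOURCE_FLAG_STORAGE_SHARED_CPU    0x2
            /* host access in the CPU domain; guest maps cached and
             * the mapping is always coherent */
    #define VIRTGPU_RESOURCE_FLAG_STORAGE_SHARED_DEVICE 0x4
            /* host access in the device domain; guest picks a
             * coherent (WC/WB) or incoherent mapping plus transfers */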
>
> (2) How to integrate gbm/gralloc allocations best? Have a
> VIRTGPU_RESOURCE_FLAG_ALLOC_GBM, then pass args in the execbuffer?
> Or better have a separate RESOURCE_CREATE_GBM ioctl/command and
> define everything we need in the virtio spec?
Instead of RESOURCE_CREATE_GBM, I would replace the three execbuffer
fields with a union, and add VIRTGPU_RESOURCE_FLAG_ALLOC_GBM and a new
field to the union.
If we were to pass args in the execbuffer, what would be wrong with a
generic gpu context type that is allocation-only?
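Sketch of the extended union (the gbm fields are only an example of
what the new member might carry):

    union {
            struct {
                    __u64 cmd;
                    __u32 cmd_size;
                    __u32 object_id;
            } execbuffer; /* VIRTGPU_RESOURCE_FLAG_ALLOC_EXECBUFFER */
            struct {
                    __u32 format; /* fourcc */
                    __u32 width;
                    __u32 height;
                    __u32 usage;  /* gbm/gralloc usage flags */
            } gbm;        /* VIRTGPU_RESOURCE_FLAG_ALLOC_GBM */
            __u32 pads[16];
    };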
> cheers,
> Gerd
>