[PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2
Gurchetan Singh
gurchetansingh at chromium.org
Fri Apr 12 01:36:15 UTC 2019
On Wed, Apr 10, 2019 at 10:03 PM Gerd Hoffmann <kraxel at redhat.com> wrote:
>
> > > +/* VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 */
> > > +struct virtio_gpu_cmd_resource_create_v2 {
> > > + struct virtio_gpu_ctrl_hdr hdr;
> > > + __le32 resource_id;
> > > + __le32 format;
> > > + __le32 width;
> > > + __le32 height;
> > > + /* 3d only */
> > > + __le32 target;
> > > + __le32 bind;
> > > + __le32 depth;
> > > + __le32 array_size;
> > > + __le32 last_level;
> > > + __le32 nr_samples;
> > > + __le32 flags;
> > > +};
> >
> >
> > I assume this is always backed by some host side allocation, without any
> > guest side pages associated with it?
>
> No. It is not backed at all yet. Workflow would be like this:
>
> (1) VIRTIO_GPU_CMD_RESOURCE_CREATE_V2
> (2) VIRTIO_GPU_CMD_MEMORY_CREATE (see patch 2)
> (3) VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH (see patch 2)
Thanks for the clarification.
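For my own notes, I tried to sketch the three steps as wire structs. The
memory_create/attach layouts below are my guesses from your description,
not the actual patch 2 definitions:

struct virtio_gpu_cmd_memory_create {           /* hypothetical sketch */
        struct virtio_gpu_ctrl_hdr hdr;
        __le32 memory_id;
        __le32 flags;      /* allocation type; guest pages only for now */
        __le64 size;
        /* for guest allocations: followed by the usual page entries */
};

struct virtio_gpu_cmd_resource_memory_attach {  /* hypothetical sketch */
        struct virtio_gpu_ctrl_hdr hdr;
        __le32 resource_id;
        __le32 memory_id;
        __le64 offsets[4]; /* per-plane offsets into the memory object */
};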
>
> You could also create a larger pool with VIRTIO_GPU_CMD_MEMORY_CREATE,
> then go attach multiple resources to it.
>
> > If so, do we want the option for the guest to allocate?
>
> Allocation options are handled by VIRTIO_GPU_CMD_MEMORY_CREATE
> (initially guest allocated only, i.e. what virtio-gpu supports today,
> the plan is to add other allocation types later on).
You want to cover Vulkan, host-allocated dma-bufs, and guest-allocated
dma-bufs with this, correct? Let me know if it's a non-goal :-)
If so, we might want to distinguish between memory types (kind of like
memoryTypeIndex in Vulkan). [Assuming memory_id is like resource_id]
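Something along these lines is what I have in mind for advertising memory
types (completely made-up names, just to illustrate the idea):

/* hypothetical: array of these returned via a capset or a new
 * VIRTIO_GPU_CMD_GET_MEMORY_TYPES query */
struct virtio_gpu_memory_type {
        __le32 flags;      /* e.g. HOST_VISIBLE / GUEST_ALLOCATABLE / CACHED */
        __le32 heap_index;
};

virtio_gpu_cmd_memory_create would then take a memory type index, much
like vkAllocateMemory takes memoryTypeIndex.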
Here are some host-side APIs Chromium might want to use with this workflow:
1) Vulkan seems the most straightforward (rough host-side sketch after the list)
virtio_gpu_cmd_memory_create --> create kernel data structure,
vkAllocateMemory on the host or import guest memory into Vulkan,
depending on the memory type
virtio_gpu_cmd_resource_create_v2 --> vkCreateImage +
vkGetImageMemoryRequirements on host
virtio_gpu_cmd_resource_attach_memory --> vkBindImageMemory on host
2) With a guest-allocated dma-buf using some new allocation library,
virtio_gpu_cmd_resource_create_v2 --> host returns metadata describing
the optimal allocation
virtio_gpu_cmd_memory_create --> allocate guest memory pages, since
it's a guest memory type
virtio_gpu_cmd_resource_attach_memory --> associate the guest pages with
the resource in the kernel, send iovecs to the host for bookkeeping
3) With gbm it's a little trickier,
virtio_gpu_cmd_resource_create_v2 --> gbm_bo_create_with_modifiers,
get metadata in return
virtio_gpu_cmd_memory_create --> create a kernel data structure, but
don't allocate pages; nothing happens on the host
virtio_gpu_cmd_resource_attach_memory --> associate the memory structure
with the resource in the kernel; nothing happens on the host
Is this what you have in mind?
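To make (1) concrete, here's roughly what I'd expect the host to do. The
Vulkan entry points are real, but the format/bind translation, error
handling, and the device/memory bookkeeping are all hand-waved:

#include <vulkan/vulkan.h>

/* Host side of VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 (sketch) */
static VkImage resource_create_v2(VkDevice device, VkFormat format,
                                  uint32_t width, uint32_t height,
                                  uint32_t array_size, uint32_t last_level,
                                  VkImageUsageFlags usage,
                                  VkMemoryRequirements *reqs)
{
        VkImageCreateInfo info = {
                .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
                .imageType = VK_IMAGE_TYPE_2D,
                .format = format,             /* translated from virtio-gpu */
                .extent = { width, height, 1 },
                .mipLevels = last_level + 1,
                .arrayLayers = array_size,
                .samples = VK_SAMPLE_COUNT_1_BIT,
                .tiling = VK_IMAGE_TILING_OPTIMAL,
                .usage = usage,               /* translated from bind flags */
                .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
                .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
        };
        VkImage image;

        vkCreateImage(device, &info, NULL, &image);
        /* reqs->size / alignment / memoryTypeBits would feed
         * VIRTIO_GPU_RESP_OK_RESOURCE_INFO back to the guest */
        vkGetImageMemoryRequirements(device, image, reqs);
        return image;
}

virtio_gpu_cmd_resource_attach_memory would then map to
vkBindImageMemory(device, image, memory, offset).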
>
> > > +/* VIRTIO_GPU_RESP_OK_RESOURCE_INFO */
> > > +struct virtio_gpu_resp_resource_info {
> > > + struct virtio_gpu_ctrl_hdr hdr;
> > > + __le32 stride[4];
> > > + __le32 size[4];
> > > +};
> >
> > offsets[4] needed too
>
> That is in VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH ...
I assume the offsets aren't returned by
VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH. How does the guest know which
offsets in memory will be compatible for sharing with display,
camera, etc.?
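If the offsets only ever flow guest -> host in ATTACH, maybe the create_v2
response should also carry the host's required offsets, e.g. (just a
suggestion, not in the patch):

/* hypothetical extension of VIRTIO_GPU_RESP_OK_RESOURCE_INFO */
struct virtio_gpu_resp_resource_info {
        struct virtio_gpu_ctrl_hdr hdr;
        __le32 stride[4];
        __le32 size[4];
        __le32 offsets[4]; /* host-required per-plane offsets */
};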
Also, do you want to cover the case where the resource is backed by
three separate memory regions (VK_IMAGE_CREATE_DISJOINT_BIT)? If so,
we might want to make resource_create_v2 more opaque like
VkImageMemoryRequirementsInfo2.
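For reference, the disjoint case in Vulkan is queried per plane like this
(real Vulkan 1.1 API; device/image are placeholders):

VkImagePlaneMemoryRequirementsInfo plane_info = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_PLANE_MEMORY_REQUIREMENTS_INFO,
        .planeAspect = VK_IMAGE_ASPECT_PLANE_0_BIT,
};
VkImageMemoryRequirementsInfo2 info = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_REQUIREMENTS_INFO_2,
        .pNext = &plane_info,
        .image = image,
};
VkMemoryRequirements2 reqs = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2,
};
vkGetImageMemoryRequirements2(device, &info, &reqs);
/* repeat for PLANE_1 / PLANE_2; each plane can then be bound to a
 * different VkDeviceMemory via vkBindImageMemory2 +
 * VkBindImagePlaneMemoryInfo */

Each plane could then become a separate RESOURCE_MEMORY_ATTACH.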
>
> cheers,
> Gerd
>