[virglrenderer-devel] coherent memory access for virgl

Gurchetan Singh gurchetansingh at chromium.org
Tue Sep 25 03:49:54 UTC 2018


On Tue, Sep 4, 2018 at 5:16 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
>
>   Hi,
>
> > > Also: how frequently will these objects be allocated/freed?
> > >
> > > I suspect GL_ARB_buffer_storage not so often.
> > >
> > > But vulkan?  It probably wants pretty much everything allocated that
> > > way.  I expect it to be designed with the memory management capabilities
> > > of modern GPUs in mind.  Don't know much about vulkan though ...
> >
> > It's a good practice to suballocate because allocation is considered
> > expensive, but I guess that doesn't automatically mean that we can make
> > allocations 10x slower...
>
> So the mesa driver will ask for larger chunks and then hand out small
> allocations from that, right?  Would something like this be enough?
>
>     struct virtio_gpu_resource_create_coherent {
>         struct virtio_gpu_ctrl_hdr hdr;
>         uint32_t resource_id;
>         uint32_t flags;
>         uint64_t offset; /* into pci bar */
>         uint64_t size;
>     };
>
> Or would we need the individual allocations to be visible at the
> virtio level, so we can tell virglrenderer, with something like this:
>
>     struct virtio_gpu_pool_create_coherent {
>         struct virtio_gpu_ctrl_hdr hdr;
>         uint32_t pool_id;
>         uint32_t flags;
>         uint64_t offset; /* into pci bar */
>         uint64_t size;
>     };
>
>     struct virtio_gpu_resource_create_coherent {
>         struct virtio_gpu_ctrl_hdr hdr;
>         uint32_t pool_id;
>         uint32_t resource_id;
>         uint64_t offset; /* into pool */
>         uint64_t size;
>     };
>
> Who will do the actual allocations?  I expect we need new virglrenderer
> functions for that?

The decision to back memory via iovecs or host memory is up to the
VMM.  The reverse may then also be true: virglrenderer may need to be
notified about the type of memory backing a resource, so that copies
like read_transfer_data / write_transfer_data can be avoided.

A related question: are we also going to expose host memory to the
guest for the non-coherent cases, i.e. without GL_MAP_COHERENT_BIT /
VK_MEMORY_PROPERTY_HOST_COHERENT_BIT?

It could help in a wide variety of use cases:

1) Upload would be less costly for the following cases:

GL_TRANSFORM_FEEDBACK_BUFFER
GL_ARRAY_BUFFER (frequently changes during games)
GL_ELEMENT_ARRAY_BUFFER (frequently changes during games)
GL_TEXTURE_BUFFER
GL_UNIFORM_BUFFER

2) On certain platforms and in certain cases, we can also expose render
target and texture memory directly (via gbm_bo_map(..), which on
integrated GPUs often avoids a copy, combined with GLeglImageOES).




>
> > > I can look into this once I'm done with the vacation backlog.
> >
> > Do you want any help with prototyping this?
>
> So far I have just a single patch adding a (not yet used) pci bar.
>
> https://git.kraxel.org/cgit/qemu/log/?h=sirius/virtio-gpu-hostmap
>
> Before continuing I need a clearer picture of how the allocation
> workflow is going to work (see questions above), for both
> GL_ARB_buffer_storage and vulkan.  So, yes, help with that is welcome.
>
> cheers,
>   Gerd
>
> _______________________________________________
> virglrenderer-devel mailing list
> virglrenderer-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/virglrenderer-devel
