[virglrenderer-devel] coherent memory access for virgl

Gurchetan Singh gurchetansingh at chromium.org
Tue Oct 16 03:53:12 UTC 2018


On Mon, Oct 15, 2018 at 12:46 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
>
> On Fri, Oct 12, 2018 at 08:54:52PM -0700, Gurchetan Singh wrote:
> > > Side note: I also have some experimental code for vga, in that case the
> > > vga emulation can take care to allocate the vga pci memory bar backing
> > > storage using memfd so no specific configuration is needed in that case.
> >
> > I would be interested in seeing this experimental code, since crosvm
> > has limited memfd support ATM and this seems the easiest way to get
> > udmabuf running..
>
> https://git.kraxel.org/cgit/qemu/commit/?h=sirius/udmabuf&id=5b60a70ac580add080bd0f69938d063cddf005ef
>
> It will only use the dmabuf to pass it to spice-client for (full
> framebuffer) display.  There is no easy way to use that for gem
> objects.

Okay, so that method can't be used to back objects created with
DRM_VIRTGPU_RESOURCE_CREATE.  I want to back all multi-level textures
with memfd pages, so we can use udmabuf and eliminate a copy.
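
For reference, here's the udmabuf flow I have in mind -- an untested
sketch, assuming the host kernel exposes /dev/udmabuf (function name is
mine, error handling omitted):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/udmabuf.h>

    /* Wrap the pages of a memfd into a dma-buf via /dev/udmabuf. */
    static int dmabuf_from_memfd(size_t size)
    {
        int memfd = memfd_create("guest-pages", MFD_ALLOW_SEALING);
        ftruncate(memfd, size);
        /* udmabuf requires the memfd to be sealed against shrinking. */
        fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

        struct udmabuf_create create = {
            .memfd  = memfd,
            .flags  = UDMABUF_FLAGS_CLOEXEC,
            .offset = 0,          /* must be page-aligned */
            .size   = size,       /* must be page-aligned */
        };
        int devfd = open("/dev/udmabuf", O_RDWR);
        return ioctl(devfd, UDMABUF_CREATE, &create); /* dma-buf fd */
    }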

Is it possible to add another TTM region in the kernel driver backed by
memfd pages?  Instead of initializing the region with TTM_PL_VRAM (as
done in [1]), we could initialize it with TTM_PL_FLAG_PRIV.

If it is possible, is there any drawback with that method compared to
backing all guest RAM with memfd?

[1] https://git.kraxel.org/cgit/linux/tree/drivers/gpu/drm/virtio/virtgpu_ttm.c?h=drm-virtio-ttm&id=9b154e4c79c0a42155831ffa86de8ba017637601#n415
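
Roughly what I have in mind, as a hand-wavy sketch against the
drm-virtio-ttm branch (memfd_region_size is made up, and the
memfd-backed allocator behind the region is the open question):

    /* In virtio_gpu_ttm_init(), next to the TTM_PL_VRAM setup in [1]: */
    r = ttm_bo_init_mm(&vgdev->mman.bdev, TTM_PL_PRIV,
                       memfd_region_size >> PAGE_SHIFT);
    if (r) {
        DRM_ERROR("Failed initializing memfd heap.\n");
        return r;
    }

    /* ... plus a placement that targets the new region: */
    static const struct ttm_place virtio_gpu_memfd_place = {
        .fpfn  = 0,
        .lpfn  = 0,
        .flags = TTM_PL_FLAG_PRIV | TTM_PL_FLAG_CACHED,
    };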

> > In the host map code, TTM considers the PCI region as a VRAM pool.
> > Can there be another, memfd-related pool as well, so all virtgpu guest
> > allocations are memfd-backed?
>
> I don't think ttm supports multiple vram pools for a device.
>
> > > > I'm fine with both approaches, as long as we allocate the optimal
> > > > buffer for a given scenario.
> > >
> > > Is there anything else (beside GBM_BO_USE_*) the host should know about
> > > a resource to pick the best modifier?
> >
> > Not sure at this moment.  My concerns:
> >
> > Is GBM_BO_USE_SCANOUT sufficient?  Say QEMU is fullscreen, and the
> > device supports a compressed primary plane and a linear overlay
> > plane.  Say you have two buffers in the emulator -- one full screen
> > and one window.  Both buffers specify GBM_BO_USE_SCANOUT.  Do we use
> > heuristics (e.g. buffer size) to decide how to allocate?
>
> I wouldn't worry too much about this.  One advantage we have when the
> guest is not involved in picking the modifiers is that we can
> re-allocate the resource (unless it has an active mapping) with other
> modifiers.  Also useful when switching qemu between fullscreen/windowed
> (as discussed elsewhere in the thread).

The guest needs to be notified about buffer size changes, no?  It
needs to redraw at the new buffer size.

Anyway, I filed this issue for the related Wayland questions:

https://gitlab.freedesktop.org/wayland/wayland/issues/60

>
> > One more question:  gbm_bo_map() allows mapping only a region of the
> > bo instead of the whole thing.  Is that important?
> >
> > Yes, virgl Mesa implements that with a TRANSFER_FROM_HOST and a map.
> > Only the data specified in the box is updated.
>
> Sure, the API must be supported.  The question is whether we should
> support partial mapping of resources at the virtio-drm ioctl and
> virtio protocol level ...
>
> Looking at the intel driver I see that it always maps the complete bo.
> Doing the same for virtio would simplify things a lot.
>
> Related:  The intel driver also never unmaps.  It keeps the mappings
> cached until the bo is destroyed.  So we could do the same in virtio,
> to reduce the mapping overhead.  But probably not on all hardware.
> I suspect that implementation detail is not exposed at the gbm API level?

Yes, it's not exposed at the API level, and no GEM drivers offer
partial mappings.  The box mainly exists to avoid copying the entire
buffer when a shadow buffer is used.
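
For context, the gbm side of a partial map looks like this (standard
gbm.h API; the implementation is free to map the complete bo internally
and just hand back a pointer into the requested region):

    #include <gbm.h>

    /* Write into a 64x64 region of an existing bo at (16, 16); the
     * driver may still map the whole bo behind the scenes. */
    static void write_region(struct gbm_bo *bo)
    {
        uint32_t stride;
        void *map_data = NULL;
        void *ptr = gbm_bo_map(bo, 16, 16, 64, 64,
                               GBM_BO_TRANSFER_WRITE, &stride, &map_data);
        if (ptr) {
            /* ... write pixels via ptr/stride ... */
            gbm_bo_unmap(bo, map_data);
        }
    }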

In minigbm, we tend to keep mappings around as well (though not in all cases):

https://chromium.googlesource.com/chromiumos/platform/minigbm/+/master/drv.c#525

It wouldn't be too much work to add an explicit gbm_bo_flush(..) or
gbm_bo_invalidate(...) to minigbm for suitable drivers, especially
since some drivers already have explicit ioctls for this (like
DRM_MSM_GEM_CPU_PREP / DRM_MSM_GEM_CPU_FINI).  That covers most
drivers, since most use uncached or write-combined system memory.
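
Something like this in the msm backend, wired up to the new entry
point -- a sketch only (msm_bo_invalidate/msm_bo_flush are hypothetical
names; the ioctls themselves are real):

    #include <stdint.h>
    #include <xf86drm.h>
    #include <msm_drm.h>

    /* Hypothetical minigbm hook: sync before the CPU reads the bo. */
    static int msm_bo_invalidate(int fd, uint32_t handle)
    {
        struct drm_msm_gem_cpu_prep prep = {
            .handle  = handle,
            .op      = MSM_PREP_READ,
            .timeout = { .tv_sec = 1, .tv_nsec = 0 },
        };
        return drmIoctl(fd, DRM_IOCTL_MSM_GEM_CPU_PREP, &prep);
    }

    /* ... and the matching flush once the CPU is done writing. */
    static int msm_bo_flush(int fd, uint32_t handle)
    {
        struct drm_msm_gem_cpu_fini fini = { .handle = handle };
        return drmIoctl(fd, DRM_IOCTL_MSM_GEM_CPU_FINI, &fini);
    }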

>
> cheers,
>   Gerd
>

