[virglrenderer-devel] Virgl Multi-Head Support

Darrell Walisser darrell.walisser at gmail.com
Sun Mar 27 14:07:42 UTC 2022


I've been investigating what it would take to get multi-head working,
specifically with the nvidia proprietary driver. The state of affairs
seems to be that the nvidia driver does not actually support DMA-BUF
export in at least some cases (perhaps for texture-backed buffers),
despite advertising it in its extension list. This prevents spice,
which would be the ideal multi-head solution, from working.

If you want to test this, I found a canonical example:

    https://gitlab.com/blaztinn/dma-buf-texture-sharing
    https://blaztinn.gitlab.io/post/dmabuf-texture-sharing/

It shows that eglExportDMABUFImageMESA() returns EGL_BAD_MATCH, which
is the same error returned in qemu.
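For reference, here is a condensed sketch of the failing path along
the lines of that example (an already-initialized EGL display/context
and a GL texture that already has storage are assumed; the helper name
and the trimmed error handling are mine, not from the example):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Wrap an existing GL texture in an EGLImage and ask the driver for
     * a dma-buf fd.  On the nvidia driver the export fails with
     * EGL_BAD_MATCH even though EGL_MESA_image_dma_buf_export is listed
     * by eglQueryString(dpy, EGL_EXTENSIONS). */
    static int export_texture_dmabuf(EGLDisplay dpy, EGLContext ctx,
                                     GLuint tex)
    {
        PFNEGLEXPORTDMABUFIMAGEQUERYMESAPROC query_dmabuf =
            (PFNEGLEXPORTDMABUFIMAGEQUERYMESAPROC)
                eglGetProcAddress("eglExportDMABUFImageQueryMESA");
        PFNEGLEXPORTDMABUFIMAGEMESAPROC export_dmabuf =
            (PFNEGLEXPORTDMABUFIMAGEMESAPROC)
                eglGetProcAddress("eglExportDMABUFImageMESA");

        EGLImage image = eglCreateImage(dpy, ctx, EGL_GL_TEXTURE_2D,
                                        (EGLClientBuffer)(uintptr_t)tex,
                                        NULL);
        if (image == EGL_NO_IMAGE) {
            fprintf(stderr, "eglCreateImage failed: 0x%x\n", eglGetError());
            return -1;
        }

        int fourcc = 0, num_planes = 0;
        EGLuint64KHR modifier = 0;
        int fd = -1;
        EGLint stride = 0, offset = 0;

        /* This is the call that returns EGL_BAD_MATCH on nvidia. */
        if (!query_dmabuf(dpy, image, &fourcc, &num_planes, &modifier) ||
            !export_dmabuf(dpy, image, &fd, &stride, &offset)) {
            fprintf(stderr, "dma-buf export failed: 0x%x\n", eglGetError());
            return -1;
        }

        printf("exported fd %d, fourcc 0x%x, %d plane(s)\n",
               fd, fourcc, num_planes);
        return fd;
    }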

It would be nice if someone could confirm my theory as I have not
found anything definitive. The next step would be filing a bug report
with team green.

Assuming the aforementioned is unresolvable, SDL2 seems to be the
next-best solution, as it supports multiple windows and GL context
sharing. I can see that context sharing is enabled in the SDL2 UI;
perhaps it works already and I just don't have the proper incantation?

I have been using -device virtio-vga-gl,max_outputs=2 -display sdl,gl=on

A second output does indeed show up in the guest: scanouts=2 appears
in the kernel log, and xrandr lists a disconnected output "Virtual-2"
with no modes available.

I fiddled with xrandr quite a bit, adding a mode, binding it to the
disconnected output, and setting the mode. When the mode is set, a new
window is created, but nothing is rendered to it. So it seems very
close! The output remains in the disconnected state in xrandr.

I would appreciate any guidance on how to move forward. For the
moment I am adding more logging to qemu to get a better understanding
of the code.

