[Spice-devel] [PATCH RFC 00/12] Remote Virgl support

Frediano Ziglio fziglio at redhat.com
Mon Jul 18 09:34:26 UTC 2016


> 
> On Fr, 2016-07-15 at 14:49 +0100, Frediano Ziglio wrote:
> > This patch set is an improvement over the last one. There is still
> > much work to be done. The main reason I'm posting is to discuss the
> > Qemu API changes (the "Define a new interface for Qemu to pass
> > texture" patch). This code adds a direct dependency on EGL.
> 
> I'm not convinced it is a good idea to pass around texture ids instead
> of dma bufs, especially as we'll also receive dma-bufs in the future
> (intel-vgpu will export the guest display as dma-buf).
> 
> > The main idea is still to extract the raw data and pass it to the
> > normal flow (display_channel_process_draw).
> 
> What is the state of the hardware supported encoding?
> How can we pass buffers to the hardware encoder?
> 

The state here is a bit of a mess.
One reason to pass textures instead of dma buffers is that we use
GStreamer, GStreamer uses VAAPI for hardware acceleration, and one way
to pass frames to VAAPI is through GL textures. GStreamer has a quite
strong assumption that buffers are mmap-able, but that is not true for
dma buffers. Note that in theory VAAPI can import DRM PRIME/dma
buffers, however this is currently not exposed/implemented by
gstreamer-vaapi.
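
As a minimal illustration of the mmap problem (not spice-server code;
"dmabuf_fd" and "size" are assumed to come from whatever exported the
buffer): mapping a dma-buf fd can simply fail, because the exporting
driver is not required to support CPU mapping at all.

  #include <errno.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  static int try_map_dmabuf(int dmabuf_fd, size_t size)
  {
      void *p = mmap(NULL, size, PROT_READ, MAP_SHARED, dmabuf_fd, 0);
      if (p == MAP_FAILED) {
          /* A legitimate outcome for a dma-buf: the exporting
           * driver may simply not implement the mmap hook. */
          fprintf(stderr, "mmap failed: %s\n", strerror(errno));
          return -1;
      }
      munmap(p, size);
      return 0;
  }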

The current status of hardware encoding is a bit confusing.
On one side there is VAAPI, which is meant to be a vendor-independent
library for hardware decoding/encoding; however, some vendors (like
Nvidia) do not seem keen on supporting it for encoding (the Nvidia
backend goes through VDPAU, which is limited to decoding). VAAPI was
proposed by Intel, so support on Intel hardware is really good.
On the other side we could have patent/licensing issues, since the
main codecs supported (basically MPEG-2, H.264, HEVC) are all covered
by patents, while the more open codecs (VP8, VP9) are not currently
widely supported in hardware.

> > Changes from last version:
> > - this set supports all cards, using a different protocol from Qemu
> >   which can now pass EGL information (display and context) and
> >   textures directly. This allows spice-server to choose between dma
> >   buffers and plain GL data;
> 
> I think we should decouple the scanout buffer passing and the egl
> context handling.
> 
> qemu already has functions for context management in ui/console.h:
> 
>   QEMUGLContext dpy_gl_ctx_create(QemuConsole *con,
>                                   QEMUGLParams *params);
>   void dpy_gl_ctx_destroy(QemuConsole *con, QEMUGLContext ctx);
>   int dpy_gl_ctx_make_current(QemuConsole *con, QEMUGLContext ctx);
>   QEMUGLContext dpy_gl_ctx_get_current(QemuConsole *con);
> 
> We should use them to create an EGL context for spice-server.  I can
> think of two ways to do this:
> 
>  (1) Extend the display channel interface to have callbacks for these
>      (and thin wrapper functions which map spice display channel to
>      QemuConsole so spice-server doesn't need to worry about that).
> 

So you mean a way for the spice-server display channel to call back
into some Qemu functions, right?
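
If so, a rough sketch of what option (1) could look like on the
spice-server side (the callback names here are hypothetical, just
mirroring the dpy_gl_ctx_* functions quoted above):

  /* Hypothetical extension of the interface Qemu implements for
   * spice-server; none of these members exist today.  Qemu's thin
   * wrappers would map the QXLInstance back to its QemuConsole. */
  typedef struct QXLInstance QXLInstance; /* normally from spice.h */

  struct QXLInterfaceGLExt {
      void *(*gl_ctx_create)(QXLInstance *qxl);
      void  (*gl_ctx_destroy)(QXLInstance *qxl, void *ctx);
      int   (*gl_ctx_make_current)(QXLInstance *qxl, void *ctx);
      void *(*gl_ctx_get_current)(QXLInstance *qxl);
  };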

>  (2) Have qemu create one context per spice-server (or per display
>      channel) and create a new spice_server_set_egl_context() function
>      to hand over the context to spice-server.
> 

Yes, I added a spice_qxl_gl_init function which sets the display and
context. Note that we need the display too: in order to support
GStreamer GL upload (I still have to learn how to do it), passing both
display and context is required, since VAAPI and GStreamer will set up
their own contexts as well.
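
For reference, the shape of the function in this series (signature
reconstructed from the description above, so treat it as
illustrative):

  #include <EGL/egl.h>

  typedef struct QXLInstance QXLInstance; /* normally from spice.h */

  /* Qemu hands its EGL display and context over to spice-server
   * once, before any texture is passed; spice-server can then create
   * shared contexts for the GStreamer/VAAPI stack from them. */
  void spice_qxl_gl_init(QXLInstance *instance,
                         EGLDisplay display,
                         EGLContext context);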

> (2) is simpler, (1) is more flexible.  Not sure we actually need the
> flexibility though.
> 
> cheers,
>   Gerd
> 
> 

I would probably feel more confident having the extra flexibility, as
it is not yet clear exactly what information we will end up needing.
For instance, one issue is how to initialize VAAPI (which is
encapsulated inside GStreamer) when we don't have an X/Wayland
display; this is needed to support server/daemon setups.
VAAPI is able to initialize a VADisplay (the first handle, basically
representing the card) from a DRM handle, but it should be the same
card used by Qemu.
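
A minimal sketch of that DRM path, assuming libva's DRM backend; the
render node path is an assumption, and in our case it would have to
refer to the same card Qemu uses:

  #include <fcntl.h>
  #include <unistd.h>
  #include <va/va.h>
  #include <va/va_drm.h>

  static VADisplay open_va_display_drm(const char *node)
  {
      /* e.g. node = "/dev/dri/renderD128"; must match Qemu's card */
      int fd = open(node, O_RDWR);
      if (fd < 0)
          return NULL;

      VADisplay dpy = vaGetDisplayDRM(fd);
      int major, minor;
      if (!dpy || vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS) {
          close(fd);
          return NULL;
      }
      return dpy; /* fd must stay open for the VADisplay's lifetime */
  }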

Another thing I would like to see changed in Qemu is the number of
pending frames it sends. I think a single frame is not enough; there
should be at least 2 or 3. The reason is that encoding and the network
take time, so it would be better to have one frame that is currently
being encoded and one newer pending frame, which can simply be
replaced by the next frame to arrive if encoding or the network is not
fast enough.
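
To make the idea concrete, a minimal sketch (hypothetical types and
names, not spice-server code) of such a two-slot scheme:

  #include <pthread.h>
  #include <stddef.h>

  typedef struct Frame Frame; /* opaque frame handle, assumed */

  typedef struct {
      pthread_mutex_t lock;
      Frame *encoding; /* frame the encoder is working on, or NULL */
      Frame *pending;  /* newest frame waiting to be encoded, or NULL */
  } FrameSlots;

  /* Producer (Qemu) side: never blocks; a stale pending frame is
   * simply dropped in favour of the newer one. */
  static void push_frame(FrameSlots *s, Frame *f, void (*drop)(Frame *))
  {
      pthread_mutex_lock(&s->lock);
      if (s->pending)
          drop(s->pending); /* replaced by the newer frame */
      s->pending = f;
      pthread_mutex_unlock(&s->lock);
  }

  /* Encoder side: promote the pending frame to the encoding slot. */
  static Frame *take_frame(FrameSlots *s)
  {
      pthread_mutex_lock(&s->lock);
      s->encoding = s->pending;
      s->pending = NULL;
      pthread_mutex_unlock(&s->lock);
      return s->encoding;
  }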

Frediano

