[Libva] OpenGL Streaming with libva

Chris Healy cphealy at gmail.com
Mon Apr 21 14:54:13 PDT 2014


I am working on a project that involves taking an OpenGL application and
streaming the rendered frames as H.264 wrapped in transport stream/RTP to a
multicast destination.

I'm working with an i7-3517UE embedded Ivy Bridge platform running an
embedded Linux distribution with a 3.13 kernel and no display attached.
(It's a server on a commercial aircraft, intended to stream a moving map
to a bunch of seatback displays.)

I've been working to get this functionality implemented and have gotten
to the point where it works, but I suspect it is less than optimal
performance-wise.

At the moment, the architecture involves a patch to Mesa that does a
glReadPixels of the frame each time glXSwapBuffers is called.  In the
same patch, once the glReadPixels completes, we use libyuv to convert
the RGB frame to YUY2 (which we believe libva requires) and place the
converted output in a buffer that libva can use directly.  From there
on, it's pretty standard ffmpeg bits to get it onto the network.
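
For reference, the hook boils down to something like the following
(a trimmed-down sketch; the real code lives inside Mesa and all the
buffer management is omitted):

#include <stdint.h>
#include <GL/gl.h>
#include "libyuv.h"  /* for ARGBToYUY2() */

/* Called from our glXSwapBuffers hook.  "rgba" and "yuy2" are
 * preallocated at width*height*4 and width*height*2 bytes. */
static void capture_frame(int width, int height,
                          uint8_t *rgba, uint8_t *yuy2)
{
    /* Grab the back buffer before the swap.  GL_BGRA byte order is
     * what libyuv calls "ARGB" on little-endian. */
    glReadBuffer(GL_BACK);
    glReadPixels(0, 0, width, height,
                 GL_BGRA, GL_UNSIGNED_BYTE, rgba);

    /* CPU-side conversion; libyuv picks an SSE2/SSSE3 path at
     * runtime.  (GL's bottom-left origin means the frame comes out
     * vertically flipped; we compensate for that elsewhere.) */
    ARGBToYUY2(rgba, width * 4,
               yuy2, width * 2,
               width, height);
}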

The two areas where I think there is room for improvement are grabbing
the buffer from the GPU with glReadPixels and doing the color space
conversion on the CPU.

For glReadPixels, we applied a patch to Mesa that speeds up moving the
data by doing one large memcpy instead of a bunch of little ones.
(Patch attached.)  This made glReadPixels much faster, but the GPU
still stalls while the CPU copies the memory.
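
One mitigation we have sketched but not yet tried is double-buffering
the readback through pixel buffer objects, so the copy of frame N
overlaps with the rendering of frame N+1.  Roughly (untested, and it
would also have to live in the Mesa patch since we can't touch the
app):

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <string.h>

static GLuint pbo[2];
static unsigned frame;

static void pbo_init(int width, int height)
{
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; i++) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4,
                     NULL, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

static void pbo_readback(int width, int height, void *dst)
{
    unsigned cur = frame & 1, prev = cur ^ 1;

    /* Kick off an asynchronous read of this frame into one PBO... */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[cur]);
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);

    /* ...and map the PBO filled on the previous frame, which has had
     * a whole frame's time for its DMA to complete.  (The very first
     * map returns an empty buffer; the stream runs one frame behind.) */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[prev]);
    void *p = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (p) {
        memcpy(dst, p, (size_t)width * height * 4);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    frame++;
}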

For the color space conversion, libyuv does a good job of using the
platform's SIMD instructions, but it is nonetheless still using the CPU.

Is there a better way to get the frames from GPU memory to libva,
perhaps something involving zero-copy?  (The application is a binary I
cannot change, so using special GL extensions or making any code
changes to the application is not an option.  Only changes to Mesa are
possible.)
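
The direction I keep coming back to is exporting the renderbuffer's
backing BO as a dma-buf from inside Mesa and importing that as a VA
surface, so the frame never round-trips through system memory.  From
va.h it looks like the import side would be roughly the following,
though I haven't tried it and I don't know whether our libva/driver
combination supports DRM PRIME import (fd, stride, etc. are whatever
Mesa would hand us):

#include <va/va.h>

static VASurfaceID import_dmabuf(VADisplay dpy, int fd,
                                 unsigned width, unsigned height,
                                 unsigned stride)
{
    unsigned long handle = (unsigned long)fd;
    VASurfaceAttribExternalBuffers ext = {0};
    VASurfaceAttrib attribs[2];
    VASurfaceID surface;

    ext.pixel_format = VA_FOURCC_BGRX;  /* matches our GL readback */
    ext.width        = width;
    ext.height       = height;
    ext.data_size    = stride * height;
    ext.num_planes   = 1;
    ext.pitches[0]   = stride;
    ext.buffers      = &handle;
    ext.num_buffers  = 1;

    attribs[0].type          = VASurfaceAttribMemoryType;
    attribs[0].flags         = VA_SURFACE_ATTRIB_SETTABLE;
    attribs[0].value.type    = VAGenericValueTypeInteger;
    attribs[0].value.value.i = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME;

    attribs[1].type          = VASurfaceAttribExternalBufferDescriptor;
    attribs[1].flags         = VA_SURFACE_ATTRIB_SETTABLE;
    attribs[1].value.type    = VAGenericValueTypePointer;
    attribs[1].value.value.p = &ext;

    vaCreateSurfaces(dpy, VA_RT_FORMAT_RGB32, width, height,
                     &surface, 1, attribs, 2);
    return surface;
}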

Is there a better way to do the color space conversion, if it is in
fact necessary?  Could this be done with a Mesa patch that has a shader
do the work?  Would that be faster and consume less bus bandwidth?
What about libva itself?  I see some VPP functionality, but the fact
that it is called "post" processing makes me suspect it is aimed at
cleanup after a decode rather than "pre" processing before an encode.
Is it possible to do the color space conversion with the libva API?
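
From reading va_vpp.h it does look like VPP can run standalone
(VAProfileNone + VAEntrypointVideoProc) with an RGB surface in and a
YUV surface out, which would cover the pre-encode case.  A rough,
untested sketch of what I mean:

#include <va/va.h>
#include <va/va_vpp.h>

/* One-shot RGB -> YUV conversion on the GPU.  "rgb_surface" is the
 * input; "yuv_surface" is what the encoder would then consume. */
static void vpp_convert(VADisplay dpy, unsigned width, unsigned height,
                        VASurfaceID rgb_surface, VASurfaceID yuv_surface)
{
    VAConfigID cfg;
    VAContextID ctx;
    VABufferID pipe_buf;
    VAProcPipelineParameterBuffer pipe = {0};

    vaCreateConfig(dpy, VAProfileNone, VAEntrypointVideoProc,
                   NULL, 0, &cfg);
    vaCreateContext(dpy, cfg, width, height, VA_PROGRESSIVE,
                    NULL, 0, &ctx);

    /* No explicit filters: an otherwise-empty pipeline is an implicit
     * color space conversion from the input surface's format to the
     * render target's. */
    pipe.surface = rgb_surface;

    vaCreateBuffer(dpy, ctx, VAProcPipelineParameterBufferType,
                   sizeof(pipe), 1, &pipe, &pipe_buf);

    vaBeginPicture(dpy, ctx, yuv_surface);
    vaRenderPicture(dpy, ctx, &pipe_buf, 1);
    vaEndPicture(dpy, ctx);

    vaDestroyBuffer(dpy, pipe_buf);
}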

Any recommendations would be appreciated.

Also, for what it's worth, I've posted the Mesa patches at the following
URLs:

http://pastebin.com/XQS11iW4
http://pastebin.com/g00SHFJ1

Regards,

Chris