[Libva] OpenGL Streaming with libva

Xiang, Haihao haihao.xiang at intel.com
Tue Apr 22 22:07:25 PDT 2014


On Mon, 2014-04-21 at 22:06 -0700, Chris Healy wrote: 
> This code helps quite a bit.  I understand how the VPP is doing the
> color space conversion.  I also see all the other fun things the VPP
> code can do.
> 
> 
> How do I determine which of the VPP features work for my particular HW
> platform?  To understand what encodes and decodes are supported for a
> particular HW platform, I just execute: "vainfo".  Is there something
> similar for the VPP features?  How do I determine if I can do a motion
> adaptive deinterlace or not with a Sandy Bridge or Ivy Bridge or
> Haswell or whatever platform?

If the VA driver supports VPP, VAProfileNone / VAEntrypointVideoProc
will be listed in your vainfo output. As for individual VPP features,
you have to call vaQueryVideoProcFilters() in your code to get the list
of supported filters; currently vainfo doesn't print that list.
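
Something like this (a minimal sketch; error handling omitted, and the
VADisplay setup is assumed to already exist):

/* Query the VPP filters supported by the driver. VPP is exposed as
 * VAProfileNone / VAEntrypointVideoProc. */
#include <va/va.h>
#include <va/va_vpp.h>
#include <stdio.h>

static void list_vpp_filters(VADisplay va_dpy)
{
    VAConfigID config;
    VAContextID context;
    VAProcFilterType filters[VAProcFilterCount];
    unsigned int num_filters = VAProcFilterCount;

    vaCreateConfig(va_dpy, VAProfileNone, VAEntrypointVideoProc,
                   NULL, 0, &config);
    vaCreateContext(va_dpy, config, 0, 0, 0, NULL, 0, &context);

    vaQueryVideoProcFilters(va_dpy, context, filters, &num_filters);

    for (unsigned int i = 0; i < num_filters; i++) {
        if (filters[i] == VAProcFilterDeinterlacing) {
            /* Drill down to see which deinterlacing algorithms
             * (bob, motion adaptive, ...) the driver offers. */
            VAProcFilterCapDeinterlacing caps[VAProcDeinterlacingCount];
            unsigned int num_caps = VAProcDeinterlacingCount;

            vaQueryVideoProcFilterCaps(va_dpy, context,
                                       VAProcFilterDeinterlacing,
                                       caps, &num_caps);
            for (unsigned int j = 0; j < num_caps; j++)
                if (caps[j].type == VAProcDeinterlacingMotionAdaptive)
                    printf("motion adaptive deinterlacing supported\n");
        }
        printf("filter type %d supported\n", filters[i]);
    }

    vaDestroyContext(va_dpy, context);
    vaDestroyConfig(va_dpy, config);
}

That also answers the motion adaptive question directly: if
VAProcDeinterlacingMotionAdaptive shows up in the deinterlacing caps,
the driver supports it on that platform.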

> 
> 
> Regards,
> 
> Chris 
> 
> 
> 
> On Mon, Apr 21, 2014 at 6:43 PM, Xiang, Haihao
> <haihao.xiang at intel.com> wrote:
>         On Mon, 2014-04-21 at 14:54 -0700, Chris Healy wrote:
>         > I am working on a project that involves taking an OpenGL
>         > application and streaming the rendered frames as H.264,
>         > wrapped in transport stream/RTP, to a multicast
>         > destination.
>         
>         Weston has a similar feature that uses libva to encode the
>         screen content (RGB format) as H.264; maybe you can refer
>         to that code:
>         
>         http://lists.freedesktop.org/archives/wayland-devel/2013-August/010708.html
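
For orientation, the encode side of such a pipeline is created through
VAEntrypointEncSlice. A minimal sketch (this is not Weston's actual
code; the profile and rate control choices here are assumptions):

/* Set up an H.264 encode context. Error handling omitted. */
#include <va/va.h>

static void setup_encoder(VADisplay va_dpy, int width, int height)
{
    VAConfigAttrib attrib = {
        .type  = VAConfigAttribRateControl,
        .value = VA_RC_CQP,            /* constant QP, for example */
    };
    VAConfigID config;
    VAContextID context;
    VASurfaceID src_surface;

    /* NV12 source surface the encoder reads from */
    vaCreateSurfaces(va_dpy, VA_RT_FORMAT_YUV420, width, height,
                     &src_surface, 1, NULL, 0);

    vaCreateConfig(va_dpy, VAProfileH264Main, VAEntrypointEncSlice,
                   &attrib, 1, &config);
    vaCreateContext(va_dpy, config, width, height, VA_PROGRESSIVE,
                    &src_surface, 1, &context);
}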
>         
>         >
>         > I'm working with an i7-3517UE embedded Ivy Bridge platform
>         > running an embedded distribution of Linux with a 3.13
>         > kernel and no display.  (It's a server on a commercial
>         > aircraft that is intended to stream a moving map to a bunch
>         > of seatback displays.)
>         >
>         > I've been working to get this functionality implemented and
>         > have gotten to the point where it works, but I have a
>         > feeling it is less than optimal performance-wise.
>         >
>         > At the moment, the architecture involves a patch to Mesa
>         > that does a glReadPixels of the frame each time
>         > glXSwapBuffers is called.  Also in this patch, once the
>         > glReadPixels completes, we use "libyuv" to convert the RGB
>         > frame into YUY2, as we believe this is required by libva,
>         > and place the converted output in a buffer that libva can
>         > use directly.
>         
>         
>         For encoding, NV12 is required.  The driver will convert the
>         buffer into NV12 if the input buffer is not NV12.
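
From the application side that can be as simple as uploading the frame
into an NV12 surface with vaPutImage(); a hedged sketch (exactly where
the driver performs the conversion is driver-dependent):

/* Upload a packed YUY2 frame into an NV12 VA surface; the driver
 * converts the format. Error handling omitted. */
#include <va/va.h>
#include <string.h>

static void upload_yuy2(VADisplay va_dpy, VASurfaceID nv12_surface,
                        const unsigned char *yuy2, int width, int height)
{
    VAImageFormat fmt = {
        .fourcc         = VA_FOURCC('Y','U','Y','2'),
        .byte_order     = VA_LSB_FIRST,
        .bits_per_pixel = 16,              /* packed 4:2:2 */
    };
    VAImage image;
    void *data;

    vaCreateImage(va_dpy, &fmt, width, height, &image);
    vaMapBuffer(va_dpy, image.buf, &data);
    for (int y = 0; y < height; y++)
        memcpy((unsigned char *)data + image.offsets[0]
                   + y * image.pitches[0],
               yuy2 + y * width * 2, width * 2);
    vaUnmapBuffer(va_dpy, image.buf);

    /* The copy into the NV12 surface converts the format */
    vaPutImage(va_dpy, nv12_surface, image.image_id,
               0, 0, width, height,    /* source rectangle      */
               0, 0, width, height);   /* destination rectangle */
    vaDestroyImage(va_dpy, image.image_id);
}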
>         
>         > From there on, it's pretty standard ffmpeg bits to get it
>         > onto the network.
>         >
>         >
>         > The two areas where I think there may be opportunity are
>         > grabbing the buffer from the GPU using glReadPixels and
>         > doing the color space conversion on the CPU.
>         >
>         > For glReadPixels, we applied a patch to Mesa to speed up
>         > moving the data by doing one large memcpy instead of a
>         > bunch of little ones.  (Patch attached.)  This resulted in
>         > a much faster glReadPixels, but GPU processing nonetheless
>         > halts while the memory is copied by the CPU.
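
One standard way to avoid that stall (not discussed in this thread, so
treat it as a sketch) is asynchronous readback through a pixel buffer
object: glReadPixels into a bound GL_PIXEL_PACK_BUFFER returns without
blocking, and the pixels are mapped a frame later once the DMA has
completed.

/* Async readback via PBO. Assumes a current GL 2.1+ context and an
 * extension loader (or GL/glext.h) providing the buffer entry points. */
#include <GL/gl.h>
#include <string.h>

static GLuint pbo;

static void init_readback(int width, int height)
{
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4,
                 NULL, GL_STREAM_READ);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

/* Call right after rendering a frame; returns immediately. */
static void start_readback(int width, int height)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

/* Call one frame later; the copy out of the PBO is then cheap. */
static void finish_readback(unsigned char *dst, int width, int height)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    void *src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (src)
        memcpy(dst, src, (size_t)width * height * 4);
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}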
>         >
>         > For the color space conversion, libyuv does a good job of
>         > using the platform's SIMD instructions, but nonetheless it
>         > still uses the CPU.
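
For reference, the conversion in libyuv is a single call; which entry
point the Mesa patch actually uses is an assumption here, but ARGB to
YUY2 looks like this:

/* RGB (ARGB byte order) -> packed YUY2 with libyuv. */
#include <stdint.h>
#include <libyuv.h>

static int convert_frame(const uint8_t *argb, uint8_t *yuy2,
                         int width, int height)
{
    /* Strides are in bytes: 4 per ARGB pixel, 2 per YUY2 pixel */
    return ARGBToYUY2(argb, width * 4,
                      yuy2, width * 2,
                      width, height);
}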
>         >
>         > Is there a better way to get the frames from GPU memory to
>         > libva?  Maybe something involving a zero-copy.  (The
>         > application being used is a binary that I cannot change, so
>         > using special GL extensions or making any code changes to
>         > the application is not an option.  Only changes to Mesa are
>         > possible.)
>         >
>         >
>         > Is there a better way to do the color space conversion, if
>         > it is in fact necessary?  I wonder whether this is
>         > something that could be done with a Mesa patch that has a
>         > shader do the work.  Would that be faster and consume less
>         > bus bandwidth?  What about libva?  I see some VPP
>         > functionality, but the fact that it is referred to as
>         > "post" processing makes me feel it is intended for use
>         > after decoding and not targeted at "pre" processing before
>         > an encode.  Is it possible to do the color space conversion
>         > with the libva API?
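
Despite the "post" in the name, VPP runs fine in front of an encode as
well; the conversion is driven through VAProcPipelineParameterBuffer. A
minimal sketch, assuming a VPP context and the two surfaces already
exist (whether the driver accepts an RGB input surface is
driver-dependent):

/* Color space conversion with VPP: render the RGB source into an
 * NV12 destination surface. Error handling omitted. */
#include <va/va.h>
#include <va/va_vpp.h>
#include <string.h>

static void vpp_rgb_to_nv12(VADisplay va_dpy, VAContextID vpp_ctx,
                            VASurfaceID rgb_in, VASurfaceID nv12_out)
{
    VAProcPipelineParameterBuffer pipeline;
    VABufferID pipeline_buf;

    memset(&pipeline, 0, sizeof(pipeline));
    pipeline.surface        = rgb_in; /* source surface             */
    pipeline.surface_region = NULL;   /* use the whole surface      */
    pipeline.output_region  = NULL;   /* fill the whole destination */
    pipeline.filters        = NULL;   /* CSC only, no extra filters */
    pipeline.num_filters    = 0;

    vaCreateBuffer(va_dpy, vpp_ctx, VAProcPipelineParameterBufferType,
                   sizeof(pipeline), 1, &pipeline, &pipeline_buf);

    /* Rendering into nv12_out performs the conversion */
    vaBeginPicture(va_dpy, vpp_ctx, nv12_out);
    vaRenderPicture(va_dpy, vpp_ctx, &pipeline_buf, 1);
    vaEndPicture(va_dpy, vpp_ctx);

    vaDestroyBuffer(va_dpy, pipeline_buf);
    vaSyncSurface(va_dpy, nv12_out);
}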
>         >
>         >
>         > Any recommendations would be appreciated.
>         >
>         > Also, for what it's worth, I've posted the Mesa patches at
>         > the following URLs:
>         >
>         > http://pastebin.com/XQS11iW4
>         > http://pastebin.com/g00SHFJ1
>         >
>         > Regards,
>         >
>         > Chris
>         