<div dir="ltr"><div><div>This code helps quite a bit. I understand how the VPP is doing the color space conversion. I also see all the other fun things the VPP code can do.<br><br></div>How do I determine which of the VPP features work for my particular HW platform? To understand what encodes and decodes are supported for a particular HW platform, I just execute: "vainfo". Is there something similar for the VPP features? How do I determine if I can do a motion adaptive deinterlace or not with a Sandy Bridge or Ivy Bridge or Haswell or whatever platform?<br>
Regards,

Chris

On Mon, Apr 21, 2014 at 6:43 PM, Xiang, Haihao <haihao.xiang@intel.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">On Mon, 2014-04-21 at 14:54 -0700, Chris Healy wrote:<br>
> I am working on a project that involves taking an OpenGL application<br>
> and streaming the rendered frames as H.264 wrapped in transport<br>
> stream/RTP to a multicast destination.<br>
<br>
> Weston has a similar feature that uses libva to encode the screen
> content (RGB format) as H.264; you may want to refer to that code:
>
> http://lists.freedesktop.org/archives/wayland-devel/2013-August/010708.html
<div class=""><br>
><br>
> I'm working with an i7-3517UE embedded Ivy Bridge platform running an<br>
> embedded distribution of Linux and 3.13 kernel with no display. (It's<br>
> a server on a commercial aircraft that is intended to stream a moving<br>
> map to a bunch of seatback displays.)<br>
><br>
> > I've been working to get this functionality implemented and have
> > gotten to the point where it works, but I have a feeling it is less
> > than optimal performance-wise.
> >
> > At the moment, the architecture involves a patch to Mesa that does a
> > glReadPixels of the frame each time glXSwapBuffers is called. In the
> > same patch, once the glReadPixels completes, we use "libyuv" to
> > convert the RGB frame into YUY2 (as we believe libva requires this)
> > and place the converted output in a buffer that libva can use
> > directly.
>
> For encoding, NV12 is required. The driver will convert the buffer into
> NV12 if the input buffer is not NV12.
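>
> For example, something like this rough (untested) sketch should let the
> driver handle the conversion at upload time:
>
> #include <va/va.h>
>
> /* Rough sketch: create a surface from the YUV420 render-target group
>  * (which is where NV12 lives) and upload YUY2 data to it; the driver
>  * performs the YUY2 -> NV12 conversion inside vaPutImage(). Filling
>  * image->buf via vaMapBuffer()/vaUnmapBuffer() is left out. */
> static VAStatus upload_yuy2(VADisplay dpy, unsigned int width,
>                             unsigned int height, VASurfaceID *surface,
>                             VAImage *image)
> {
>     VAImageFormat fmt = { 0 };
>     VAStatus status;
>
>     fmt.fourcc = VA_FOURCC_YUY2;
>     fmt.byte_order = VA_LSB_FIRST;
>     fmt.bits_per_pixel = 16;
>
>     status = vaCreateSurfaces(dpy, VA_RT_FORMAT_YUV420, width, height,
>                               surface, 1, NULL, 0);
>     if (status != VA_STATUS_SUCCESS)
>         return status;
>
>     status = vaCreateImage(dpy, &fmt, width, height, image);
>     if (status != VA_STATUS_SUCCESS)
>         return status;
>
>     /* ... fill image->buf with the YUY2 pixels via vaMapBuffer() ... */
>
>     return vaPutImage(dpy, *surface, image->image_id,
>                       0, 0, width, height,   /* source rectangle */
>                       0, 0, width, height);  /* dest rectangle   */
> }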
<div><div class="h5"><br>
> From there on, it's pretty standard ffmpeg bits to get it on the<br>
> network.<br>
><br>
><br>
> > The two areas where I think there may be an opportunity are grabbing
> > the buffer from the GPU using glReadPixels, and the color space
> > conversion on the CPU.
> >
> > For glReadPixels, we applied a patch to Mesa that speeds up moving
> > the data by doing one large memcpy instead of a bunch of little ones.
> > (Patch attached.) This resulted in a much faster glReadPixels, but
> > nonetheless the GPU stalls while the memory is copied by the CPU.
> >
> > For the color space conversion, libyuv does a good job of using the
> > platform's SIMD instructions, but nonetheless it is still using the
> > CPU.
> >
> > Is there a better way to get the frames from GPU memory space to
> > libva, maybe something involving a zero-copy path? (The application
> > being used is a binary that I cannot change, so using special GL
> > extensions or making any code changes to the application is not an
> > option. Only changes to Mesa are possible.)
> >
> > Is there a better way to do the color space conversion, if it is in
> > fact necessary? I wonder whether this is something a Mesa patch could
> > hand off to a shader. Would that be faster and consume less bus
> > bandwidth? What about libva? I see some VPP functionality, but the
> > fact that it is referred to as "post" processing makes me feel it is
> > intended for use after decoding rather than for "pre" processing
> > before an encode. Is it possible to do the color space conversion
> > with the libva API?
> >
><br>
> Any recommendations would be appreciated.<br>
><br>
> Also, for what it's worth, I've posted the Mesa patches at the<br>
> following URLs:<br>
><br>
> <a href="http://pastebin.com/XQS11iW4" target="_blank">http://pastebin.com/XQS11iW4</a><br>
> <a href="http://pastebin.com/g00SHFJ1" target="_blank">http://pastebin.com/g00SHFJ1</a><br>
><br>
> Regards,<br>
><br>
> Chris<br>
> > _______________________________________________
> > Libva mailing list
> > Libva@lists.freedesktop.org
> > http://lists.freedesktop.org/mailman/listinfo/libva