[Libva] How to detect the type of memory returned...

Jean-Yves Avenard jyavenard at gmail.com
Tue Jun 17 04:10:40 PDT 2014


On 17 June 2014 19:51, Gwenole Beauchesne <gb.devel at gmail.com> wrote:

> Note: I am not in the business of doing things "good enough", I want
> 100% correctness. :)

That's good to know!

>
> So, for same-size transfers (uploads / downloads) with the same
> sub-sampling requirements (i.e. YUV 4:2:0), there shall be a way to
> produce and guarantee that what we get on the other side exactly
> matches the source. Otherwise, you are risking propagation of errors
> for subsequent frames (postprocessing in terms of downloads, quality
> of encoding in terms of uploads), thus reducing the overall quality.

Well, that whole business of vaGetImage/vaDeriveImage in VLC is only
used to display the frame. The decoded frame isn't used as a reference
frame, etc., obviously (that's all handled within libva).

In VLC, using vaGetImage with a YV12->YV12 copy, I see 11% CPU usage
playing a 1080p24 H.264 video.
With vaDeriveImage and an NV12->YV12 conversion, that jumps to 17%.
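Part of that extra cost is the conversion itself: NV12 keeps one interleaved UV plane, while YV12 wants separate V and U planes, so every chroma sample has to be split out. A minimal sketch of that de-interleave, ignoring the per-plane strides and offsets a real vaDeriveImage/VAImage mapping reports (the helper name and packed layout are assumptions for illustration, not libva API):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: split NV12's interleaved UV plane into YV12's
   separate planes. YV12 plane order is Y, then V, then U.
   Assumes tightly packed planes (stride == width). */
static void nv12_to_yv12(const uint8_t *src_y, const uint8_t *src_uv,
                         uint8_t *dst, size_t width, size_t height)
{
    size_t y_size = width * height;
    size_t c_size = (width / 2) * (height / 2); /* one 4:2:0 chroma plane */
    uint8_t *dst_v = dst + y_size;
    uint8_t *dst_u = dst_v + c_size;

    /* Luma copies straight through. */
    for (size_t i = 0; i < y_size; i++)
        dst[i] = src_y[i];

    /* De-interleave chroma: NV12 stores U,V byte pairs. */
    for (size_t i = 0; i < c_size; i++) {
        dst_u[i] = src_uv[2 * i];     /* U sample */
        dst_v[i] = src_uv[2 * i + 1]; /* V sample */
    }
}
```

The plain vaGetImage YV12 path avoids this per-sample shuffle entirely, which is consistent with the CPU numbers above.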

For MythTV, I only use that method for PiP. For the main playback
path, we use OpenGL and vaCopySurfaceGLX to draw directly into the
OpenGL surface.
For PiP, obviously, I don't really care how accurate the NV12->YV12
conversion is...

Main playback uses 13% GPU; each 576i/50 MPEG-2 PiP adds 10% CPU.


>
> The next fix to get in is also a patch that disables AVS for same-size
> / same-subsampling transfers on the Intel HD Graphics driver side.

Looking forward to that...
