[Libva] libva vs intel media SDK
joebloggsian at gmail.com
Mon Feb 27 14:07:18 PST 2012
Whilst evaluating Intel processors for an embedded application, I did some
comparisons of H.264 video encode/transcode using libVA versus the Intel
Media SDK. I used the same i3/HD3000 Sandy Bridge machine, dual-booted into
Win7 and Ubuntu. On Ubuntu I built the latest kernel and tried both the
vaapi-ext and master branches of the intel driver.

On Windows I used the multi-transcode sample app, modified to transcode 8
parallel 1080p30 H.264 streams with HW encoding (speed mode). For va-api I
modified the encode sample to encode 8 parallel streams; I also preloaded
all the video frames and converted them to NV12 etc. beforehand, to take
that out of the equation. Both encoded to the same bitrate, etc. I verified
in each case (using intel-gpu-top / Intel Graphics Performance Analyzer)
that the GPU was being used and 100% busy.
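For anyone wanting to reproduce the libVA side, the parallel launch can be
driven from the shell along these lines (a minimal sketch: the ./avcenc
binary name and the -i/-o flags are illustrative assumptions, not the exact
options of the modified encode sample):

```shell
#!/bin/sh
# Minimal sketch: launch N encodes in parallel and wait for all of them.
# "./avcenc" stands in for the (modified) libva encode sample; the -i/-o
# flags are illustrative assumptions, not the sample's real options.
ENCODER="${ENCODER:-./avcenc}"
NSTREAMS="${NSTREAMS:-8}"

i=1
while [ "$i" -le "$NSTREAMS" ]; do
  # Each instance reads a pre-converted NV12 input and logs separately.
  "$ENCODER" -i "stream_$i.nv12" -o "out_$i.264" > "log_$i.txt" 2>&1 &
  i=$((i + 1))
done
wait
echo "launched and reaped $NSTREAMS encodes"
```

While this runs, intel_gpu_top (from intel-gpu-tools) in another terminal
is enough to confirm the GPU stays busy.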
Speed: the Media SDK was nearly 2x faster than libVA.

Quality: analysing the streams, the Media SDK output is vastly better - its
encode uses all I4 and I16 modes, P-partitions down to 4x4, skips, and long
motion vectors, versus libVA, which seems to use just I4x4 in I-frames and
just P16 mode in P-frames. The quality difference in the encode is night
and day.
To bring a long post to an end, my question is: why the huge difference
between Windows and Linux for what is essentially a hardware encode? Am I
doing something wrong? I see that there is a lot of activity from Intel
engineers improving the vaapi intel driver - engineers who presumably have
access to the Intel Media SDK source and its developers. Is there an
expectation or roadmap to close the gap in the near future? Of particular
interest is the low quality of the va-api encode, which makes it unusable
for many applications.

I'd be interested in getting involved in improving the libva driver, but
the Intel GPU PRMs seem to contain detailed information on everything
*except* the video encode/decode HW.