[Libva] vaGetImage blocking

Ulrich von Zadow uzadow at libavg.de
Mon Jul 29 14:48:05 PDT 2013


Hi,

On Jul 29, 2013, at 1:53 , Timo Rothenpieler <timo at rothenpieler.org> wrote:
>> Hi,
>> 
>> I'm working on VAAPI support for libavg (www.libavg.de). Everything has gone more or less smoothly, and we have a working implementation that has been tested on a desktop with an NVidia GPU and on an AMD E350 mini-PC, both running Ubuntu 12.04 with stock drivers. Currently, we use vaGetImage() and vaMapBuffer() to retrieve the decoded data and then upload it again as an OpenGL texture (yes, I know this is the slowest way of doing things).
>> 
>> However, on the E350, vaGetImage() blocks and seems to deliver images at the video framerate (i.e. 25 fps in many cases). Is there a way to change this behaviour or at least to determine whether an image is ready? We'd like our compositor to run at full speed even if the video is updating more slowly. The code in question is here: https://www.libavg.de/site/projects/libavg/repository/entry/branches/libavg_vaapi/src/video/VAAPISurface.cpp, lines 70ff.
>> 
>> Thanks for any help!
>> 
> 
> You could use vaDeriveImage, which is much faster than vaGetImage.
> Or you could use the glx api of vaapi, which gives you an OpenGL texture
> directly.
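
A minimal sketch of the vaDeriveImage() path Timo suggests, assuming you already hold a valid VADisplay and VASurfaceID (both hypothetical handles here; error handling trimmed). vaDeriveImage() gives a VAImage that aliases the surface memory, so it skips the copy vaGetImage() performs:

```c
/* Sketch: read back a decoded surface via vaDeriveImage().
 * dpy and surface are assumed to be valid handles obtained elsewhere. */
#include <va/va.h>

int read_back_derived(VADisplay dpy, VASurfaceID surface)
{
    VAImage image;
    void *data = NULL;

    /* Derive a VAImage that maps the surface memory directly.
     * Not all drivers support this; check the return status and
     * fall back to vaGetImage() if it fails. */
    if (vaDeriveImage(dpy, surface, &image) != VA_STATUS_SUCCESS)
        return -1;

    if (vaMapBuffer(dpy, image.buf, &data) == VA_STATUS_SUCCESS) {
        /* data now points at the pixels; image.pitches[] and
         * image.offsets[] describe the plane layout. */
        vaUnmapBuffer(dpy, image.buf);
    }
    vaDestroyImage(dpy, image.image_id);
    return 0;
}
```

This only compiles against a system with libva installed and needs a real driver at runtime, so treat it as an outline rather than drop-in code.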

I implemented vaDeriveImage() support today. Unfortunately, the driver-support check for vaDeriveImage() returns false on NVidia (Ubuntu 12.04 and 13.04) and on AMD (Ubuntu 12.04). Rendering directly to a texture would require some refactoring in libavg that I'd like to postpone (and it's not guaranteed to be supported either, right?). So the question stands: is there a way to get rid of the blocking behaviour of vaGetImage()? Can I determine whether the image is ready, so that I can avoid calling vaGetImage() when it would block?
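
One possible answer to the readiness question is vaQuerySurfaceStatus(), which reports whether the driver is still rendering to a surface. A hedged sketch, again assuming hypothetical dpy/surface handles and a pre-created VAImage, with error handling trimmed:

```c
/* Sketch: poll surface status and only read back once decoding is
 * done, instead of stalling inside vaGetImage(). */
#include <va/va.h>
#include <stdbool.h>

static bool surface_ready(VADisplay dpy, VASurfaceID surface)
{
    VASurfaceStatus status;
    if (vaQuerySurfaceStatus(dpy, surface, &status) != VA_STATUS_SUCCESS)
        return false;
    /* VASurfaceReady means the driver is no longer rendering to or
     * displaying from this surface. */
    return status == VASurfaceReady;
}

/* In the compositor loop: skip the read-back this frame if the
 * decoder hasn't finished, so compositing keeps running at full speed. */
void maybe_read_back(VADisplay dpy, VASurfaceID surface, VAImage *image)
{
    if (!surface_ready(dpy, surface))
        return;
    vaGetImage(dpy, surface, 0, 0, image->width, image->height,
               image->image_id);
}
```

Whether vaGetImage() still blocks on a ready surface is driver-dependent, so this is worth testing on the E350 specifically.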

Cheers,

  Uli

--
Any technology distinguishable from magic is insufficiently advanced.

Ulrich von Zadow | +49-172-7872715
Jabber: coder at c-base.org
Skype: uzadow

More information about the Libva mailing list