[Libva] [RFC] Towards generic codec support

Bian, Jonathan jonathan.bian at intel.com
Mon Sep 28 16:50:35 PDT 2009


Hi Gwenole,

This sounds like a good idea. One potential problem with the profile selection is that the "highest" profile is not always a superset of the profiles below it. For example, H.264 Baseline Profile has features (e.g. FMO and ASO) that are not in the High Profile. But I guess each driver (implementation) will determine what the default profile should be if no profile is specified, so the app is ultimately responsible for specifying the correct profile.

Regards,
Jonathan
-----Original Message-----
From: libva-bounces at lists.freedesktop.org [mailto:libva-bounces at lists.freedesktop.org] On Behalf Of Gwenole Beauchesne
Sent: Thursday, September 24, 2009 2:47 AM
To: libva at lists.freedesktop.org
Subject: [Libva] [RFC] Towards generic codec support

Hi,

Some people expressed concerns that the API may not support other
codecs easily. They suggested using strings to represent profiles and
possibly entry-point information.

I don't fully agree with this approach because a codec cannot be reduced
to its name, and I don't really want to use strings. However, here is
something I thought about some time ago.

We can keep the existing API functions. However, instead of manipulating
VAProfiles we would manipulate VACodecs. A VACodec would simply be a
typedef uint32_t VACodec; i.e. a FOURCC value.
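
To make that concrete, here is a minimal sketch (the macro and the codec
value below are purely illustrative, not existing API):

    #include <stdint.h>

    typedef uint32_t VACodec;

    /* Pack four characters into a little-endian FOURCC value. */
    #define VA_CODEC_FOURCC(a, b, c, d) \
        ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
         ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

    /* Hypothetical codec value, for illustration only. */
    #define VA_CODEC_H264  VA_CODEC_FOURCC('H','2','6','4')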

Now, how do we represent Main, High, Advanced profiles? vaCreateConfig()
could use the attribs[] interface for that purpose, as a hint. This is
because people generally use the highest profile available for a
specific codec anyway. Besides, the profile signalled in the bitstream
is only a hint, and bitstreams sometimes lie.
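
Roughly how a caller could pass the hint, assuming a hypothetical
VAConfigAttribCodecProfile attribute and the VA_CODEC_H264 value
sketched above (and an already initialized VADisplay dpy):

    VAConfigAttrib attrib;
    VAConfigID config;

    attrib.type  = VAConfigAttribCodecProfile; /* hypothetical attribute */
    attrib.value = 100;                        /* H.264 High = profile_idc 100 */

    /* The VACodec replaces the VAProfile argument; the attribute is
     * only a hint for the driver. */
    vaCreateConfig(dpy, VA_CODEC_H264, VAEntrypointVLD,
                   &attrib, 1, &config);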

So, if no specific profile is specified, the highest one available to
the HW decoder is used. If one is specified, it's used as a hint and a
profile >= the specified one shall be found, or an error generated.
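
On the driver side, the selection rule could look like this (a sketch
only; the helper and its arguments are made up):

    static VAStatus
    select_profile(const int *supported, int n_supported,
                   int hint, int *selected)
    {
        int i, best = -1;

        /* Always prefer the highest profile the HW supports. */
        for (i = 0; i < n_supported; i++)
            if (supported[i] > best)
                best = supported[i];

        /* With a hint, the chosen profile must still be >= the hint. */
        if (best < 0 || (hint > 0 && best < hint))
            return VA_STATUS_ERROR_UNSUPPORTED_PROFILE;

        *selected = best;
        return VA_STATUS_SUCCESS;
    }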

For compatibility, we can mandate that any VACodec value is >=
0x00000100. That is, anything < 256 is assumed to be an old-style
VAProfile.
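
In code, the check is trivial (names illustrative):

    #define VA_IS_OLD_STYLE_PROFILE(v)  ((uint32_t)(v) < 0x100)

    if (VA_IS_OLD_STYLE_PROFILE(codec))
        profile = (VAProfile)codec;  /* legacy path, existing behaviour */
    else
        fourcc = codec;              /* new-style FOURCC codec */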

With this approach, existing code still works as is, and new code (and
drivers) for other codecs can be added. e.g. if I wanted some Dirac
acceleration (through GPU shaders), a driver could provide a
<va/va_dirac.h>.
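
Such a header is purely hypothetical, but it could look something like
this, by analogy with the existing per-codec parameter buffers:

    /* <va/va_dirac.h> -- illustration only */
    #include <va/va.h>

    /* Uses the VA_CODEC_FOURCC macro sketched earlier. */
    #define VA_CODEC_DIRAC  VA_CODEC_FOURCC('d','r','a','c')

    /* Codec-specific picture parameter buffer, analogous to the
     * existing VAPictureParameterBufferMPEG2/H264 structures. */
    typedef struct _VAPictureParameterBufferDirac {
        unsigned int picture_width;
        unsigned int picture_height;
        unsigned int wavelet_depth;
        /* ... remaining Dirac-specific fields ... */
    } VAPictureParameterBufferDirac;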

Now there can be another issue. I can have dedicated HW for, say, MPEG-2
and H.264 (that is, a dedicated ASIC: UVD, PureVideo, VXD370, whatever),
and still want GPU-assisted acceleration (with shaders) for other
codecs. How would dynamic loading of modules work?

This could be made transparent, at first. But third-party codec modules
would have to implement the whole VADriverVTable. Not a real problem
either, since the whole decode pipeline would be codec-specific anyway;
i.e. we would need specific VA surfaces, VA buffers, etc.

My initial idea for this: vaInitialize() looks for all possible
supported drivers, i.e. VIDEO_drv_video.so and CODEC_drv_codec.so files,
where VIDEO = { psb, iegd, fglrx, ... } and extra CODEC = { theora,
dirac, vp6, ... }. FOURCCs are then cached and the right VTable hooks
are called depending on the VA config.
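
A very rough sketch of the loading scheme (the directory, the naming
convention and the helper are made up for illustration):

    #include <stdio.h>
    #include <dlfcn.h>

    #define CODEC_MODULE_DIR "/usr/lib/va"   /* illustrative path */

    static void load_codec_modules(void)
    {
        static const char *codecs[] = { "theora", "dirac", "vp6", NULL };
        char path[256];
        int i;

        for (i = 0; codecs[i]; i++) {
            void *handle;

            snprintf(path, sizeof(path), "%s/%s_drv_codec.so",
                     CODEC_MODULE_DIR, codecs[i]);
            handle = dlopen(path, RTLD_NOW | RTLD_GLOBAL);
            if (!handle)
                continue;

            /* Query the module for the FOURCCs it handles and its
             * VADriverVTable, cache them, and dispatch on the codec
             * selected at vaCreateConfig() time. */
        }
    }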

WDYT?

Regards,
Gwenole.
_______________________________________________
Libva mailing list
Libva at lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/libva

