[Mesa-dev] Status of VDPAU and XvMC state-trackers (was Re: Build error on current xvmc-r600 pipe-video)

Christian König deathsimple at vodafone.de
Sun Jun 5 04:34:24 PDT 2011


On Saturday, 04.06.2011 at 18:28 -0700, Jose Fonseca wrote:
> I think we need to have a proper review round of the gallium interfaces, so that we have an interface
> everybody feels we can support going forward, which did not happen last round.
I agree with that; the interface was at least partly developed by
looking at what an MPEG-2 decoder needs, and not by using a more
general concept.

> That said, I don't think much attention has been given to this branch apart from those working on it.
> So those with constructive feedback should say so now, or "forever hold your peace".
> Because one way or the other, it doesn't make sense to have useful code on a branch.
Some things look quite good and well defined, while others obviously
need some more love from a design point of view.

> Attached is the diff between pipe-video and master for src/gallium/include/*
> 
> I need to look more closely at this, but I can't help thinking that the new interfaces are quite different from the rest of gallium's 3d interfaces.
> Instead of being an extension to gallium 3D interfaces/objects, pipe-video seems more like a completely parallel set of interfaces/objects.
> 
> - AFAICT all drivers implement pipe_video context using vl_create_context(pipe_context *).
> If so then it makes no sense for this to be a gallium interface. It should all be state tracker code.
Yes, that's true, but as Younes already mentioned, that design was
intentional. The whole idea behind it is to give the driver control
over each video decoding stage: it can either use the shader/CPU based
solution or implement its own path using some sort of hardware
acceleration.

The shader based stages then rely on the "normal" gallium 3D objects
to do their work, and themselves have a clearly defined interface.
So it should be possible to:

a) Let a driver decide how to implement a specific codec; for example,
you can use UVD for bitstream decoding while still using shaders to do
iDCT and MC.

b) Reuse a specific stage to implement other codecs; iDCT, for
example, is a well-defined mathematical transformation that is used in
a couple of different codecs.

This obviously could also use a bit of improvement; the stage
interfaces for MC, for example, are still not free of MPEG-2 specific
stuff.
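
To make that a bit more concrete, here is a rough sketch of the idea
in C. All names below are made up purely for illustration and are not
taken from the pipe-video code; it only shows how a driver could plug
a hardware block into one stage while keeping the shared shader-based
code for the others:

/* Purely illustrative sketch; none of these names come from the
 * pipe-video branch, they just show the per-stage override idea. */
#include <stdbool.h>
#include <stdlib.h>

struct pipe_context;  /* stand-in for the real gallium context */

typedef void (*stage_func)(struct pipe_context *pipe, const void *data);

struct example_decoder {
   stage_func decode_bitstream;  /* e.g. a UVD-backed hardware path    */
   stage_func idct;              /* shared shader-based implementation */
   stage_func mc;                /* shared shader-based implementation */
};

/* Stub stage implementations; a real driver would do actual work here. */
static void uvd_decode_bitstream(struct pipe_context *p, const void *d) { (void)p; (void)d; }
static void sw_decode_bitstream(struct pipe_context *p, const void *d)  { (void)p; (void)d; }
static void shader_idct(struct pipe_context *p, const void *d)          { (void)p; (void)d; }
static void shader_mc(struct pipe_context *p, const void *d)            { (void)p; (void)d; }

/* The driver picks, per stage, either its own hardware path or the
 * shared shader/CPU based code that only needs the normal 3D objects. */
static struct example_decoder *
example_create_mpeg2_decoder(struct pipe_context *pipe, bool has_uvd)
{
   struct example_decoder *dec = calloc(1, sizeof(*dec));
   if (!dec)
      return NULL;

   dec->decode_bitstream = has_uvd ? uvd_decode_bitstream : sw_decode_bitstream;
   dec->idct = shader_idct;  /* reusable for any iDCT-based codec */
   dec->mc   = shader_mc;

   (void)pipe;  /* a real implementation would store and use the context */
   return dec;
}

The real shader-based stages are of course proper objects built on top
of the pipe_context rather than bare function pointers, but the point
is the same: which implementation backs each stage stays a driver
decision.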

> At the very least there are obvious things that need to be fixed:
> 
> - get_param / is_format_supported should not be duplicated from screen.
I was also considering that when I started coding, but right now I
think renaming get_param to get_video_param and using a separate set
of caps instead is the better way to go.

For is_format_supported: I think this should take the format/codec as
parameters, instead of the format/usage used in the screen object.
I've put this on my todo list.
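
As a rough sketch of what I have in mind for both points (all names
and values below are illustrative assumptions only, not the interface
as it currently exists in the branch):

/* Illustrative sketch only: a separate video cap set and a format query
 * keyed by codec profile, kept apart from the 3D get_param/usage flags. */
#include <stdbool.h>

enum example_video_profile {
   EXAMPLE_VIDEO_PROFILE_MPEG2_MAIN   /* one example profile */
};

enum example_video_cap {
   EXAMPLE_VIDEO_CAP_SUPPORTED,
   EXAMPLE_VIDEO_CAP_MAX_WIDTH,
   EXAMPLE_VIDEO_CAP_MAX_HEIGHT
};

enum example_format {
   EXAMPLE_FORMAT_NV12                /* stand-in for enum pipe_format */
};

struct example_screen {
   /* Video caps get their own query, keyed by codec profile, instead of
    * being mixed into the regular 3D get_param(). */
   int (*get_video_param)(struct example_screen *screen,
                          enum example_video_profile profile,
                          enum example_video_cap cap);

   /* Format support is queried per format *and* codec profile, not per
    * format/usage as the 3D screen object does. */
   bool (*is_video_format_supported)(struct example_screen *screen,
                                     enum example_format format,
                                     enum example_video_profile profile);
};

That way a state tracker like the VDPAU one could answer its
capability queries per codec without going through the 3D caps at all.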

> - #if 0 ... #endif code in the interface headers
Yes, I know, that's already on the todo list.

> I'd also like to see how generic these interfaces are (e.g. could we use this to implement DXVA DDI).
Damn, that's a good point. I only looked at VDPAU/VA-API/XvMC while
changing the interface design away from XvMC, but taking a look at
DXVA could also be of value.

Thanks for the feedback,
Christian.


