[Mesa-dev] Status of VDPAU and XvMC state-trackers (was Re: Build error on current xvmc-r600 pipe-video)
Younes Manton
younes.m at gmail.com
Fri Jun 17 12:31:28 PDT 2011
On Thu, Jun 16, 2011 at 10:14 AM, Jose Fonseca <jfonseca at vmware.com> wrote:
>
> Younes,
>
> If the thread you quote had been the last word on the subject, then this thread would never have happened:
>
> http://www.mail-archive.com/mesa3d-dev@lists.sourceforge.net/msg09601.html
>
> and this conversation would not be happening either.
>
> It's not my intention to make you go around in circles -- that's the result of a lack of consensus. I merely suggested the assimilation of pipe_video_context by the state tracker as an easy way out, but I'm fine if you prefer to get a HW proof of concept.
>
> The fact is that there is no pipe_2d_context to date, in any form, and there probably never will be, so we can't rely on that as a reference for what to do with pipe_video_context.
>
> The more I think about it, the more I think that having totally different kinds of actors (pipe_xxx_context) operating on shared resources (like textures) is not sustainable. Even if these pipe_xxx_contexts ultimately share the same winsys, I still think that having the video interface be an extension or inclusive part of pipe_context is preferable, as it is cleaner/leaner.
>
>
> I've also looked at the way the DXVA API/DDI operates, and I think that if we change the interface so that pipe_screen::video_context_create takes a pipe_context argument, as in:
>
> struct pipe_screen {
>    ...
>
>    struct pipe_video_context *
>    (*video_context_create)( struct pipe_screen *,
>                             struct pipe_context * );
>    ...
> };
>
>
> and enforce this pairing of pipe_video_context <-> pipe_context:
>
> struct pipe_video_context {
>    struct pipe_context *pipe;
>
>    ...
> };
>
> then this lets us get the best of all worlds:
>
> - no need to duplicate pipe_context methods in pipe_video_context -- simply use the pipe_context methods of the associated pipe_video_context->pipe
>
> - ties nicely with the pipe_context-based implementation of pipe_video_context in gallium/auxiliary
>
> - ties nicely with the way the DXVA API/DDI works
>
>
> Jose
>
I'm not suggesting that there shouldn't be discussion, just that the
state tracker approach was where we started more than 2 years ago and
going back has no technical benefits.

I don't see why it should be set in stone that to do HW decoding you
also need a 3D context. Practically speaking, yes, most hardware will
actually use the 3D pipeline to do the color conversion and scaling,
but in that case the driver can create a pipe_context inside its
pipe_video_context. (This is actually what the nvfx HW decoder did;
see the sketch below.) However, if I present you with hardware where
the 3D and decoding blocks have separate command buffers and state,
and the decoding HW can independently get the final image into a DRI2
buffer, it makes no sense to force the state tracker to create a
pipe_context and all the associated 3D machinery that will never be
used.
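
For the first case, a rough sketch of what I mean (the names here are
hypothetical, not the actual nvfx code):

   /* Hypothetical driver: the video context privately creates a
    * pipe_context because this particular hardware needs the 3D
    * pipeline for CSC/scaling; the state tracker never sees it.
    * CALLOC_STRUCT/FREE are from util/u_memory.h. */
   struct foo_video_context {
      struct pipe_video_context base;
      struct pipe_context *pipe3d;
   };

   static struct pipe_video_context *
   foo_video_context_create(struct pipe_screen *screen)
   {
      struct foo_video_context *vctx = CALLOC_STRUCT(foo_video_context);
      if (!vctx)
         return NULL;

      vctx->pipe3d = screen->context_create(screen, NULL);
      if (!vctx->pipe3d) {
         FREE(vctx);
         return NULL;
      }

      return &vctx->base;
   }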

I realize the DDI takes a different approach and throws everything
under the sun into the device interface (2D, shader 3D, fixed-function
3D, video decode, cryptography, modesetting...). I can only imagine
that decent drivers there instantiate things lazily, so that if the
runtime never calls, say, the 3D entry points, no resources associated
with that block are ever initialized. If that's the approach we want
to take with pipe_context, that's fine, but in my mind I thought
people would want to keep separate interfaces separate and not clutter
pipe_context with everything.
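
If we did go that route, the lazy approach would look something like
this (a sketch with made-up names, not any real driver):

   /* Sketch: nothing decode-related is allocated until a decode
    * entry point is actually called, so a purely-3D user of the
    * context never pays for the decoder block. */
   struct bar_context {
      struct pipe_context base;
      struct bar_decoder *decoder; /* NULL until first decode call */
   };

   static void
   bar_decode_macroblocks(struct pipe_context *pipe,
                          const void *mb_data, unsigned num_mbs)
   {
      struct bar_context *ctx = (struct bar_context *)pipe;

      if (!ctx->decoder)
         ctx->decoder = bar_decoder_create(ctx); /* hypothetical */

      bar_decoder_run(ctx->decoder, mb_data, num_mbs);
   }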

So a question that remains is: how much overlap is there between
pipe_context and pipe_video_context? As far as duplicate or nearly
duplicate functionality goes, I see the following:

::create_sampler_view() - This shouldn't be in the interface. Video
decoding shouldn't have to care about "samplers" per se; they're an
implementation detail specific to 3D decoding. From what I can see,
Christian changed it for performance reasons; originally the interface
dealt with textures, not samplers, but for 3D decoding we had to keep
samplers handy somewhere, so I had a hash map that kept track of them.
Probably a better interface is to subclass textures for 3D decoding
and hang samplers off them, as sketched below, but I don't recall
whether I considered and rejected this for any particular reason.
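
Something like this is what I have in mind (vl_texture and the getter
are made-up names; u_sampler_view_default_template() is the existing
helper from util/u_sampler.h):

   /* Sketch: subclass the decode target's texture so the 3D-based
    * decoder can hang its sampler view off it, instead of tracking
    * views in a separate hash map. */
   struct vl_texture {
      struct pipe_resource base;
      struct pipe_sampler_view *sampler_view; /* created on demand */
   };

   static struct pipe_sampler_view *
   vl_texture_get_sampler_view(struct vl_texture *tex,
                               struct pipe_context *pipe)
   {
      if (!tex->sampler_view) {
         struct pipe_sampler_view templ;

         u_sampler_view_default_template(&templ, &tex->base,
                                         tex->base.format);
         tex->sampler_view =
            pipe->create_sampler_view(pipe, &tex->base, &templ);
      }
      return tex->sampler_view;
   }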

::create_surface() - This used to be part of pipe_screen. I don't
recall the motivation for making it part of the context, but every
driver does the same thing: malloc a driver-specific subclass of
pipe_surface and fill it in (see the sketch below). I would argue
that since video decoding deals with planar/mixed surfaces, anything
special that has to be done for them doesn't need to clutter up
pipe_context, which will likely never have to deal with these kinds
of surfaces. But again, I'm not sure why this is context-specific and
why it shouldn't live elsewhere.
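
For reference, the pattern every driver repeats is roughly this
(foo_surface is a made-up name):

   /* Sketch: allocate a driver-private subclass of pipe_surface and
    * fill in the common fields; the reference-counting helpers are
    * from util/u_inlines.h. */
   struct foo_surface {
      struct pipe_surface base;
      /* driver-private bits: plane offsets, tiling, etc. */
   };

   static struct pipe_surface *
   foo_create_surface(struct pipe_context *pipe,
                      struct pipe_resource *texture,
                      const struct pipe_surface *templ)
   {
      struct foo_surface *surf = CALLOC_STRUCT(foo_surface);
      if (!surf)
         return NULL;

      pipe_reference_init(&surf->base.reference, 1);
      pipe_resource_reference(&surf->base.texture, texture);
      surf->base.format = templ->format;
      surf->base.context = pipe;
      /* ... remaining fields (width/height/level) as per the template ... */

      return &surf->base;
   }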

::is_format_supported() and ::get_param() - These are not in
pipe_context but in pipe_screen. I already explained why I thought
they should be here. The names might be the same, but the kinds of
formats being queried and the caps being returned won't overlap with
what pipe_screen is doing, both because decoding hardware is concerned
with a larger set of formats and has unique caps, and because those
formats and caps are potentially context-dependent.
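
In other words, a state tracker would query the video context along
these lines (the enum names below are illustrative, not necessarily
the final ones):

   /* Sketch: the answers can depend on this context's codec/profile
    * state, which pipe_screen has no way of knowing about. */
   static boolean
   check_decoder_support(struct pipe_video_context *vctx)
   {
      if (!vctx->is_format_supported(vctx, PIPE_FORMAT_NV12,
                                     PIPE_VIDEO_PROFILE_MPEG2_MAIN))
         return FALSE;

      return vctx->get_param(vctx, PIPE_VIDEO_CAP_MAX_WIDTH) >= 1920;
   }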

Anyhow, I don't have a problem with following what the DDI does, even
though I think separation of concerns and encapsulation are better. I
also don't see why this thing has to be perfect from the get-go,
considering how many times the gallium interface has changed over the
years. With the current structure, changing any part of the video
interface will have very little impact on anything else. In any case,
we seem to have relatively few POVs here. Thanks, Jose and Roland, for
taking the time to look things over and provide feedback.