[Mesa-dev] Pipe-video for HW decoders (Was: [PATCH 2/2] gallium: don't use enum bitfields in p_video_state.h)
Younes Manton
younes.m at gmail.com
Thu Jul 14 11:13:13 PDT 2011
On Thu, Jul 14, 2011 at 1:19 PM, Christian König
<deathsimple at vodafone.de> wrote:
> Yeah, I also thought about this. My overall feeling was to get it into
> VRAM first and then bring it into the form needed by the hardware with a
> shader if the need arises.
That's pretty much impossible since you can't use a shader to generate
a command buffer to feed back into a hardware decoder. On Nvidia
hardware you have to generate a stream of commands and block data
mixed together to actually get macroblocks decoded. Either way, the
interface should not expose how and where it puts the incoming data
and how much it accepts; everything that's currently done can be
done *behind* the interface for shader-based decoding without any
difficulty.
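To make that concrete, here is a rough, purely hypothetical sketch
(the hypo_* names are made up for illustration and exist nowhere in
Mesa or Nouveau) of why the data layout has to stay private to the
driver: a hardware backend ends up interleaving command tokens and
block data in its own GPU-visible buffer, which no state tracker
should ever have to know about.

/* Hypothetical example only; hypo_* names are made up. */
#include <stdint.h>
#include <string.h>

struct hypo_cmd_buf {
   uint32_t *cur;   /* write pointer into a GPU-visible buffer */
   uint32_t *end;   /* end of the buffer, for overflow checks */
};

/* Append one macroblock: a command token immediately followed by its
 * block data. The state tracker never sees this layout; it only hands
 * macroblocks to the decoder entry point. */
static void
hypo_push_macroblock(struct hypo_cmd_buf *cb, uint32_t mb_cmd,
                     const int16_t *blocks, unsigned num_words)
{
   if (cb->cur + 1 + num_words > cb->end)
      return; /* a real driver would flush and start a new buffer */
   *cb->cur++ = mb_cmd;                              /* command token */
   memcpy(cb->cur, blocks, num_words * sizeof(uint32_t));
   cb->cur += num_words;                             /* inline data */
}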
> Slice-level buffering doesn't make any sense to me. It was one of the
> big mistakes of XvMC, and I don't think we should repeat that. Decoding
> single slices only makes sense if you're under real memory pressure, and
> none of the modern interfaces still supports it.
The state tracker does not need to know anything about how much data
the driver is buffering. Whether or not you're buffering a slice or an
entire frame should not be relevant to the state tracker, it should
just feed you data and tell you when it wants to flush things. If you
want to respect its flushes you can; otherwise you can ignore them
and do something better, as long as you ensure correctness. The old
interface also allowed for buffering an entire frame, but it did so
without exposing the details to the state tracker, and it was easy
for a hardware-based decoder to partition the incoming data as needed.
I'm not suggesting we decode slice at a time for the shader-based
decoder, I'm saying that the interface cannot assume how the driver
wants to buffer the incoming data.
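Again purely as illustration (made-up hypo_* names, not the actual
pipe_video interface), the kind of contract I have in mind looks like
this: the state tracker only feeds data and signals flushes, and the
buffering granularity stays a driver decision.

/* Hypothetical example only; hypo_* names are made up. */
struct hypo_decoder {
   /* Called by the state tracker to hand over incoming data. */
   void (*decode)(struct hypo_decoder *dec,
                  const void *data, unsigned size);
   /* A hint that the state tracker wants the data submitted. A driver
    * that prefers whole-frame batches may simply accumulate in
    * decode() and treat intermediate flushes as no-ops, as long as
    * the end result stays correct. */
   void (*flush)(struct hypo_decoder *dec);
};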
> Anyway, that interface between the state tracker and the driver is only
> used for anything other than bitstream acceleration, and from what I know
> about UVD, it doesn't really support anything else.
The majority of Nvidia cards only support the IDCT entrypoint for
MPEG2. The bitstream entrypoint actually makes more sense in this
respect because it doesn't assume anything about how much data is
coming in.
Anyhow, this doesn't matter to anyone but me at the moment, but I
thought it worth mentioning well in advance now that pipe-video is in
master.