[Mesa-dev] [PATCH] vl/mpeg12: implement inverse scan/quantization steps

Ilia Mirkin imirkin at alum.mit.edu
Tue Jun 25 20:29:26 PDT 2013

On Mon, Jun 24, 2013 at 2:13 PM, Christian König
<deathsimple at vodafone.de> wrote:
> Am 24.06.2013 18:39, schrieb Ilia Mirkin:
>> On Mon, Jun 24, 2013 at 4:48 AM, Christian König
>> <deathsimple at vodafone.de> wrote:
>>> Am 23.06.2013 18:59, schrieb Ilia Mirkin:
>>>> Signed-off-by: Ilia Mirkin <imirkin at alum.mit.edu>
>>>> ---
>>>> These changes make MPEG2 I-frames generate the correct macroblock data
>>>> (as
>>>> compared to mplayer via xvmc). Other MPEG2 frames are still misparsed,
>>>> and
>>>> MPEG1 I-frames have some errors (but largely match up).
>>> NAK, zscan and mismatch handling are handled in vl/vl_zscan.c.
>>> Please use/fix that one instead of adding another implementation.
>> Yes, I noticed these after Andy pointed out that my patch broke things
>> for him. Here's my situation, perhaps you can advise on how to
>> proceed:
>> NVIDIA VP2 hardware (NV84-NV96, NVA0) doesn't do bitstream parsing,
>> but it can take the macroblocks and render them. When I use my
>> implementation with xvmc, everything works fine. If I try to use vdpau
>> by using vl_mpeg12_bitstream to parse the bitstream, the data comes
>> out all wrong. It appears that decode_macroblock is called with data
>> before inverse z-scan and quantization, while mplayer pushes data to
>> xvmc after those steps. So should I basically have a bit of logic in
>> my decode_macroblock impl that says "if using mpeg12_bitstream then do
>> some more work on this data"? Or what data should decode_macroblock
>> expect to receive?
> Yes exactly, for the bitstream case decode_macroblock gets the blocks in
> original zscan order, without mismatch correction or dequantisation.
> You can either do the missing steps on the GPU with shaders or on the CPU
> while uploading the data, and use the entrypoint member on the decoder to
> distinguish between the different use cases.
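In decoder terms, that suggestion boils down to a branch like the following (a minimal sketch with made-up names; the real code would check the entrypoint member Christian mentions, e.g. the PIPE_VIDEO_ENTRYPOINT_* values from the gallium headers):

```c
#include <assert.h>

/* Hypothetical stand-ins for the gallium entrypoint enum; a real driver
 * would compare decoder->base.entrypoint against the PIPE_VIDEO_ENTRYPOINT_*
 * values from the Mesa headers instead. */
enum entrypoint { ENTRYPOINT_BITSTREAM, ENTRYPOINT_MC };

/* The blocks need the extra work (inverse zscan, dequantisation, mismatch
 * control) only when they came from the in-tree bitstream parser; an XvMC
 * client hands in fully prepared blocks already. */
static int blocks_need_fixup(enum entrypoint ep)
{
    return ep == ENTRYPOINT_BITSTREAM;
}
```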

That sounds reasonable. But I'm looking at the MPEG-2 spec, and the
inverse quantisation step goes something like this (section 7.4.5):

if ( macroblock_intra ) {
  F''[v][u] = ( QF[v][u] * W[w][v][u] * quantiser_scale * 2 ) / 32;
} else {
  F''[v][u] = ( ( ( QF[v][u] * 2 ) + Sign(QF[v][u]) ) * W[w][v][u] *
               quantiser_scale ) / 32;
}
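For reference, that arithmetic can be written out as plain C (a sketch of just the quoted formula with helper names of my own; saturation and mismatch control from 7.4.3/7.4.4 are left out, and the spec's integer division truncates toward zero, which C's does too for these operands):

```c
#include <assert.h>

/* Sign() as used in the spec: -1, 0, or +1. */
static int sign(int x)
{
    return (x > 0) - (x < 0);
}

/* Inverse quantisation of one coefficient per the quoted 7.4.5 formulas.
 * qf is the quantised coefficient QF[v][u], w the weight W[w][v][u]. */
static int dequant(int qf, int w, int quantiser_scale, int intra)
{
    if (intra)
        return (qf * w * quantiser_scale * 2) / 32;
    else
        return ((qf * 2 + sign(qf)) * w * quantiser_scale) / 32;
}
```

For example, with w = 16 and quantiser_scale = 2, an intra coefficient QF = 10 reconstructs to (10 * 16 * 2 * 2) / 32 = 20, while the non-intra path gives ((20 + 1) * 16 * 2) / 32 = 21.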

However, the current vl_mpeg12_bitstream::decode_dct code seems to
return roughly QF[v][u] * quantiser_scale in mb->blocks, and in the
non-intra case it looks like we need the original value of QF[v][u] in
order to perform that operation. (I tried looking at vl_zscan, but I
don't really understand how shaders work.) Am I misunderstanding?
Should the quantiser_scale be passed in via the macroblock structure,
and the multiplication left out of the bitstream parser?
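To make that concern concrete: because the non-intra formula adds Sign(QF) before the division, two coefficient/scale pairs with the same product QF * quantiser_scale can reconstruct to different values, so the pre-multiplied value alone does not determine the result (a worked example; function names are mine):

```c
#include <assert.h>

static int sign(int x)
{
    return (x > 0) - (x < 0);
}

/* Non-intra reconstruction exactly as in the quoted 7.4.5 formula. */
static int reconstruct_nonintra(int qf, int w, int quantiser_scale)
{
    return ((qf * 2 + sign(qf)) * w * quantiser_scale) / 32;
}
```

With w = 16, the pairs QF = 10 / scale = 2 and QF = 20 / scale = 1 both have QF * quantiser_scale = 20, yet they reconstruct to 21 and 20 respectively.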
