[VDPAU] ffmpeg and HEVC: NumLongTermPicturesSliceHeaderBits

José Hiram Soltren jsoltren at nvidia.com
Wed May 27 14:15:47 PDT 2015


Hello phil,

On 05/27/2015 04:02 PM, Philip Langdale wrote:
> On 2015-05-27 13:54, José Hiram Soltren wrote:
>> Hello phil,
> 
>>> Hmm. I may have run the trace with de-int turned on in mplayer, but I also
>>> ran it with de-int turned off and it didn't help. Maybe mplayer sets the
>>> surfaces regardless, but I thought it didn't. I'll double check.
>>
>> Possibly. I don't have an in-depth understanding of mplayer's surface
>> management code. The calls to VdpVideoMixerRender you provided are different
>> from mine, as noted above.
> 
> I double-checked: mplayer always passes the extra surfaces, even if de-int is
> turned off. I'll forcibly stop it from doing this and try again tonight.
> 
>>> Are you guys going to change how the output surface is constructed? I know
>>> we're discussing presentation hacks here, but even get_bits on the output
>>> surface returns the interlaced form - I tested the basic ffmpeg mode where
>>> it doesn't do any presentation at all and just reads the surface back.
>>
>> Possibly. I'll communicate any such plans on this list. The hardware decodes a
>> single progressive frame and places it into a single location in memory. This
>> is what VdpDecoderRender does. VdpVideoMixerRender would be the process
>> responsible for any interlacing you would see.
> 
> So, the direct path in ffmpeg doesn't use a mixer at all. It instantiates a
> decoder and surfaces and then calls getBits on the output surface. I did a
> quick test telling ffmpeg to output PNGs of frames and they are clearly
> interlaced. If I understood your description of the current decoder
> behaviour, this makes sense.

I'm not quite sure I follow. Could you attach another trace of this situation?

> You said that it is not laying out the YUV frame the way a 'normal'
> progressive frame would be, so isn't that why ffmpeg would show these
> results, independent of the mixer?

Not exactly.

Typically, NVIDIA's hardware implementation accepts four pointers for output
surfaces: luma top, luma bottom, chroma top, chroma bottom. These correspond to
the luma and chroma components of the top and bottom fields of a picture.

The H.265/HEVC decoder only accepts a luma and a chroma pointer since all
pictures are progressive.
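
For illustration, the difference could be sketched like this. These structs
are hypothetical and are not part of the VDPAU API; they only model the two
hardware output layouts described above:

    #include <stdint.h>

    /* Field-based layout used for most codecs: separate top/bottom fields. */
    typedef struct {
        uint8_t *luma_top;      /* luma samples, top field      */
        uint8_t *luma_bottom;   /* luma samples, bottom field   */
        uint8_t *chroma_top;    /* chroma samples, top field    */
        uint8_t *chroma_bottom; /* chroma samples, bottom field */
    } FieldOutputSurface;

    /* Progressive-only layout used by the H.265/HEVC decoder. */
    typedef struct {
        uint8_t *luma;   /* full progressive luma plane   */
        uint8_t *chroma; /* full progressive chroma plane */
    } ProgressiveOutputSurface;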

It is possible that an incorrect VdpYCbCr value is being passed to
VdpVideoSurfaceGetBitsYCbCr.
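
For reference, a read-back that names the destination format explicitly might
look like the sketch below. This is a minimal, hypothetical example:
read_back_yv12 is an invented helper, and get_bits stands for the function
pointer obtained from VdpGetProcAddress with
VDP_FUNC_ID_VIDEO_SURFACE_GET_BITS_Y_CB_CR:

    #include <stdint.h>
    #include <vdpau/vdpau.h>

    /*
     * Minimal sketch, assuming a 4:2:0 surface and caller-provided plane
     * buffers of sufficient size.
     */
    VdpStatus read_back_yv12(VdpVideoSurfaceGetBitsYCbCr *get_bits,
                             VdpVideoSurface surface, uint32_t width,
                             uint8_t *y_plane, uint8_t *v_plane,
                             uint8_t *u_plane)
    {
        /* YV12 plane order is Y, V, U; chroma pitches are half the luma pitch. */
        void *planes[3]     = { y_plane, v_plane, u_plane };
        uint32_t pitches[3] = { width, width / 2, width / 2 };

        return get_bits(surface, VDP_YCBCR_FORMAT_YV12, planes, pitches);
    }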

> I guess there's probably an output surface format type that reflects what
> you're doing, but the vdpau API doesn't encompass that, so it's marked as
> yuv420p, which isn't actually accurate?

This is a possibility. I need to look more closely into this situation to know
for sure.

Thanks,
--José

> --phil

