[VDPAU] ffmpeg and HEVC: Partial success, need help understanding problem

Stephen Warren swarren at nvidia.com
Tue May 26 08:29:39 PDT 2015


Jose Soltren wrote at Tuesday, May 26, 2015 8:52 AM:
> Hi Rémi - please see inline below.
> 
> On 2015/05/26, 9:24 , "Rémi Denis-Courmont" <remi at remlab.net> wrote:
> 
> >>On 2015-05-26 17:19, Jose Soltren wrote:
> >> Inline below.
> >>
> >> On 2015/05/23, 22:49 , "Philip Langdale" <philipl at overt.org> wrote:
> >>
> >>>On Sat, 23 May 2015 12:52:08 -0700
> >>>Philip Langdale <philipl at overt.org> wrote:
> >>>
> >>>> Ok, that's not right. I'd forgotten what the original sample
> >>>> looked like. What I think it really looks like is that it's
> >>>> rendering twice as tall as it should, alternating black lines
> >>>> with the actual picture lines, and then that's wrapping around.
> >>>
> >>>Really, what it ends up looking like is what you'd see if someone
> >>>took a progressive frame, and then said it was really an interlaced
> >>>frame with the top half as one field and the bottom half as another.
> >>>I can provide samples if it helps.
> >>
> >> Yep, that's exactly what is happening. Unfortunately, the answer
> >> is: you need to lie to VDPAU about what is happening here. Use
> >> VDP_VIDEO_MIXER_PICTURE_STRUCTURE_TOP_FIELD as the
> >> current_picture_structure parameter to VdpVideoMixerRender().
> >
> >I must be missing something obvious here. Isn't non-interlaced content
> >the whole point of VDP_VIDEO_MIXER_PICTURE_STRUCTURE_FRAME?
> 
> Yes.
> 
> However: every video codec prior to H.265/HEVC actually output a
> progressive frame as two independent fields! The reason is that all
> prior codecs were defined with television in mind, where a picture is
> typically two half-pictures (fields) that need to be blended together
> and presented at the same time.
> 
> H.265/HEVC is the first codec to define, at the Specification level,
> that only progressive frames exist. (There are provisions for
> interlaced content, but they are left to addenda or extensions, not
> the core Specification.)
> 
> So, at least the NVIDIA implementation of the video mixer always
> assumes that a VDP_VIDEO_MIXER_PICTURE_STRUCTURE_FRAME input contains
> two fields that need to be interleaved. This leads to the issue that
> Philip correctly described.
> 
> I agree that the hack is inelegant and I welcome suggestions to improve
> this. The proposal on the table is a new enumeration value, perhaps:
> 
> VDP_VIDEO_MIXER_PICTURE_STRUCTURE_PROGRESSIVE

This sounds like a bug in our implementation. IIRC, the fact that our
implementation internally stores YUV surfaces as separate top/bottom
fields isn't something that's intended to be visible to users of
VDPAU[1]. If we store YUV surfaces derived from H.265 differently, we
should handle that entirely transparently inside the implementation. A
flag on the YUV surface object, carrying this information from the
decoder to the video mixer, might work. We shouldn't require users of
the video mixer to know our implementation details and pass incorrect
values to the mixer to work around our issues.

[1] Well, the interop APIs explicitly expose YUV surfaces as separate
fields. How does interop work for H.265? Perhaps there's a related hole
there too.

--
nvpublic
