[Libva] VPP Deinterlacing problem
Zhao, Halley
halley.zhao at intel.com
Thu May 15 19:01:59 PDT 2014
> Question: Does the Motion Adaptive implementation (unlike Bob) require two distinct source surfaces plus an ADDITIONAL surface for the result?
Yes. A separate surface is required for the result.
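
For example, a minimal sketch (surface size taken from your 720x480
description; format and error handling may differ in your code):

VASurfaceID out_surface;   /* extra surface to hold the VPP result */
VAStatus va_status = vaCreateSurfaces(va_dpy, VA_RT_FORMAT_YUV420,
                                      720, 480, &out_surface, 1,
                                      NULL, 0);
CHECK_VASTATUS(va_status, "vaCreateSurfaces");

/* The surface given to vaBeginPicture is the pipeline OUTPUT;
 * the interlaced input goes into pipeline_param->surface. */
vaBeginPicture(va_dpy, vpp_context, out_surface);
/* ... set pipeline_param->surface = current source surface and
 * forward_references[0] = prior source surface, then call
 * vaRenderPicture, as in your code ... */
vaEndPicture(va_dpy, vpp_context);

Then encode out_surface instead of the source surface.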
> -----Original Message-----
> From: Libva [mailto:libva-bounces at lists.freedesktop.org] On Behalf Of
> Steven Toth
> Sent: Wednesday, May 14, 2014 11:57 PM
> To: libva at lists.freedesktop.org
> Subject: [Libva] VPP Deinterlacing problem
>
> Hi,
>
> I'm looking for some help/guidance on implementing Motion Adaptive
> deinterlacing.
>
> Background: I'm working on a libva project that receives raw video
> frames (720x480i) from a video4linux SD video capture device, largely
> based around the test example projects. Intel IvyBridge platform, i965
> driver. Testing and developing with libva 1.3.0 and va-intel-driver on
> Ubuntu 12.04. The solution works very well: no major issues, and I can
> encode for long periods of time with very stable code.
>
> However, I was asked to add VPP deinterlacing support. Bob
> deinterlacing works fine and 30fps encoding is reliable. Motion
> Adaptive isn't working properly, or rather my use of it isn't: the
> encoded output looks like 15fps with duplicated frames.
>
> Running various H.264 analysis tools on the Motion Adaptive clip and
> stepping through the frames, the 30fps is actually correct, but every
> other frame is a repeat of the previous frame with only very minor
> pixel variation. As a result, it's visually playing at 30fps but looks
> like 15fps, with odd pixel-shimmering effects.
>
> It feels like I've misunderstood the Motion Adaptive VPP
> configuration, so I'm hoping someone can point me in the right
> direction.
>
> My workflow is:
> 1. I upload the incoming raw video frame into the current source
> surface.
> 2. I perform VPP deinterlacing on the current source surface, passing
> the prior source surface as the single and only forward reference via
> the pipeline_param struct, and call vaRenderPicture on the pipeline.
> 3. I then compress the current surface.
> 4. Handle the codec buffer of compressed data.
> 5. Repeat for each incoming raw video frame (sketched as a loop
> below).
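>
> In rough pseudo-C (upload_frame(), encode_surface(), src_surfaces and
> N_SRC stand in for my real helpers and source-surface ring):
>
> for (unsigned int i = 0; ; i++) {
>     VASurfaceID cur  = src_surfaces[i % N_SRC];
>     VASurfaceID prev = src_surfaces[(i + N_SRC - 1) % N_SRC];
>
>     upload_frame(cur);                      /* step 1 */
>     func_deinterlace(cur, 720, 480,         /* step 2: prior frame */
>                      i ? prev : cur);       /* as forward reference */
>     encode_surface(cur);                    /* steps 3 and 4 */
> }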
>
> Question: Am I correct in thinking I should pass the previous source
> surface as the single and only forward reference?
> (num_forward_references is returned as 1, num_backward_references as
> 0.)
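>
> (Those counts come from vaQueryVideoProcPipelineCaps, roughly as
> follows; the pipeline_caps naming is mine.)
>
> VAProcPipelineCaps pipeline_caps;
> memset(&pipeline_caps, 0, sizeof(pipeline_caps));
> va_status = vaQueryVideoProcPipelineCaps(va_dpy, vpp_context,
>                                          vpp_filter_bufs,
>                                          vpp_num_filter_bufs,
>                                          &pipeline_caps);
> CHECK_VASTATUS(va_status, "vaQueryVideoProcPipelineCaps");
> vpp_num_forward_references  = pipeline_caps.num_forward_references;
> vpp_num_backward_references = pipeline_caps.num_backward_references;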
>
> Corrective fixes I've tried and failed with:
>
> A. If I pass the current surface as the forward reference, no
> deinterlacing takes place and the resulting video has smooth 30fps
> playback. I assume the Motion Adaptive algorithm is trying to adapt
> against exactly the same frame (or silently rejects the same surface),
> so the output video is smooth but still interlaced.
>
> B. If I pass the prior source surface - 1 (i.e. the current source
> surface - 2), or use any other value for the forward reference, I
> start to get very odd temporal video effects. I assume the Motion
> Adaptive algorithm is trying to blend two very different, non-adjacent
> temporal frames, and the output is clearly bad.
>
> I'm only using an output_region in the pipeline_params, not an input
> region, as shown below.
>
> Question: Does the Motion Adaptive implementation (unlike Bob) require
> two distinct source surfaces plus an ADDITIONAL surface for the
> result? Perhaps I have my pipeline tuned for Bob (based on the GST
> samples) but it's not reliable for MA?
>
> For completeness, the pipeline func is here:
>
> static int func_deinterlace(VASurfaceID surface, unsigned int w,
>                             unsigned int h, VASurfaceID forward_reference)
> {
>     VAStatus va_status;
>
>     vaBeginPicture(va_dpy, vpp_context, surface);
>
>     va_status = vaMapBuffer(va_dpy, vpp_pipeline_buf,
>                             (void **)&vpp_pipeline_param);
>     CHECK_VASTATUS(va_status, "vaMapBuffer");
>
>     vpp_pipeline_param->surface = surface;
>     vpp_pipeline_param->surface_region = NULL;
>     vpp_pipeline_param->output_region = &vpp_output_region;
>     vpp_pipeline_param->output_background_color = 0;
>     vpp_pipeline_param->filter_flags = VA_FILTER_SCALING_HQ;
>     vpp_pipeline_param->filters = vpp_filter_bufs;
>     vpp_pipeline_param->num_filters = vpp_num_filter_bufs;
>
>     /* Update reference frames for deinterlacing while the
>      * parameter buffer is still mapped */
>     vpp_forward_references[0] = forward_reference;
>     vpp_pipeline_param->forward_references = vpp_forward_references;
>     vpp_pipeline_param->num_forward_references = vpp_num_forward_references;
>     vpp_pipeline_param->backward_references = vpp_backward_references;
>     vpp_pipeline_param->num_backward_references = vpp_num_backward_references;
>
>     va_status = vaUnmapBuffer(va_dpy, vpp_pipeline_buf);
>     CHECK_VASTATUS(va_status, "vaUnmapBuffer");
>
>     /* Apply the filter chain */
>     va_status = vaRenderPicture(va_dpy, vpp_context, &vpp_pipeline_buf, 1);
>     CHECK_VASTATUS(va_status, "vaRenderPicture");
>
>     vaEndPicture(va_dpy, vpp_context);
>     return 0;
> }
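>
> The deinterlacing filter buffer itself (setup not shown above) is
> created in the usual va_vpp.h way, roughly:
>
> VAProcFilterParameterBufferDeinterlacing deint;
> deint.type      = VAProcFilterDeinterlacing;
> deint.algorithm = VAProcDeinterlacingMotionAdaptive;
> deint.flags     = 0;
> va_status = vaCreateBuffer(va_dpy, vpp_context,
>                            VAProcFilterParameterBufferType,
>                            sizeof(deint), 1, &deint,
>                            &vpp_filter_bufs[0]);
> CHECK_VASTATUS(va_status, "vaCreateBuffer");
> vpp_num_filter_bufs = 1;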
>
> Perhaps my overall approach simply isn't compatible with MA, i.e. I
> should be loading the 'current incoming pixels' into the 'next source
> surface' and then passing this next source surface as a forward
> reference to the current source?
>
> Feedback welcome.
>
> - Steve
>
> --
> Steven Toth - Kernel Labs
> http://www.kernellabs.com