[Libva] [RFC] New postprocessing flags to vaPutSurface()

Bian, Jonathan jonathan.bian at intel.com
Thu Apr 15 22:03:26 PDT 2010


Hi Gwenole,

Please see my comments below.

Regards,
Jonathan

>-----Original Message-----
>From: Gwenole Beauchesne [mailto:gbeauchesne at splitted-desktop.com]
>Sent: Thursday, April 15, 2010 2:48 AM
>To: Bian, Jonathan
>Cc: Libva at lists.freedesktop.org
>Subject: RE: [Libva] [RFC] New postprocessing flags to vaPutSurface()
>
>Hi Jonathan,
>
>Sorry, I had missed this mail.
>
>On Wed, 7 Apr 2010, Bian, Jonathan wrote:
>
>> The new post-processing flags you proposed look fine to me. As for
>> the naming, I don't have a strong opinion as long as the names convey
>> the different levels of trade-off. Perhaps we can use something like:
>>
>> VA_FILTER_LQ_SCALING -> VA_FILTER_SCALING_FAST
>> VA_FILTER_MQ_SCALING -> VA_FILTER_SCALING_DEFAULT
>> VA_FILTER_HQ_SCALING -> VA_FILTER_SCALING_HQ
>
>Agreed, this looks better. Thanks.
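
For illustration, usage might then look like this (the flag bit values
below are placeholders until actual values are assigned in va.h):

    /* Proposed scaling quality flags -- placeholder bit values. */
    #define VA_FILTER_SCALING_DEFAULT  0x00000000
    #define VA_FILTER_SCALING_FAST     0x00000100
    #define VA_FILTER_SCALING_HQ       0x00000200

    /* Request high-quality scaling when blitting a decoded surface
     * to an X drawable. */
    vaPutSurface(va_dpy, surface, x_drawable,
                 0, 0, src_width, src_height,  /* source rectangle */
                 0, 0, dst_width, dst_height,  /* destination rectangle */
                 NULL, 0,                      /* no clip rectangles */
                 VA_FRAME_PICTURE | VA_FILTER_SCALING_HQ);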
>
>> I have been thinking a little bit about how to support more advanced
>> video post-processing capabilities with the API. As these advanced
>> features will likely require passing more complex data structures than
>> just flags or integer values, one possible solution is to use
>> vaBeginPicture/vaRenderPicture/vaEndPicture for passing video
>> post-processing data structures as buffers. For example, we can add a
>> new VAVideoProcessingBufferType and a generalized
>> VAVideoProcessingParameter data structure. This would make it easier to
>> specify things like reference frames for motion-compensated
>> de-interlacing. This should work for pre-processing as well, if the
>> source picture to be encoded needs some pre-processing; and since pre-
>> and post-processing share a lot of common features, they can be treated
>> essentially the same.
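
To make the idea concrete, a rough sketch follows; every name and field
below is hypothetical, not a proposed final definition:

    /* Hypothetical sketch only -- none of this exists in va.h today. */
    typedef enum {
        VAVideoProcessingDeinterlacing,
        VAVideoProcessingDenoise,
        VAVideoProcessingSharpening
    } VAVideoProcessingType;

    typedef struct _VAVideoProcessingParameter {
        VAVideoProcessingType type;      /* algorithm to run */
        VASurfaceID  source_surface;     /* decoded surface, read-only */
        VASurfaceID *reference_surfaces; /* e.g. neighboring frames for
                                          * motion-compensated
                                          * de-interlacing or FRC */
        unsigned int num_references;
        unsigned int flags;              /* algorithm-specific options */
    } VAVideoProcessingParameter;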
>
>The idea is appealing. At first sight, I thought there could be a problem
>if some postprocessing algorithms need to operate on up-scaled surfaces.
>On second thought, I don't know of any. So, your
>VAVideoProcessingBufferType looks interesting.
>
>However, I see the vaBeginPicture() .. vaEndPicture() functions as used
>for the decoding process and vaPutSurface() as used for the display
>process. I mean, those steps could be completely separated, even from a
>(helper) library's point of view.
>
>Concretely, would this model work in the following scenario?
>
>* decoder library:
>- vaBeginPicture()
>- vaRenderPicture() with PicParam, SliceParam, SliceData
>- vaEndPicture()
>
>* main application:
>- vaBeginPicture()
>- vaRenderPicture() with VideoProcessing params
>- vaEndPicture()
>
>i.e. decouple the decoding and postprocessing steps while making sure the
>second vaRenderPicture() in the user application won't tell the driver to
>decode the bitstream again.

The vaRenderPicture() call with decode parameters (PicParam, SliceParam ...)
would cause the hardware driver to generate a command buffer for decoding,
while the vaRenderPicture() call with videoproc parameters would generate a
separate command buffer for post-processing, so the bitstream would not be
decoded again.
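
Concretely, the two passes might look along these lines (a sketch only;
the buffer type and parameter structure are the hypothetical ones
sketched above, and decode_ctx/vproc_ctx are assumed to be separate
contexts):

    /* Pass 1 (decoder library): builds the decode command buffer. */
    vaBeginPicture(va_dpy, decode_ctx, decoded_surface);
    vaRenderPicture(va_dpy, decode_ctx, &pic_param_buf, 1);
    vaRenderPicture(va_dpy, decode_ctx, &slice_param_buf, 1);
    vaRenderPicture(va_dpy, decode_ctx, &slice_data_buf, 1);
    vaEndPicture(va_dpy, decode_ctx);

    /* Pass 2 (main application): builds a separate post-processing
     * command buffer; the bitstream is not decoded again. */
    VABufferID vproc_buf;
    vaCreateBuffer(va_dpy, vproc_ctx,
                   VAVideoProcessingBufferType,   /* hypothetical */
                   sizeof(vproc_param), 1, &vproc_param, &vproc_buf);
    vaBeginPicture(va_dpy, vproc_ctx, processed_surface);
    vaRenderPicture(va_dpy, vproc_ctx, &vproc_buf, 1);
    vaEndPicture(va_dpy, vproc_ctx);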

>
>> vaPutSurface() could still be the most efficient way to get a decoded
>> frame to the screen if no advanced video processing is required, or if
>> the hardware can't process an image and write the output to memory (e.g.
>> hardware overlay).  But if the hardware is capable of taking an input
>> image from memory, processing it, and writing it out to memory (whether
>> it's a GPU or a fixed-function unit), then the vaRenderPicture path can
>> enable more advanced features.
>
>The vaPutSurface() postproc flags could also be thought of as postproc
>enabling flags, with the VAVideoProcessing structs providing the algorithm
>options, and defaults being chosen if no such options are defined?
>
>So, there are three possible models here:
>
>1) VAVideoProcessing buffers controlling immediate execution of the
>postproc algorithms
>
>2) VAVideoProcessing buffers carrying configuration (e.g. denoise level)
>that is only applied later, if the vaPutSurface() flags request it
>
>3) VAVideoProcessing buffers controlling immediate execution of the
>postproc algorithm and vaPutSurface() flags controlling other postproc
>algorithms with specific defaults.
>
>In short, would it be desirable to keep the decoded surface as is,
>un-processed [for vaGetImage()]? I believe so, and postproc should be
>executed later at vaPutSurface() time.

3) is what I had in mind, i.e. vaRenderPicture() with videoproc parameters
is for video post-processing, with the decoded surface (and other reference
surfaces for things like de-interlacing or FRC) as input and the processed
surface as output (render target). So the decoded surface is not altered.
As part of this operation, we could even specify a GL texture to be the
destination of the output. vaPutSurface() is kept for direct rendering to
window system targets (an X window or pixmap), but vaRenderPicture() would
be the window-system-independent method, with more flexibility.
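
Roughly, and again using the hypothetical videoproc buffer from above
(the input and reference surfaces are specified inside the parameter
buffer):

    /* Post-process decoded_surface into processed_surface; the decoded
     * surface is left intact, so vaGetImage() still sees the raw
     * decoder output. */
    vaBeginPicture(va_dpy, vproc_ctx, processed_surface);
    vaRenderPicture(va_dpy, vproc_ctx, &vproc_buf, 1);
    vaEndPicture(va_dpy, vproc_ctx);

    /* Window-system path: blit the processed surface to an X window
     * or pixmap. */
    vaPutSurface(va_dpy, processed_surface, x_drawable,
                 0, 0, width, height, 0, 0, width, height,
                 NULL, 0, VA_FRAME_PICTURE);

    /* Alternatively, the render target could be bound to a GL texture
     * (e.g. via the libva GLX interop) for window-system-independent
     * output. */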
>
>WDYT?
>
>Regards,
>Gwenole.

