[Mesa-dev] nouveau hardware decoding and vaDeriveImage

Philipp Kerling pkerling at casix.org
Fri Aug 18 14:14:31 UTC 2017


Hi Julien,

thanks for providing some background on the issue. Now it makes a lot
more sense.

On Wednesday, 16.08.2017 at 10:40 +0100, Julien Isorce wrote:
> Hi,
> 
> This issue is tracked here: https://bugs.freedesktop.org/show_bug.cgi?id=98285
> This is due to a limitation in the libva API, which only supports 1 FD
> per VASurface.
> That is enough for the intel-driver because NV12 is only 1 bo. But in
> the nouveau driver, NV12 is 2 separate bos (leaving the interlaced
> problem aside).
OK, I guess we'll have to wait for libva to support this use case first.
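
For reference, the export path that runs into this limitation looks
roughly like the sketch below. It only uses the documented libva entry
points (vaDeriveImage, vaAcquireBufferHandle); the display/surface setup
is assumed to exist already, error handling is trimmed, and fd ownership
details are glossed over. With the DRM PRIME memory type the whole
surface comes back as a single VABufferInfo handle, i.e. one FD:

#include <va/va.h>
#include <va/va_drmcommon.h>
#include <stdio.h>

/* Rough sketch: export a decoded VASurface as a single dmabuf FD.
 * Assumes 'dpy' and 'surface' were set up by the decoding code. */
static int export_surface_fd(VADisplay dpy, VASurfaceID surface)
{
    VAImage image;
    if (vaDeriveImage(dpy, surface, &image) != VA_STATUS_SUCCESS)
        return -1; /* nouveau/AMD NV12: no single bo to derive from */

    VABufferInfo info = { .mem_type = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME };
    if (vaAcquireBufferHandle(dpy, image.buf, &info) != VA_STATUS_SUCCESS) {
        vaDestroyImage(dpy, image.image_id);
        return -1;
    }

    /* info.handle is the one and only FD libva can hand out here,
     * even though the surface may consist of several bos. */
    int fd = (int)info.handle;
    printf("dmabuf fd = %d, pitches[0] = %u\n", fd, image.pitches[0]);

    vaReleaseBufferHandle(dpy, image.buf);
    vaDestroyImage(dpy, image.image_id);
    return fd;
}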

> Note that this libva API limitation is also a problem for AMD
> hardware, because there NV12 is also 2 non-contiguous bos (see the
> functions 'si_video_buffer_create' and 'nouveau_vp3_video_buffer_create'
> in the Mesa source).
If I got you right, this basically means vaDeriveImage with the usual
NV12 configuration won't work on AMD either?
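
(And when vaDeriveImage fails, the only generic fallback I know of is a
vaCreateImage/vaGetImage readback, which is exactly the CPU copy
everybody wants to avoid. A rough sketch, with the NV12 VAImageFormat
assumed to come from vaQueryImageFormats and error handling trimmed:)

#include <va/va.h>

/* Rough sketch of the non-derive fallback: copy the surface into a
 * CPU-visible VAImage. 'fmt' is assumed to be an NV12 VAImageFormat
 * queried via vaQueryImageFormats(). */
static VAStatus readback_surface(VADisplay dpy, VASurfaceID surface,
                                 VAImageFormat *fmt, int width, int height)
{
    VAImage image;
    VAStatus status = vaCreateImage(dpy, fmt, width, height, &image);
    if (status != VA_STATUS_SUCCESS)
        return status;

    /* This triggers a GPU->CPU copy of the whole frame. */
    status = vaGetImage(dpy, surface, 0, 0, width, height, image.image_id);
    if (status == VA_STATUS_SUCCESS) {
        void *data;
        if (vaMapBuffer(dpy, image.buf, &data) == VA_STATUS_SUCCESS) {
            /* data + image.offsets[0] = Y plane,
             * data + image.offsets[1] = interleaved CbCr plane */
            vaUnmapBuffer(dpy, image.buf);
        }
    }

    vaDestroyImage(dpy, image.image_id);
    return status;
}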

> I do not know if this is a HW requirement, or if using one contiguous
> bo with offset handling would work. Currently there are 2 independent
> calls to pipe->screen->resource_create. But maybe it would also work
> if the second resource somehow had the same underlying memory, just
> with an offset.
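
Just to make the "2 independent calls" part concrete for myself, here is
a simplified sketch of what a per-plane NV12 allocation looks like in
Gallium terms. This is not the actual Mesa code; the formats, bind flags
and the missing alignment handling are assumptions on my side:

#include "pipe/p_context.h"
#include "pipe/p_screen.h"
#include "pipe/p_state.h"
#include "pipe/p_defines.h"
#include "pipe/p_format.h"

/* Simplified sketch: allocate the two NV12 planes as separate resources. */
static void create_nv12_planes(struct pipe_context *pipe,
                               unsigned width, unsigned height,
                               struct pipe_resource **y_plane,
                               struct pipe_resource **uv_plane)
{
    struct pipe_resource templ = {0};

    templ.target     = PIPE_TEXTURE_2D;
    templ.format     = PIPE_FORMAT_R8_UNORM;   /* Y plane */
    templ.width0     = width;
    templ.height0    = height;
    templ.depth0     = 1;
    templ.array_size = 1;
    templ.bind       = PIPE_BIND_SAMPLER_VIEW;

    /* First independent bo. */
    *y_plane = pipe->screen->resource_create(pipe->screen, &templ);

    templ.format  = PIPE_FORMAT_R8G8_UNORM;    /* interleaved CbCr plane */
    templ.width0  = width / 2;
    templ.height0 = height / 2;

    /* Second independent bo -> second dmabuf FD on export. */
    *uv_plane = pipe->screen->resource_create(pipe->screen, &templ);
}
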
> 
> The workaround for now is to convert it to RGB (the following
> pipeline works on both nvidia and amd hardware):
> 
> GST_GL_PLATFORM=egl GST_GL_API=gles2 gst-launch-1.0 filesrc
> location=test.mp4 ! qtdemux ! h264parse ! vaapih264dec !
> vaapipostproc ! "video/x-raw(memory:DMABuf), format=RGBA" !
> glimagesink
> 
> In the pipeline above, vaapipostproc will receive an NV12 VASurface
> as input, convert it to an RGBA VASurface, and export it as 1 dmabuf
> FD, which will then be imported by glimagesink using eglCreateImage.
> This is not fully zero-copy, but at least it is zero-cpu-copy, the
> conversion being done on the GPU.
I see, this workaround could indeed be very useful. But if H.264
decoding still gives artifacts anyway (like Ilia said), I do wonder
whether it's worth the trouble to implement this specifically for
nouveau, only to end up never using it in production because the
decoded video is not presentable to the user. Probably best to wait
for libva :-/
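
(As a side note, this is also why the RGBA path is so much simpler on
the consumer side: a single-plane dmabuf maps onto one eglCreateImageKHR
call, roughly like the sketch below. The fourcc, pitch handling and the
EGL/GLES setup are my assumptions here, not code taken from
vaapipostproc or glimagesink:)

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <drm_fourcc.h>

/* Rough sketch: wrap a single RGBA dmabuf FD in an EGLImage and bind it
 * to a GLES texture. Assumes an initialized EGLDisplay and GLES context. */
static GLuint import_rgba_dmabuf(EGLDisplay dpy, int fd,
                                 int width, int height, int pitch)
{
    const EGLint attribs[] = {
        EGL_WIDTH,                     width,
        EGL_HEIGHT,                    height,
        EGL_LINUX_DRM_FOURCC_EXT,      DRM_FORMAT_ABGR8888, /* R,G,B,A in memory */
        EGL_DMA_BUF_PLANE0_FD_EXT,     fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT,  pitch,
        EGL_NONE
    };

    PFNEGLCREATEIMAGEKHRPROC create_image =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");

    EGLImageKHR image = create_image(dpy, EGL_NO_CONTEXT,
                                     EGL_LINUX_DMA_BUF_EXT, NULL, attribs);
    if (image == EGL_NO_IMAGE_KHR)
        return 0;

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* No copy here: the texture aliases the dmabuf memory. */
    image_target_texture(GL_TEXTURE_2D, image);
    return tex;
}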

> 
> Cheers
> Julien

Best regards,
Philipp

