[Bug 105145] vaExportSurfaceHandle interaction with surface interlaced flag prevents switching on vaapi deinterlacing dynamically
bugzilla-daemon at freedesktop.org
Mon Jun 11 12:58:15 UTC 2018
https://bugs.freedesktop.org/show_bug.cgi?id=105145
--- Comment #11 from k.philipp at gmail.com ---
Hi again,
> The PR got merged.
I do have a patch ready (it's just a few lines anyway), but libva has not
seen a release with the PR yet, so it is unclear which API version the
patch should depend on. We have also encountered even more corner cases,
which prompt me to follow up now on a comment to the PR,
https://github.com/intel/libva/pull/196#issuecomment-371769757 - see below.
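For concreteness, the gate would be a simple compile-time check along
these lines (a sketch; the version number below is a placeholder until a
release containing the PR is tagged):

#include <va/va.h>

/* Placeholder: whichever VA-API version the first release with the new
 * API ends up carrying. */
#if VA_CHECK_VERSION(1, 1, 0)
#define HAVE_VA_EXPORT_SURFACE_HANDLE 1
#endif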
In this bug report, I described the approach (or workaround, rather) we
took in Kodi to get useful vaapi-accelerated playback working on most
videos. In the meantime, we have encountered further edge cases, namely:
1. The described approach will not work for 1080p HEVC and VP9 videos:
the AMD decoder does not support the interlaced (field) format for HEVC
and VP9 (see si_get_video_param) and re-allocates the decode surfaces to
the progressive format when decoding the first picture, after which they
can no longer be used as post-processing input. This could be fixed by
setting the export usage hint on the post-processing output surface,
since then both the input and the output of post-processing are
progressive (see the sketch after this list).
2. We encountered a DVB H.264 PAFF mixed progressive/interlaced video
with a resolution of 1920x1088, which missed the size check for always
inserting post-processing by a few pixels. This can be fixed by making
the check more lenient.
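To illustrate point 1: the post-processing output surface would be
allocated with the usage hint from the PR, roughly as below (a sketch;
the function name, format and error handling are just for illustration):

#include <va/va.h>

static VAStatus create_vpp_output_surface(VADisplay dpy,
                                          unsigned int width,
                                          unsigned int height,
                                          VASurfaceID *surface)
{
    /* Hint that the surface is written by post-processing and will be
     * exported; the driver can then allocate it progressive from the
     * start. */
    VASurfaceAttrib attrib = {
        .type          = VASurfaceAttribUsageHint,
        .flags         = VA_SURFACE_ATTRIB_SETTABLE,
        .value.type    = VAGenericValueTypeInteger,
        .value.value.i = VA_SURFACE_ATTRIB_USAGE_HINT_VPP_WRITE |
                         VA_SURFACE_ATTRIB_USAGE_HINT_EXPORT,
    };

    return vaCreateSurfaces(dpy, VA_RT_FORMAT_YUV420, width, height,
                            surface, 1, &attrib, 1);
}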
As you can see, these issues are not unfixable (or un-workaroundable),
but in the long run edge cases of this kind will keep cropping up and
causing problems. Our alternative workaround would be to re-initialize
the whole decode pipeline whenever the stream switches between
interlaced and progressive frames, losing the already decoded frames
still in the pipeline in the process.
As this is also far from optimal, and Christian König seemed positively
inclined to switch to progressive by default, I want to investigate that
route after all. The only missing piece seems to be de-weaving the
frames when switching to the interlaced format, and I can have a look at
that (though I can't promise a time frame).
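Per plane, de-weaving is just splitting the interleaved rows into the
two field buffers. A CPU-side sketch to illustrate the data movement (in
the driver this would of course be a GPU copy; deweave_plane is a
made-up name):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

static void deweave_plane(const uint8_t *frame, uint8_t *top,
                          uint8_t *bottom, size_t width, size_t height,
                          size_t pitch)
{
    for (size_t y = 0; y < height; y++) {
        /* Even rows go to the top field, odd rows to the bottom. */
        uint8_t *dst = (y & 1) ? bottom : top;
        memcpy(dst + (y / 2) * pitch, frame + y * pitch, width);
    }
}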
However, before investing time into that, I want to ask whether it would
be possible to go one step further: is the interlaced format necessary
in the first place? With PAFF streams (DVB seems to be the most common
source, but there are others), surfaces would have to be reallocated and
weaved/de-weaved very often, possibly every few frames. Wouldn't it be
more efficient, then, to ditch the interlaced format and have
post-processing accept progressive-format frames for deinterlacing, as
intel-vaapi-driver seems to do?
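To be clear about what that would mean for clients: the deinterlacing
setup itself would stay as it is today, e.g. a filter buffer like the
following (standard va_vpp API; nothing in it depends on the surface
layout):

#include <va/va.h>
#include <va/va_vpp.h>

static VAStatus create_deinterlace_filter(VADisplay dpy,
                                          VAContextID vpp_ctx,
                                          VABufferID *filter)
{
    VAProcFilterParameterBufferDeinterlacing deint = {
        .type      = VAProcFilterDeinterlacing,
        .algorithm = VAProcDeinterlacingMotionAdaptive,
        .flags     = 0, /* e.g. VA_DEINTERLACING_BOTTOM_FIELD_FIRST */
    };

    return vaCreateBuffer(dpy, vpp_ctx, VAProcFilterParameterBufferType,
                          sizeof(deint), 1, &deint, filter);
}

The only driver-side change would be to accept progressive-layout
surfaces as pipeline input for such a filter.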