[Bug 738302] x264enc stalls pipeline when tune=zerolatency

GStreamer (bugzilla.gnome.org) bugzilla at gnome.org
Sat Oct 11 07:56:31 PDT 2014


https://bugzilla.gnome.org/show_bug.cgi?id=738302
  GStreamer | gst-plugins-ugly | git

--- Comment #4 from Nicolas Dufresne (stormer) <nicolas.dufresne at collabora.co.uk> 2014-10-11 14:56:29 UTC ---
Thanks. Ok, here's the situation: in zerolatency,
x264_encoder_maximum_delayed_frames() returns 0, so we set the pool size to 1.
Upstream does not add more to it, hence we only have 1 buffer in the pool, and
max is set to 1 too, so the pool will not grow.

That looks ridiculously small, but considering there is no other thread, by
the time push() returns that single buffer should have been released back to
the pool. Otherwise some element is lying about its latency (or forgot that
latency has an impact on the allocation).

It would seem that videorate is introducing this 1-buffer latency. This might
be needed for observation; I don't know much about this element (yet).
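
If it does keep one buffer around, reporting one frame of latency would look
roughly like this (a sketch only: it assumes a GstBaseTransform subclass with
the usual G_DEFINE_TYPE boilerplate providing parent_class, and the hard-coded
framerate stands in for whatever was negotiated on the sink pad):

#include <gst/gst.h>
#include <gst/base/gstbasetransform.h>

static gboolean
video_rate_query_sketch (GstBaseTransform * trans, GstPadDirection direction,
    GstQuery * query)
{
  gboolean res;

  /* let the default implementation forward the query upstream first */
  res = GST_BASE_TRANSFORM_CLASS (parent_class)->query (trans, direction,
      query);

  if (res && direction == GST_PAD_SRC
      && GST_QUERY_TYPE (query) == GST_QUERY_LATENCY) {
    gboolean live;
    GstClockTime min, max, frame;
    gint fps_n = 30, fps_d = 1;  /* placeholder for the negotiated input rate */

    if (fps_n > 0) {
      /* holding on to one input buffer adds one frame of latency */
      frame = gst_util_uint64_scale_int (GST_SECOND, fps_d, fps_n);

      gst_query_parse_latency (query, &live, &min, &max);
      min += frame;
      if (GST_CLOCK_TIME_IS_VALID (max))
        max += frame;
      gst_query_set_latency (query, live, min, max);
    }
  }

  return res;
}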

On the other hand, accepting such a low buffer pool size might represent an
overhead in the presence of queues, rather than improving performance. That
seems like a different subject though.

So, as this is master, I'd propose a proper solution: read the videorate code
and confirm whether it indeed has one buffer of latency; if so, implement both
the latency query (if the upstream rate is known) and increase the required
pool size. Note that videorate seems like a good candidate for handling
dynamic input rates and gently increasing the pipeline latency.
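
The "increase the required pool size" half could look roughly like this, again
a sketch for a GstBaseTransform subclass with the usual parent_class
boilerplate, not actual videorate code:

#include <gst/gst.h>
#include <gst/base/gstbasetransform.h>

static gboolean
video_rate_propose_allocation_sketch (GstBaseTransform * trans,
    GstQuery * decide_query, GstQuery * query)
{
  gboolean res;

  /* let the base class forward the query downstream first, so the pool
   * proposed by the downstream element (x264enc here) is already in it */
  res = GST_BASE_TRANSFORM_CLASS (parent_class)->propose_allocation (trans,
      decide_query, query);

  if (res && gst_query_get_n_allocation_pools (query) > 0) {
    GstBufferPool *pool = NULL;
    guint size, min, max;

    gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max);
    /* we keep a reference to the previous buffer, so ask for one more */
    min += 1;
    if (max != 0 && max < min)
      max = min;
    gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max);
    if (pool)
      gst_object_unref (pool);
  }

  return res;
}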
