[Bug 744106] Maximum latency handling broken everywhere
bugzilla at gnome.org
Sat Feb 7 15:37:33 PST 2015
GStreamer | gstreamer (core) | unspecified
--- Comment #11 from Sebastian Dröge (slomo) <slomo at coaxion.net> 2015-02-07 23:37:29 UTC ---
(In reply to comment #6)
> OK, I got it. Both are actually the same because all buffering in live
> pipelines is leaky, even if the element just blocks. And every element can only
> have a positive impact or 0 on the max latency.
That statement is actually wrong, bringing us back to square one. E.g. consider
an unbound queue followed by a leaky queue. The unbound queue would report
-1=infinity as maximum latency, while the leaky queue would bring that down to
its own limit again. So we definitely need something to distinguish blocking
and leaky queueing for the latency reporting, instead of just having a single
maximum latency value.
Now Nicolas said that blocking or leaking does not make a difference and I
disagree with that. In the above example we should all agree that an unbound
queue would increase the maximum latency to infinity. However if we afterwards
have a leaky queue that can only hold 1s, our effective maximum latency is 1s.
If we configure a latency >1s, the leaky queue will start leaking and things go
wrong. But if instead we have a non-leaky queue with a 1s limit after the
unbound queue, we still have an effective maximum latency of infinity.
So at least this (and that our queues don't report latency properly) needs to
be fixed somehow. What Olivier said in comment #2 would only be correct if we
always assume that nothing but a source has leaky buffering, but that's already
not true with our audio sinks. And we definitely need to document the expected
behaviour in more detail in the documentation.
Maybe we can just consider it the normal case that elements don't have leaky
buffering, in which case we can let the base classes and elements behave like
Olivier said. And elements that actually have leaky buffering will have
to override the latency query handling. Specifically this would mean that
sources would always set whatever latency they have, and especially don't set
-1 as maximum latency unless they really have an unbound buffer internally. And
that filter/decoder/encoder/etc base classes do:
> if (upstream_max == -1 || own_max == -1)
>   max = -1;
> else
>   max = upstream_max + own_max;
Because if either upstream or the element itself has infinite buffering, the
overall buffering will be infinite. And otherwise we just add our own limited amount of
(non-leaky!) buffering to the upstream value (and own_max >= own_min, always...
so many elements will set it to the same value). If an element has leaky
buffering, it would instead do something like
> max = MIN (upstream_max, own_max)
in the overridden latency query handling.
Sounds like a plan, does that make sense?
For Jan's problem, I think a source can consider the query content unset and
just set whatever it wants in there without reading anything. But there
actually is a problem further downstream if upstream does not introduce any
latency and keeps the values unset (i.e. [0,-1]). Maybe downstream elements can
assume that [0,-1] means that both are unset? And everything else means both
are set? However a live source that does not set any latency would seem quite
weird, and maybe this is really a case like Olivier mentioned in comment #9.