[Bug 744106] Maximum latency handling broken everywhere

GStreamer (bugzilla.gnome.org) bugzilla at gnome.org
Sat Feb 7 09:35:38 PST 2015


https://bugzilla.gnome.org/show_bug.cgi?id=744106
  GStreamer | gstreamer (core) | unspecified

--- Comment #3 from Sebastian Dröge (slomo) <slomo at coaxion.net> 2015-02-07 17:07:26 UTC ---
I think our main disagreement in your example here is about what the video
decoder does with the latency query.

We agree that the source reports [0,0], right? And the decoder has minimum
latency X, the *maximum* time it will possibly delay a frame due to reordering
(not the minimum time, because we use this value to tell the sink to wait at
least that long... and if it were the minimum, almost all frames would be late).
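To make that timing argument concrete, here is a rough sketch of the sink-side
synchronization, not the real GstBaseSink code and with illustrative names: the
sink waits until the clock reaches running time plus the configured (minimum)
latency before rendering.

#include <gst/gst.h>

/* Rough sketch: the render deadline for a buffer is running_time plus the
 * configured minimum latency.  If the decoder reported anything smaller
 * than its worst-case reordering delay, frames that were actually held
 * back that long would arrive after this deadline and be late/dropped. */
static GstClockTime
sketch_render_deadline (GstClockTime base_time, GstClockTime running_time,
    GstClockTime configured_min_latency)
{
  return base_time + running_time + configured_min_latency;
}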

Now the disagreement is about the maximum latency. Does the decoder have a
maximum latency of -1, or of Y>=X?

a)
My original reasoning was that it would be -1, because the decoder itself does
not care about how much latency downstream will add, while e.g. the audio
source cares a lot because it continuously fills a ringbuffer that will simply
overflow if downstream adds too much latency (i.e. waits too long before
unblocking). However, the decoder has at least X of internal buffering (how
else could it delay a frame by X?), and thus increases the maximum latency by
X, as it adds that much buffering to the pipeline.
So in my case the decoder changes the maximum latency in the query to X, i.e.
it tells downstream [X,X].
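As a sketch of interpretation a), the decoder's latency handling would look
roughly like this; it's a helper meant to be called from the element's src-pad
query function, and reorder_delay (the X above) is an illustrative parameter,
not real API.

#include <gst/gst.h>

/* Interpretation a): add the worst-case reordering delay X to the upstream
 * minimum, and add the internal buffering (also X here) to the upstream
 * maximum, so [0,0] from the source becomes [X,X]. */
static gboolean
handle_latency_query_a (GstPad * sinkpad, GstClockTime reorder_delay,
    GstQuery * query)
{
  gboolean live;
  GstClockTime min, max;

  /* first ask upstream */
  if (!gst_pad_peer_query (sinkpad, query))
    return FALSE;

  gst_query_parse_latency (query, &live, &min, &max);

  /* minimum latency: worst-case time a frame is delayed by reordering */
  min += reorder_delay;

  /* maximum latency: the decoder itself does not limit how long downstream
   * may block, but its internal buffering increases the upstream maximum */
  if (max != GST_CLOCK_TIME_NONE)
    max += reorder_delay;

  gst_query_set_latency (query, live, min, max);

  return TRUE;
}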

b)
I think what you are saying is that the decoder has a maximum latency Y >= X,
because it has some internal buffering that allows everything to be delayed
further. So it would tell downstream a maximum latency of Y, giving [X,Y]
overall. If it had -1 in your understanding, it would return [X,-1], because
-1 would mean that its internal buffer is infinitely large.
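A corresponding sketch of interpretation b), again with illustrative parameter
names (reorder_delay_x and buffer_capacity_y are assumptions, not real fields),
would differ only in how the maximum is combined.

#include <gst/gst.h>

/* Interpretation b): the decoder's maximum latency Y >= X is how much it
 * can buffer internally before it has to block, with GST_CLOCK_TIME_NONE
 * (-1) meaning an unbounded internal buffer.  It is added to the upstream
 * maximum, so [0,0] from the source becomes [X,Y], or [X,-1]. */
static gboolean
handle_latency_query_b (GstPad * sinkpad, GstClockTime reorder_delay_x,
    GstClockTime buffer_capacity_y, GstQuery * query)
{
  gboolean live;
  GstClockTime min, max;

  if (!gst_pad_peer_query (sinkpad, query))
    return FALSE;

  gst_query_parse_latency (query, &live, &min, &max);

  min += reorder_delay_x;

  /* -1 on either side means "can be delayed indefinitely" */
  if (max == GST_CLOCK_TIME_NONE || buffer_capacity_y == GST_CLOCK_TIME_NONE)
    max = GST_CLOCK_TIME_NONE;
  else
    max += buffer_capacity_y;

  gst_query_set_latency (query, live, min, max);

  return TRUE;
}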


Now what seems odd to me is that in your case b) the meaning of maximum latency
is different in the source than it is in the decoder. In the source it means
the maximum amount of time that downstream can block until data gets lost. In
the decoder it means the maximum amount of time the decoder can buffer things
before it blocks (and not the maximum amount of time until data gets lost,
because that's infinity for this specific decoder).

In my case the maximum latency always means the maximum time that downstream
can delay data until data gets lost. However, any element that has internal
buffering will add that amount of buffering to it (the buffering is not the
maximum latency itself, but how much it *improves*, i.e. increases, the maximum
latency). A decoder, in my understanding, would only have a maximum latency of
its own if it cannot block on downstream indefinitely without losing data,
which would mean that the decoder *reduces* the maximum latency by that amount.
In general such an element will also have its own internal buffering, so it
would increase the maximum latency in the query by the amount of buffering and
decrease it by its own maximum latency.
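Taken literally, the combination rule described in the previous paragraph would
look roughly like this (finite values only, illustrative names,
GST_CLOCK_TIME_NONE handling left out for brevity).

#include <gst/gst.h>

/* Literal arithmetic sketch of the rule above: an element adds the
 * buffering it introduces to the maximum latency it got from upstream and
 * subtracts its own maximum latency, i.e. the longest it can itself be
 * delayed before it starts losing data. */
static GstClockTime
combine_max_latency (GstClockTime upstream_max, GstClockTime own_buffering,
    GstClockTime own_max_latency)
{
  GstClockTime max = upstream_max + own_buffering;

  /* GstClockTime is unsigned, so clamp instead of underflowing */
  return (own_max_latency < max) ? max - own_max_latency : 0;
}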



I'm not sure which of the two versions is correct, but in both cases we would
need to change a lot of code :)
I prefer your version because it is simpler, but OTOH the two different
meanings of maximum latency seem odd to me.

-- 
Configure bugmail: https://bugzilla.gnome.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the QA contact for the bug.
You are the assignee for the bug.

