[Bug 744106] Maximum latency handling broken everywhere
bugzilla at gnome.org
Sun Feb 8 01:11:48 PST 2015
GStreamer | gstreamer (core) | unspecified
--- Comment #15 from Sebastian Dröge (slomo) <slomo at coaxion.net> 2015-02-08 09:11:41 UTC ---
(In reply to comment #12)
> That said, I'd really like to see Wim's input as he designed this latency thing
> in the first place and put the max=-1 in basesrc.
Same here, and the max=-1 in basesrc is one of the things that currently seem
wrong to me. Which is why I CCd Wim in this bug from the beginning :) I'll talk
to him tomorrow.
About the queues I agree, and otherwise we seem to agree as well?
Regarding leaky queues... audioringbuffer is also one, and we have it in every
audio source and sink. v4l2src also internally has a leaky queue: it allocates
a maximum number of buffers, and if downstream is blocked for longer than that,
it can't capture anything for a while and behaves like a leaky-upstream queue.
(In reply to comment #13)
> All this seems to make sense now, except for the max=-1 in basesrc. Let's wait
> for Wim's comment. Meanwhile:
> > max = MIN (upstream_max, own_max)
> My impression was that we should completely ignore upstream max, and I can't
> see why it makes a difference if upstream_max is smaller than own max.
Why should we completely ignore upstream max? In which cases? If any element
has unlimited buffering? Maybe, maybe not. I'm not entirely sure.
Consider the case of
> unbound-queue ! leaky-queue ! some-element ! unbound-queue
where leaky-queue has a max latency of X. Now if some-element blocks for more
than X, things will go wrong. But in your case some-element would have no
information about that, because the unbound queue at the beginning would make
the max latency infinite. So some-element could just decide to wait more than X
on its sinkpad before unblocking. An example of such an element could be any
aggregator-based element that also has other sinkpads, with a min latency
greater than X on one of those. There would be no way to detect inside the
element that the latency configuration is impossible (aggregator has max<min
detection code like GstBin), and GstBin would also have no way to know this, as
it will get an infinite maximum latency for this part of the pipeline.
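This is why combining with MIN matters: the pipeline can only absorb as much blocking as its most constrained bounded buffer, so an unlimited upstream must not mask a smaller bound further along. A minimal sketch in plain C (hypothetical names, not the actual GstQuery API; `LATENCY_NONE` mimics GST_CLOCK_TIME_NONE meaning "unlimited"):

```c
#include <stdint.h>

/* Hypothetical sketch of max-latency accumulation along a chain.
 * LATENCY_NONE stands in for GST_CLOCK_TIME_NONE, i.e. unlimited
 * buffering; any bounded value wins over it. */
#define LATENCY_NONE UINT64_MAX

/* max = MIN (upstream_max, own_max), treating NONE as infinite:
 * the smallest bounded buffer limits how long downstream may block. */
static uint64_t
combine_max_latency (uint64_t upstream_max, uint64_t own_max)
{
  if (upstream_max == LATENCY_NONE)
    return own_max;
  if (own_max == LATENCY_NONE)
    return upstream_max;
  return upstream_max < own_max ? upstream_max : own_max;
}
```

With this rule, in `unbound-queue ! leaky-queue ! some-element`, the unbound queue reports NONE but the leaky queue folds in its bound X, so some-element (and GstBin) sees max = X and can detect max < min.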
(In reply to comment #14)
> Maybe I should mention something:
> ... ! decoder ! queue ! sink
> It is not because the queue thinks it can accumulate N ms of data (hence
> contributes to max latency) that it will actually be possible for the queue
> to be filled. Buffer pools might have a maximum number of buffers that
> prevents this. I think it's all linked together. I wonder what the behaviour
> of the latency query/message from the decoder should be (regardless of
> whether it's its own pool or downstream's, the decoder decides the
> allocation and pool).
Yes, the allocation query with the min/max buffer counts seems to be somehow
connected to the latency too. But I think it's a separate problem that can be
solved independently. Basically, the max buffer count could limit the maximum
latency of the element, depending on what other buffering mechanisms the
element has.
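The connection can be sketched as a simple bound: if a pool caps the number of buffers in flight, the element cannot queue more than that many buffer durations. A hypothetical illustration in plain C (names are assumptions; `max_buffers == 0` meaning "unlimited" follows the ALLOCATION query convention, and `LATENCY_NONE` mimics GST_CLOCK_TIME_NONE):

```c
#include <stdint.h>

#define LATENCY_NONE UINT64_MAX   /* stands in for GST_CLOCK_TIME_NONE */

/* Hypothetical sketch: a buffer pool that allows at most max_buffers
 * buffers in flight bounds the element's maximum latency contribution
 * to max_buffers * buffer_duration. A max of 0 means the pool places
 * no limit, so it does not bound the latency at all. */
static uint64_t
max_latency_from_pool (unsigned max_buffers, uint64_t buffer_duration_ns)
{
  if (max_buffers == 0)
    return LATENCY_NONE;          /* pool does not bound latency */
  return (uint64_t) max_buffers * buffer_duration_ns;
}
```

For example, a pool of 4 buffers at ~33 ms per frame could never buffer more than ~132 ms, no matter how much a downstream queue claims it can accumulate.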