[Bug 744106] Maximum latency handling broken everywhere

GStreamer (bugzilla.gnome.org) bugzilla at gnome.org
Sat Feb 7 10:25:09 PST 2015


https://bugzilla.gnome.org/show_bug.cgi?id=744106
  GStreamer | gstreamer (core) | unspecified

--- Comment #5 from Nicolas Dufresne (stormer) <nicolas.dufresne at collabora.co.uk> 2015-02-07 18:25:05 UTC ---
That's still not quite what I mean. Latency, as designed (and as documented), is
something that only makes sense per sink. There is no per-element
blocking/dropping of buffers; blocking or dropping is only the effect.

The way I describe the parameters of gst_video_decoder_set_latency() and
similar methods is as the element's "contribution" to the global latency.
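
For reference, the parameters in question are just the min/max pair in the
GstVideoDecoder API (gst-plugins-base):

void gst_video_decoder_set_latency (GstVideoDecoder * decoder,
                                    GstClockTime      min_latency,
                                    GstClockTime      max_latency);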

Also, the decoder and the source latency are exactly the same thing if you stop
describing them in terms of blocking and dropping. Take v4l2src. The minimum
latency is generally configured to 1 frame (because we don't know better and we
want to give it some time, unless the driver says otherwise, but that's recent).
It cannot be zero, since you have to store the frame entirely before delivering
it. If we reported zero while it is actually more than zero, the only effect
would be that synchronization with other sources would not be right. This is the
same idea with audio buffers: we fill an entire fixed-size audio chunk before we
let it go, which causes latency.
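
To put invented numbers on that, just for illustration: one full frame at
30 fps, or one 1024-sample chunk at 48 kHz, already gives you this kind of
minimum.

#include <gst/gst.h>

int
main (void)
{
  /* A capture source must store one whole frame before delivering it. */
  GstClockTime video_min = gst_util_uint64_scale_int (GST_SECOND, 1, 30);

  /* An audio source fills a whole fixed-size chunk before letting it go. */
  GstClockTime audio_min = gst_util_uint64_scale_int (GST_SECOND, 1024, 48000);

  g_print ("video min: %" GST_TIME_FORMAT "\n", GST_TIME_ARGS (video_min));
  g_print ("audio min: %" GST_TIME_FORMAT "\n", GST_TIME_ARGS (audio_min));
  return 0;
}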

The maximum latency of v4l2src is the size of the capture queue we allocate, so
it's actually configurable. In your terms, if this queue is bigger, then
downstream can delay the rendering longer. Downstream being able to block is the
effect of having a bigger queue inside the src. When the pipeline starts
delaying more than that, buffers start to get lost. Depending on the chosen
buffer-dropping algorithm, you may end up not rendering a single frame, because
they are all late.
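
Here is a sketch of what that looks like in a source's query handler. This is
not the real v4l2src code: MySrc, fps_n/fps_d and num_buffers are made-up
fields, and the chain-up assumes the usual G_DEFINE_TYPE naming.

static gboolean
my_src_query (GstBaseSrc * bsrc, GstQuery * query)
{
  MySrc *self = MY_SRC (bsrc);

  if (GST_QUERY_TYPE (query) == GST_QUERY_LATENCY && self->fps_n > 0) {
    GstClockTime frame = gst_util_uint64_scale_int (GST_SECOND,
        self->fps_d, self->fps_n);

    /* min: one whole frame has to be captured first;
     * max: the capture queue bounds how long downstream may delay. */
    gst_query_set_latency (query, TRUE, frame, self->num_buffers * frame);
    return TRUE;
  }

  return GST_BASE_SRC_CLASS (my_src_parent_class)->query (bsrc, query);
}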

For the v4l2 decoder, the min latency is how many buffers the driver will
accumulate before it lets the first buffer out. It's actually the codec
observation size, and it imposes a minimum amount of buffering. Decoders could
be smarter, but this is what nearly all of them do. It remains that the min
latency has to be the worst case, so I think this is the right value and we
agree.

The decoder maximum latency, in the way I described it, is exactly the same as
for v4l2src, except that it's the total number of frames that can be held inside
the decoder. If the base class was not preventing it, it would be expressed in
terms of the maximum of the input and the output queue sizes. But we can't,
since the decoder base class prevents this parallelism. That's why all decoders
using that base class must set this as [X - X]. A decoder without latency would
not contribute; as the query is a sum, it's natural that these values are
[0 - 0], not [0 - infinity/-1].
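
Expressed as code (a hypothetical GstVideoDecoder subclass, where
observation_frames stands for the codec observation size), that [X - X] and the
[0 - 0] no-contribution case look like this:

#include <gst/video/gstvideodecoder.h>

static void
my_dec_update_latency (GstVideoDecoder * dec, guint observation_frames,
    gint fps_n, gint fps_d)
{
  GstClockTime x = 0;

  /* The base class serializes input and output, so the observation
   * size is also the most the decoder can ever hold: [X - X].
   * A decoder that buffers nothing naturally reports [0 - 0]. */
  if (fps_n > 0)
    x = gst_util_uint64_scale_int (GST_SECOND * observation_frames,
        fps_d, fps_n);

  gst_video_decoder_set_latency (dec, x, x);
}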

Now what I'm wondering is whether we aren't simply disagreeing on the meaning of
-1. I always assumed -1 means infinity, not "I don't know". An element that
doesn't know should pick something sensible, or assume it's not contributing any
latency.

In the simple filter case, where you get a buffer, transform it and let it go,
"I don't know" means "I don't contribute to the latency in any way". Which is
nice, because all the element has to do is nothing (just forward the query).
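
Both behaviours fit the same pattern when answering the LATENCY query. Here is a
hypothetical helper (sinkpad is the element's sink pad, my_min/my_max its
contribution):

static gboolean
handle_latency_query (GstPad * sinkpad, GstQuery * query,
    GstClockTime my_min, GstClockTime my_max)
{
  gboolean live;
  GstClockTime min, max;

  /* Forward upstream first; the "I don't contribute" case stops here. */
  if (!gst_pad_peer_query (sinkpad, query))
    return FALSE;

  if (my_min == 0 && my_max == 0)
    return TRUE;                     /* [0 - 0] is the neutral contribution */

  /* Otherwise add our contribution to the sum. */
  gst_query_parse_latency (query, &live, &min, &max);
  min += my_min;
  if (max == GST_CLOCK_TIME_NONE || my_max == GST_CLOCK_TIME_NONE)
    max = GST_CLOCK_TIME_NONE;       /* -1/NONE read as infinity */
  else
    max += my_max;
  gst_query_set_latency (query, live, min, max);

  return TRUE;
}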

An element like a decoder can't pretend to contribute to the min latency without
contributing a max latency of at least the same value. That would be a lie. But
treating "I don't know" as infinity is, to me, a bigger lie.
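
If you accept that reading, the rule above is just an invariant an element could
assert before advertising anything (hypothetical helper again):

static void
set_contribution (GstQuery * query, gboolean live,
    GstClockTime min, GstClockTime max)
{
  /* You can't claim a minimum without at least as much maximum,
   * and NONE (-1) means infinity here, not "I don't know". */
  g_return_if_fail (max == GST_CLOCK_TIME_NONE || max >= min);

  gst_query_set_latency (query, live, min, max);
}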

-- 
Configure bugmail: https://bugzilla.gnome.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the QA contact for the bug.
You are the assignee for the bug.

