[gstreamer-bugs] [Bug 640610] basesink: QoS events are wrong in live pipelines

GStreamer (bugzilla.gnome.org) bugzilla at gnome.org
Wed Jan 26 13:13:51 PST 2011


https://bugzilla.gnome.org/show_bug.cgi?id=640610
  GStreamer | gstreamer (core) | git

--- Comment #10 from Edward Hervey <bilboed at gmail.com> 2011-01-26 21:13:47 UTC ---
Been pondering this some more.

From part-latency.txt:
 "The latency is the time it takes for a sample captured at timestamp 0 to
reach the sink."

  The fact is that elements that report latencies can only report latencies
they are certain of (in the case of audio/video sources, how big the capture
buffers are; in the case of decoders, how many buffers they will need to buffer
up before they can decode; in the case of rtpjitterbuffer, how long it will
delay the packets, ...).
  Those are *known* latencies (they can be updated, but there are explicit,
deterministic formulas for calculating them based on the input). They do *not*
depend on cpu speed/availability.
  If you had an element doing some processing backed by a dsp with a
guaranteed/constant processing time for a given input, you *could* report that
latency (audio processing algorithms come to mind here).
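
For illustration, here is a minimal sketch of how an element with such a
known, cpu-independent delay could fold it into the latency query (0.10-style
pad query signature; MyElement, its sinkpad and known_delay fields are
hypothetical, only the latency-query helpers are the real API):

  static gboolean
  my_element_query (GstPad * pad, GstQuery * query)
  {
    MyElement *self = MY_ELEMENT (gst_pad_get_parent (pad));
    gboolean res = FALSE;

    switch (GST_QUERY_TYPE (query)) {
      case GST_QUERY_LATENCY:{
        GstClockTime min, max;
        gboolean live;

        /* ask upstream first, then add our own *known* delay, which does
         * not depend on cpu speed/availability */
        if ((res = gst_pad_peer_query (self->sinkpad, query))) {
          gst_query_parse_latency (query, &live, &min, &max);
          min += self->known_delay;
          if (max != GST_CLOCK_TIME_NONE)
            max += self->known_delay;
          gst_query_set_latency (query, live, min, max);
        }
        break;
      }
      default:
        res = gst_pad_query_default (pad, query);
        break;
    }
    gst_object_unref (self);
    return res;
  }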

  The reason we don't add the processing time is that we can't know for sure
whether that processing time will be acceptable or not (you might be sharing
the cpu with other processes that are doing cpu-intensive tasks, the codec
might be going through a high-processing phase, ...).

What we *should* end up with is:

  sync_time = running_timestamp + reported_delay_latency
              (+ expected_processing_latency)

where reported_delay_latency is what the latency GstQuery currently returns.
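
As a small sketch of the decision this formula implies in the sink (a
hypothetical helper, not the actual basesink code): a buffer would be a
candidate for dropping once the pipeline clock has already passed its sync
time.

  static gboolean
  buffer_is_too_late (GstClockTime running_timestamp,
      GstClockTime reported_delay_latency,
      GstClockTime expected_processing_latency,
      GstClockTime clock_running_time)
  {
    GstClockTime sync_time;

    /* known latency from the query, plus whatever processing budget
     * we decide to allow on top of it */
    sync_time = running_timestamp + reported_delay_latency
        + expected_processing_latency;

    return clock_running_time > sync_time;
  }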

The problem is:
  How do we calculate that expected_processing_latency beyond which we need to
start dropping?

Elements can't figure that out on their own (except in rare circumstances),
since it would require estimating the processing complexity based on the input
data and the available cpu speed.

Therefore we do need to offer a way for pipeline users to provide an
expected_processing_latency.

This will be somewhat tricky, since we'll need to ensure we don't *exceed* the
minimum of the reported maximum latencies, else we'll end up with live sources
under-running. Or maybe we shouldn't care, and have application developers
insert queue elements to compensate for that.

Something like gst_bin_set_additional_min_latency() could do it; it would then
increase the min_latency in bin_query_latency_done().
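
As a usage sketch of that proposal (gst_bin_set_additional_min_latency() does
not exist yet; the pipeline string and the 100ms value are only examples):

  GstElement *pipeline;

  pipeline = gst_parse_launch (
      "v4l2src ! ffmpegcolorspace ! theoraenc ! fakesink", NULL);

  /* proposed API: give the cpu-dependent stages an extra 100ms of
   * processing budget on top of the reported (known) latencies */
  gst_bin_set_additional_min_latency (GST_BIN (pipeline), 100 * GST_MSECOND);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);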

