Adding local delay to pipeline in parallel to RTP output
dv
dv at pseudoterminal.org
Wed Jan 23 04:00:59 PST 2013
Hello,
I have a pipeline for sending audio data over RTP; I use rtpbin for
that. Synchronization works fine: receivers and sender are synchronized
using the net time provider and the RTP bins.
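For context, the clock distribution is set up roughly like this on the
sender (a sketch; the port number and variable names are arbitrary
examples, and "pipeline" is the sender's GstPipeline):

#include <gst/gst.h>
#include <gst/net/gstnettimeprovider.h>

/* sender side: publish the pipeline clock on the network so the
   receivers can slave to it with gst_net_client_clock_new() */
GstClock *clock = gst_pipeline_get_clock (GST_PIPELINE (pipeline));
GstNetTimeProvider *provider =
    gst_net_time_provider_new (clock, NULL, 8554);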
However, I now also want to play the audio locally, on the sender side,
synchronously with the receivers. To that end, I introduced a tee
element right before the sender's rtpbin, so there are now two branches
connected to two tee src pads: the existing one with the rtpbin and the
UDP sinks, and a new one containing a local audio sink. The question is
how to delay the audio data for this local audio sink. It should delay
the data by precisely the number of milliseconds the rtpbin uses for
its jitterbuffer (the latency property).
(I am using GStreamer 0.10, since there is no budget in this project for
migrating to 1.0 yet.)
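For reference, the topology now looks roughly like this (a reduced
sketch with 0.10 element names; audiotestsrc, rtpL16pay and the
host/port are placeholders, and the RTCP wiring of gstrtpbin is
omitted for brevity):

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GError *err = NULL;
  GstElement *pipeline;

  gst_init (&argc, &argv);

  pipeline = gst_parse_launch (
      "gstrtpbin name=rtpbin "
      "audiotestsrc ! audioconvert ! tee name=t "
      /* RTP branch: payload and hand over to the RTP bin */
      "t. ! queue ! rtpL16pay ! rtpbin.send_rtp_sink_0 "
      "rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 "
      /* local branch: this is where the delay should go */
      "t. ! queue name=localq ! audioconvert ! autoaudiosink name=localsink",
      &err);
  if (pipeline == NULL) {
    g_printerr ("parse error: %s\n", err->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}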
I tried a local queue (max-size-time set to the desired delay,
use-buffering=TRUE), but I only got stuttering audio playback. The
ts-offset property on the local audio sink also did not produce the
expected result. I also tried to write an element that simply
sub-buffers incoming buffers and adds an offset to their timestamps
(sketched below), but this only works if the pipeline is simple (no tee
element, no branching). I used gst_buffer_make_metadata_writable() for
the task. Perhaps I should explicitly copy the buffers?
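Roughly, the chain function of that element does this (a sketch; MyDelay
and its fields are made-up names, and the GObject boilerplate and error
handling are omitted):

#include <gst/gst.h>

/* reduced stand-in for the element's instance struct */
typedef struct {
  GstElement parent;
  GstPad *srcpad;
  GstClockTime delay;   /* offset to add, in nanoseconds */
} MyDelay;

static GstFlowReturn
my_delay_chain (GstPad *pad, GstBuffer *buf)
{
  MyDelay *self = (MyDelay *) gst_pad_get_parent_element (pad);
  GstFlowReturn ret;

  /* after a tee the buffer is shared (refcount > 1), so this call
     creates a copy with writable metadata instead of modifying the
     buffer that the other branch sees */
  buf = gst_buffer_make_metadata_writable (buf);

  if (GST_BUFFER_TIMESTAMP_IS_VALID (buf))
    GST_BUFFER_TIMESTAMP (buf) += self->delay;

  ret = gst_pad_push (self->srcpad, buf);
  gst_object_unref (self);
  return ret;
}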
Currently, what works best is to split the delay in half: one half is
done by a queue (not a queue2), the other half by the ts-offset
property. However, it is not fully accurate, and this setup is just
plain weird. Does anybody have better ideas?
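Concretely, the split looks roughly like this (a sketch; the queue half
is done here via the queue's min-threshold-time property, and
localq/localsink are the element names from the pipeline sketch above):

/* e.g. the jitterbuffer latency; 200 ms is the 0.10 default */
GstClockTime total = 200 * GST_MSECOND;
GstElement *local_queue =
    gst_bin_get_by_name (GST_BIN (pipeline), "localq");
GstElement *local_sink =
    gst_bin_get_by_name (GST_BIN (pipeline), "localsink");

/* the queue does not push anything until this much data is buffered,
   which adds roughly half of the delay */
g_object_set (local_queue, "min-threshold-time", total / 2, NULL);
/* the sink renders everything this much later, adding the other half */
g_object_set (local_sink, "ts-offset", (gint64) (total / 2), NULL);

gst_object_unref (local_queue);
gst_object_unref (local_sink);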