RTP audio packets are sent as a burst, not in real-time, with rawaudioparse
wegherfe at gmail.com
Tue May 2 07:21:56 UTC 2017
Thanks for your reply. I analysed the timestamps and indeed the problem is
due to improper use of rawaudioparse with a live source rather than a
complete one such as a file.
By debugging udpsink, I noticed that during the second play audio packets are
marked as "too late" and, instead of being dropped (max-lateness=-1), they
are sent out of sync as a burst. This happens because there is no silence in
the audio data between the two playbacks, so rawaudioparse writes the
timestamps of the second play's samples as consecutive with the first, as if
there were no pause between the two plays (since the data come from a file).
But in the meantime the system clock kept running, so udpsink detects those
samples as too late.
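To make the lateness concrete, here is a minimal sketch in plain Python (not the GStreamer API; the sample rate and buffer sizes are assumed for illustration) of how a timestamp derived purely from the byte offset falls behind the running clock after a pause:

```python
# Sketch of why udpsink sees the second play's buffers as late:
# rawaudioparse stamps samples consecutively from the byte offset,
# while the pipeline clock keeps running during the pause.

RATE = 8000          # sample rate in Hz (assumed for illustration)
BPF = 2              # bytes per frame (S16 mono, assumed)

def stream_timestamp(byte_offset):
    """Timestamp derived purely from the byte offset, in seconds."""
    return byte_offset / (RATE * BPF)

# First play: 1 s of audio, then a 2 s pause, then the second play begins.
pause = 2.0
first_play_bytes = RATE * BPF            # 1 s of data
clock_now = 1.0 + pause                  # running time once the pause is over

ts = stream_timestamp(first_play_bytes)  # first buffer of the second play
lateness = clock_now - ts
print(f"buffer timestamp {ts:.3f}s, clock {clock_now:.3f}s, "
      f"late by {lateness:.3f}s")
# With sync enabled, every buffer of the second play is already "too late",
# so all of them are pushed out immediately -> they leave as a burst.
```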
Is it possible that rawaudioparse cannot be used with live sources?
At the moment I am working on a new pipeline without rawaudioparse, with
sync=false set on udpsink and with appsrc, sending audio frames in real
time (by queuing the data and scheduling the push-buffer calls at the right
moments for real-time behaviour). I will be back with results asap.
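The scheduling idea can be sketched with the standard library alone (no GStreamer; the buffer duration and the push callback are assumptions, where push() would wrap appsrc's push-buffer in a real pipeline):

```python
# Sketch of pacing queued buffers in real time: push one buffer per
# BUFFER_DURATION of wall-clock time, instead of as fast as the file
# can be read. An absolute deadline is used so jitter does not accumulate.
import time
from collections import deque

BUFFER_DURATION = 0.02   # 20 ms of audio per buffer (assumed)

def push_in_realtime(buffers, push):
    """Call push(buf) once per BUFFER_DURATION against a monotonic deadline."""
    queue = deque(buffers)
    deadline = time.monotonic()
    while queue:
        push(queue.popleft())
        deadline += BUFFER_DURATION
        delay = deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)

# Usage: a list stands in for the real push-buffer call.
sent = []
push_in_realtime([b"\x00" * 320] * 5, sent.append)
print(f"pushed {len(sent)} buffers over ~{5 * BUFFER_DURATION:.2f}s")
```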
I have already opened a bug there. Please look at:
View this message in context: http://gstreamer-devel.966125.n4.nabble.com/RTP-audio-packets-are-sent-as-a-burst-not-in-real-time-with-rawaudioparse-tp4682828p4682872.html