[Bug 711412] rtpjitterbuffer: Automatically calculate RTX properties based on RTT
GStreamer (bugzilla.gnome.org)
bugzilla at gnome.org
Fri Nov 22 06:45:00 PST 2013
https://bugzilla.gnome.org/show_bug.cgi?id=711412
GStreamer | gst-plugins-good | unspecified
Wim Taymans <wim.taymans> changed:
What                      |Removed  |Added
----------------------------------------------------------------------------
Attachment #260461 status |none     |needs-work
--- Comment #4 from Wim Taymans <wim.taymans at gmail.com> 2013-11-22 14:44:55 UTC ---
(From update of attachment 260461)
> #define DEFAULT_MODE RTP_JITTER_BUFFER_MODE_SLAVE
> #define DEFAULT_PERCENT 0
> #define DEFAULT_DO_RETRANSMISSION FALSE
>-#define DEFAULT_RTX_DELAY 20
>+#define DEFAULT_RTX_DELAY -1
> #define DEFAULT_RTX_DELAY_REORDER 3
>-#define DEFAULT_RTX_RETRY_TIMEOUT 40
>-#define DEFAULT_RTX_RETRY_PERIOD 160
>+#define DEFAULT_RTX_RETRY_TIMEOUT -1
>+#define DEFAULT_RTX_RETRY_PERIOD -1
>+
>+#define DEFAULT_AUTO_RTX_DELAY 20
>+#define DEFAULT_AUTO_RTX_TIMEOUT 40
>+#define DEFAULT_AUTO_RTX_PERIOD 120
> /* calculate expected arrival time of the next seqnum */
> expected = dts + priv->packet_spacing;
>- delay = priv->rtx_delay * GST_MSECOND;
>+ if (priv->rtx_delay == -1)
>+ if (priv->avg_rtx_rtt == 0)
>+ delay = (priv->avg_jitter * 2 + DEFAULT_AUTO_RTX_DELAY) * GST_MSECOND;
>+ else
>+ delay = (priv->avg_jitter * 2 + priv->avg_rtx_rtt) * GST_MSECOND;
The delay should be based only on the measured jitter; the RTT has no bearing on when the next packet is expected to arrive.
>+ else
>+ delay = priv->rtx_delay;
This is missing a * GST_MSECOND: rtx_delay is in milliseconds, but delay is a GstClockTime in nanoseconds.
>+ int calculated_timeout;
>+ int calculated_period;
>+
>+ if (priv->rtx_retry_timeout == -1)
>+ if (priv->avg_rtx_rtt == 0)
>+ calculated_timeout = DEFAULT_AUTO_RTX_TIMEOUT + priv->avg_jitter * 2;
>+ else
>+ calculated_timeout = priv->avg_rtx_rtt * 2 + priv->avg_jitter * 2;
>+ else
>+ calculated_timeout = priv->rtx_retry_timeout;
The retry timeout should not depend on the jitter; the retry interval is governed by the RTT (how long it takes a retransmitted packet to arrive).
>+
>+ if (priv->rtx_retry_period == -1)
>+ if (priv->avg_rtx_rtt == 0)
>+ calculated_period = DEFAULT_AUTO_RTX_PERIOD + priv->avg_jitter * 2;
>+ else
>+ calculated_period = priv->avg_rtx_rtt + priv->avg_jitter * 2;
>+ else
>+ calculated_period = priv->rtx_retry_period;
This is not the period. The period during which retransmission is retried is based on the
jitterbuffer size and the RTT.
>@@ -2467,8 +2509,8 @@ do_expected_timeout (GstRtpJitterBuffer * jitterbuffer, TimerData * timer,
> "running-time", G_TYPE_UINT64, timer->rtx_base,
> "delay", G_TYPE_UINT, GST_TIME_AS_MSECONDS (delay),
> "retry", G_TYPE_UINT, timer->num_rtx_retry,
>- "frequency", G_TYPE_UINT, priv->rtx_retry_timeout,
>- "period", G_TYPE_UINT, priv->rtx_retry_period,
>+ "frequency", G_TYPE_UINT, calculated_timeout,
>+ "period", G_TYPE_UINT, calculated_period,
> "deadline", G_TYPE_UINT, priv->latency_ms,
> "packet-spacing", G_TYPE_UINT64, priv->packet_spacing, NULL));
This is the only place where you use the calculated values. Where it actually
matters (a few lines below), you still use rtx_retry_period and rtx_retry_timeout,
which could be -1.
>--- a/gst/rtpmanager/gstrtpsession.c
>+++ b/gst/rtpmanager/gstrtpsession.c
>@@ -404,6 +404,29 @@ on_sender_timeout (RTPSession * session, RTPSource * src, GstRtpSession * sess)
> src->ssrc);
> }
>
>+static void
>+on_sending_rtcp (RTPSession * rtp_session, GstBuffer * buffer, gboolean early,
>+ GstRtpSession * session)
>+{
>+ GstPad *recv_rtp_sink;
>+ GST_RTP_SESSION_LOCK (session);
>+ if ((recv_rtp_sink = session->recv_rtp_sink))
>+ gst_object_ref (recv_rtp_sink);
>+ GST_RTP_SESSION_UNLOCK (session);
>+
>+ if (recv_rtp_sink) {
>+ GstEvent *event;
>+ guint jitter;
>+ g_object_get (rtp_session, "average-jitter", &jitter, NULL);
>+ GST_DEBUG ("Sending rtcp, with jitter of %u", jitter);
>+ event = gst_event_new_custom (GST_EVENT_CUSTOM_UPSTREAM,
>+ gst_structure_new ("GstRTPJitterCalculation", "jitter", G_TYPE_UINT,
>+ jitter, NULL));
>+ gst_pad_push_event (recv_rtp_sink, event);
>+ gst_object_unref (recv_rtp_sink);
>+ }
>+}
Now you'll be sending this event whenever the session produces RTCP. You could
also produce the event when the session receives RTCP. Neither is optimal for
getting the most recent jitter estimate into the jitterbuffer. I'm starting to
think that the jitterbuffer should just calculate the jitter itself (again,
yes, the great plan is to merge the jitterbuffer into RTPSource again at some
point).
>--- a/gst/rtpmanager/rtpsession.c
>+++ b/gst/rtpmanager/rtpsession.c
>@@ -2858,6 +2869,9 @@ session_report_blocks (const gchar * key, RTPSource * source, ReportData * data)
> source->last_rr.lsr = lsr;
> source->last_rr.dlsr = dlsr;
>
>+ data->total_jitter += jitter;
>+ data->count_jitter++;
>+
This is not where the jitter is calculated. The jitter is calculated in each
RTPSource when it receives a packet and rtp_source_process_rtp() is called.
For all non-internal SSRCs that are senders (you received packets from the
remote SSRC), you need to pass the jitter calculation to the jitterbuffer.
The easiest way to do that is when you receive an SR from the SSRC: take the
jitter, wrap it in an event, and send it to the jitterbuffer.
--
More information about the gstreamer-bugs mailing list