[Bug 755125] New: rtp: RTCP mapping between NTP and RTP time could be capture or send time based

GStreamer (GNOME Bugzilla) bugzilla at gnome.org
Wed Sep 16 10:16:06 PDT 2015


https://bugzilla.gnome.org/show_bug.cgi?id=755125

            Bug ID: 755125
           Summary: rtp: RTCP mapping between NTP and RTP time could be
                    capture or send time based
    Classification: Platform
           Product: GStreamer
           Version: git master
                OS: Linux
            Status: NEW
          Severity: normal
          Priority: Normal
         Component: gst-plugins-good
          Assignee: gstreamer-bugs at lists.freedesktop.org
          Reporter: slomo at coaxion.net
        QA Contact: gstreamer-bugs at lists.freedesktop.org
                CC: wim.taymans at gmail.com
     GNOME version: ---

RFC 3550 defines the NTP time in the SR as the NTP time at which the SR is
sent. In practice this means it is based on the current clock time, as there is
no latency downstream of rtpbin towards the RTCP sinks, and RTCP sinks are not
synchronizing anyway.
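For reference, the SR's NTP field is simply "now" at send time, expressed as a
64-bit 32.32 fixed-point value counting seconds since 1900. A minimal sketch of
that conversion (illustrative helper, not GStreamer API):

```python
# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_EPOCH_OFFSET = 2208988800

def unix_to_ntp(unix_seconds: float) -> int:
    """Return the 64-bit NTP timestamp (32.32 fixed point) for a Unix time."""
    ntp_seconds = unix_seconds + NTP_EPOCH_OFFSET
    return int(ntp_seconds * (1 << 32))

# Per RFC 3550 the SR carries the wall-clock time of the moment it is sent:
assert unix_to_ntp(0) == NTP_EPOCH_OFFSET << 32
```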


Now for the RTP time, the RFC just says that it should correspond to the same
instant as the NTP time, which could have two meanings:
1) is it the RTP time we are going to send now? (this is what we currently do)
or
2) is it the RTP time we are currently capturing?

The difference between the two is the sender-side latency: the latency L1
between rtpbin and the RTP sink (i.e., usually the pipeline latency minus the
upstream latency for this stream) plus the latency L2 between rtpbin and the
source that produces the RTP stream.
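The two mappings can be sketched as follows (function and parameter names are
illustrative, not GStreamer API; times in seconds, clock rate in Hz):

```python
def rtp_time_for_sr(capture_rtp_time, clock_rate, l1, l2, mode):
    """RTP timestamp to put in the SR next to the send-time NTP timestamp.

    mode 1: the RTP time being sent right now (current behaviour) --
            i.e. what was captured L1+L2 seconds ago.
    mode 2: the RTP time being captured right now.
    """
    if mode == 2:
        return capture_rtp_time
    # mode 1: what leaves the RTP sink now was captured L1+L2 seconds ago.
    return capture_rtp_time - int((l1 + l2) * clock_rate)

# With a 90 kHz video clock and L1+L2 == 1 s the two mappings differ
# by exactly one second's worth of RTP ticks:
delta = (rtp_time_for_sr(900000, 90000, 0.6, 0.4, 2)
         - rtp_time_for_sr(900000, 90000, 0.6, 0.4, 1))
assert delta == 90000
```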


I think it would be good to add a property to allow 2), as this would let the
receiver infer the time at which the media was captured. This would be
interesting for the attached example:
- gst-rtsp-server has a pipeline that has a timeoverlay painting the current
clock time into the frames
- the client paints running_time + base_time (= the clock time at which this
frame is synced, excluding the client latency) into the frames
- client and server use the same clock, and synchronize each other via RTCP
- server pipeline has a latency of 1s, receiver pipeline has a latency of 1.5s
(statically configured)

In theory both timestamps in a frame would be expected to be the same; with
2) they are. With 1) the difference between the two timestamps is L1+L2 (== 1s).
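Back-of-the-envelope numbers for this setup (a sketch; the 1 s sender latency
is from the configuration above, the clock reading itself is illustrative):

```python
L1_plus_L2 = 1.0      # sender-side latency in seconds (server pipeline)
server_clock = 100.0  # shared-clock time when a frame is captured (illustrative)

# The server's timeoverlay paints the capture-time clock reading:
server_stamp = server_clock

# With mapping 2) the receiver recovers the capture time exactly, so the
# running_time + base_time it paints matches the server's stamp:
client_stamp_mode2 = server_clock
# With mapping 1) everything is shifted by the sender-side latency:
client_stamp_mode1 = server_clock + L1_plus_L2

assert client_stamp_mode2 - server_stamp == 0.0
assert client_stamp_mode1 - server_stamp == 1.0  # == L1+L2
```

Note that the receiver's own 1.5 s latency drops out, since the client paints
the clock time at which the frame is synced excluding its own latency.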


Now the main problem with implementing 2) is the code in
gstrtpjitterbuffer.c:do_handle_sync(): it does not allow the rtptime to be more
than 1 second ahead of what it is currently receiving. With 2) and a sender
latency of more than 1 second, however, this would happen.
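The effect of that check can be sketched as simplified pseudologic (not the
actual do_handle_sync() code):

```python
MAX_FUTURE = 1.0  # seconds; SRs further ahead than this are rejected

def sr_plausible(sr_rtptime, last_received_rtptime, clock_rate):
    """Accept the SR only if its RTP time is less than 1 s ahead of the stream."""
    ahead = (sr_rtptime - last_received_rtptime) / clock_rate
    return ahead < MAX_FUTURE

# With capture-time mapping 2) and a sender latency of 1.5 s, the SR's rtptime
# is 1.5 s ahead of what the receiver has seen, so the check rejects it:
assert not sr_plausible(sr_rtptime=900000 + int(1.5 * 90000),
                        last_received_rtptime=900000,
                        clock_rate=90000)
```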
