Transmit absolute time
Carlos Rafael Giani
dv at pseudoterminal.org
Wed Oct 14 15:45:54 PDT 2015
Here's how I achieved something similar.
First, I synchronized the clocks of all devices. This was before
GStreamer 1.6, so GstPTPClock wasn't available (I'd probably use that
one now); instead, I used GstNetClientClock. On the sender, I started a
GstNetTimeProvider and installed a monotonic GstSystemClock as the
sender's pipeline clock. The net time provider exposed this clock to
the receivers. On the receivers, I used GstNetClientClocks as the
pipeline clocks.
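Here is a minimal sketch of that clock setup; the function names, the
IP address 192.168.0.10 and the port 5637 are placeholders, and error
handling and cleanup are omitted:

#include <gst/gst.h>
#include <gst/net/gstnet.h>

/* Sender: use a monotonic system clock as the pipeline clock and
 * expose it over the network via a GstNetTimeProvider. */
static void
setup_sender_clock (GstPipeline * sender_pipeline)
{
  GstClock *sys_clock;
  GstNetTimeProvider *provider;

  sys_clock = g_object_new (GST_TYPE_SYSTEM_CLOCK,
      "clock-type", GST_CLOCK_TYPE_MONOTONIC, NULL);
  gst_pipeline_use_clock (sender_pipeline, sys_clock);

  /* keep "provider" alive for as long as the sender pipeline runs */
  provider = gst_net_time_provider_new (sys_clock, NULL, 5637);
}

/* Receiver: slave a GstNetClientClock to the sender's time provider
 * and make it the pipeline clock. */
static void
setup_receiver_clock (GstPipeline * receiver_pipeline)
{
  GstClock *net_clock =
      gst_net_client_clock_new ("net_clock", "192.168.0.10", 5637, 0);
  gst_pipeline_use_clock (receiver_pipeline, net_clock);
}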
Since in my case both sender and receiver were under my full control, I
opted for adding a header extension to the RTP packets. On the sender
side, I first converted the buffer presentation timestamps to
running-time using gst_segment_to_running_time(), and then added the
pipeline's base-time to these converted timestamps. In short: I moved
the timestamps into clock-time. On the receiver side, I made sure the
pipeline doesn't set the base-time of the contained elements by calling
gst_element_set_start_time(pipeline, GST_CLOCK_TIME_NONE). This makes
sure all elements have base-time 0. Since running-time = clock-time -
base-time, this means that running-time = clock-time. And because the
sender put clock-time timestamps in these header extensions, all I have
to do at the receiver is pull the timestamps out of the headers (I
ignore the RTP timestamps) and put them into the corresponding
GstBuffers, effectively restoring the timestamps exactly as they were
on the sender's side. I do not use RTCP or rtpbin, because they are
unnecessary here. This system is relatively simple, super accurate, and
robust. Working in clock-time is the main trick, because by doing that,
I eliminate the need for distributing a base-time somehow (otherwise,
sender and receiver timestamps would have different base-times, making
synchronization impossible).
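A rough sketch of that timestamp handling follows; the pad probe setup
and the actual header-extension read/write are application-specific and
omitted, and the function and variable names are made up:

/* Sender: in a pad probe (e.g. on the RTP payloader's src pad), convert
 * the buffer's PTS to clock-time; writing it into the header extension
 * is only hinted at here. */
static GstPadProbeReturn
sender_buffer_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstElement *pipeline = GST_ELEMENT (user_data);
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstEvent *seg_event;
  GstSegment segment;
  GstClockTime running_time, clock_time;

  seg_event = gst_pad_get_sticky_event (pad, GST_EVENT_SEGMENT, 0);
  if (seg_event == NULL)
    return GST_PAD_PROBE_OK;
  gst_event_copy_segment (seg_event, &segment);
  gst_event_unref (seg_event);

  /* PTS -> running-time -> clock-time */
  running_time = gst_segment_to_running_time (&segment, GST_FORMAT_TIME,
      GST_BUFFER_PTS (buf));
  clock_time = running_time + gst_element_get_base_time (pipeline);

  /* ... write clock_time into the RTP header extension here ... */

  return GST_PAD_PROBE_OK;
}

/* Receiver: disable the start time so all base-times stay 0 ... */
static void
setup_receiver_timestamps (GstPipeline * receiver_pipeline)
{
  gst_element_set_start_time (GST_ELEMENT (receiver_pipeline),
      GST_CLOCK_TIME_NONE);
}

/* ... and restore each buffer's PTS from the value read out of the
 * header extension (assumes buf is writable; otherwise use
 * gst_buffer_make_writable() first). */
static void
restore_receiver_timestamp (GstBuffer * buf, GstClockTime clock_time_from_header)
{
  GST_BUFFER_PTS (buf) = clock_time_from_header;
}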
As for the transmission time between devices: you don't actually need
to know it. All you have to do is add a fixed delay, effectively
"moving the timestamps into the future". In a basic setup, you add a
fixed delay D to your timestamp T on the sender. In regular playback,
T, together with the base-time, will be a timestamp that lies in the
present; in other words, the delta between T and the current time is
very close to 0. So, T + D = Tf. Tf is now "in the future" by an
interval D. You use this timestamp both for the local playback and for
the transmissions. Since the sender's and receiver's clocks are
synchronized, the packet with the timestamp Tf will arrive at the
receiver before the clock reaches Tf; once the packet arrives at the
receiver, its timestamp is still "in the future". When timestamps are
in the future, sinks wait. And because both pipeline clocks are
synchronized, both sinks will wait until the same time (Tf) is reached,
and they will both start playing at the same time. This waiting happens
only initially, of course.
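A hypothetical sketch of that delay step, again as a sender-side pad
probe; the 500 ms value for D is just a placeholder:

/* Add a fixed delay D to every outgoing buffer's PTS so that the
 * timestamp is "in the future" by D when it reaches the sinks. */
#define DELAY_D (500 * GST_MSECOND)     /* placeholder value for D */

static GstPadProbeReturn
add_delay_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);

  buf = gst_buffer_make_writable (buf);
  if (GST_BUFFER_PTS_IS_VALID (buf))
    GST_BUFFER_PTS (buf) += DELAY_D;    /* Tf = T + D */
  GST_PAD_PROBE_INFO_DATA (info) = buf;

  return GST_PAD_PROBE_OK;
}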
In practice, you may not need to add D to the timestamps of the buffers
that you send out (you do still need to do that for the local playback
at the sender), since the rtpjitterbuffer can be used to that end. It
adds latency to the pipeline, which does the same thing: it causes the
pipeline to add an extra amount of time (the latency) to the
timestamps. So, if you set the rtpjitterbuffer's latency to D, you get
the same effect.
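For example, assuming "jitterbuffer" refers to your rtpjitterbuffer
instance (the 500 ms value is again just a placeholder for D):

/* Let the jitterbuffer add the delay D instead of adding it to the
 * transmitted timestamps; its "latency" property is in milliseconds. */
g_object_set (jitterbuffer, "latency", 500, NULL);

The same can be done in a gst-launch-1.0 description with
"... ! rtpjitterbuffer latency=500 ! ...".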
This method eliminates any need for knowing the transport delays between
devices. One drawback is that you need to delay the output, which might
not be feasible for captured video in some cases, unless perhaps you can
delay it in compressed form. If, for example, you can delay the H.264
stream before the frames get decoded, it might be possible, because much
less RAM would be needed. Another drawback is that you need to pick a
fixed delay D, which will be the maximum tolerable delay. Pick too high
a value, and the delay will be noticeable (and potentially cause a
buildup of many not-yet-shown frames). Pick too low a value, and it will
be exceeded by spikes in the transport delay, causing synchronization
failures every now and then.
On 2015-10-13 at 16:34, Pietro Bonfa' wrote:
> Hi,
>
> I'm trying to rewrite an application which deals with synchronized
> videos using GStreamer.
> I have a series of Raspberry Pis synchronized with a Precision Time
> Protocol (PTP) server.
> In my old application I was able to send frames with timestamps related
> to the global clock. Is there a way to do something similar with
> GStreamer? This looks to me like the only (accurate and reliable) way
> of estimating the time differences between the various frames.
>
> I tried other approaches (RTSP, for example) but I cannot obtain the
> same accuracy (better than 5 ms) that I have with my original code.
>
> Other suggestions are greatly appreciated.
>
> Thanks,
> Pietro