[gst-devel] input sync
rbultje at ronald.bitfreak.net
Wed Mar 12 10:07:18 CET 2003
(trying to grab some ideas from the list)
On my way home (bad ideas always come when you don't expect them), I was
thinking of input A/V sync handling in Gst.
( move to "--" if you don't care )
As we all know, output sync is handled (in the case of video) by simply
waiting on the (gst)clock until a frame's timestamp is reached. The clock
is (supposed to be) mastered by the soundcard: it gets a buffer of length
L, plays it, and then tells the clock that we moved L further in time.
This assumes that the input streams are actually in sync with each other,
which is usually the case when playing back a file.
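To make the output-sync scheme above concrete, here is a minimal sketch in plain Python (the names are invented for illustration; this is not real GStreamer API): the audio sink advances the master clock as it plays buffers, and the video sink waits on that clock before showing a frame.

```python
# Sketch only: audio sink masters the clock, video sink waits on it.

class Clock:
    def __init__(self):
        self.time = 0.0

    def advance(self, seconds):
        # called by the audio sink after playing a buffer of length L
        self.time += seconds

def video_wait(clock, frame_timestamp):
    # how long the video sink still has to sleep before this frame is due
    return max(0.0, frame_timestamp - clock.time)

clock = Clock()
clock.advance(0.040)               # audio sink played a 40 ms buffer
wait = video_wait(clock, 0.080)    # frame due at 80 ms: ~40 ms left to wait
```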
Now, let's look at input. Just as with output, the system clock and the
soundcard clock might not tell the same story, and neither do sound and
video input timings. This means that input streams are not in sync.
The current situation is that input elements either calculate timestamps
(osssrc, for example) as
(number_of_samples_pushed_until_now) * (time_per_sample), or use the
system clock (v4l*src): after receiving a frame from the kernel, we call
gettimeofday() for the timestamp and subtract the first frame's timestamp
so that the stream starts at 0. The latter breaks in case of
PLAYING->PAUSED->PLAYING; there are crappy workarounds for this, but
that's not the issue. The issue is that both approaches are incomplete:
we currently need gstrec-avsync to correct for the drift between them.
The elements should instead drop/insert frames internally to make sure
that the video output by v4l*src and the audio output by osssrc (etc.)
are together in sync.
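A sketch of the two timestamping strategies described above, and how they drift apart when the soundcard clock and the system clock disagree slightly (plain Python with invented names, not GStreamer code):

```python
# osssrc-style: timestamp from the sample count.
def audio_timestamp(samples_pushed, rate=44100):
    # number_of_samples_pushed_until_now * time_per_sample
    return samples_pushed / rate              # seconds

# v4l*src-style: gettimeofday() relative to the first frame.
def video_timestamp(wallclock_now, wallclock_first):
    return wallclock_now - wallclock_first    # seconds

# Suppose the soundcard runs 0.1% slow: after 60 wall-clock seconds it
# has only consumed 59.94 s worth of samples, so the audio timestamps
# lag the video timestamps by 60 ms -- and nothing corrects for it.
samples = 2643354                             # = 59.94 s at 44100 Hz
drift = video_timestamp(60.0, 0.0) - audio_timestamp(samples)
```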
So how do we handle this? There are a few scenarios:
A) use gstrec-avsync. Hacky. Crap. Bad. Elements should do
synchronization internally. v4l*src should make sure that it syncs to
the osssrc clock or so. This is actually why I want to get rid of this
after all and move it back into the elements themselves. gstrec-avsync
is a nice idea but too much a hack and too little a real solution.
B) we set osssrc as the master clock and make v4l*src sync to it. The
problem here is a pipeline like osssrc device=/dev/dsp0 + v4lmjpegsrc
device=/dev/video0 -> osssink device=/dev/dsp1 + v4lmjpegsink
device=/dev/video1, where both osssrc and osssink want to set a master
clock, and that will break. Maybe we want to allow for both an input and
an output clock? In that case, this'd work quite well.
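A hypothetical sketch of option B's "separate input and output master clock" idea. The class and method names here are invented purely for illustration; nothing like this exists in GStreamer today:

```python
# Two clock slots instead of one, so a capture element and a playback
# element can each master a clock without fighting over a single slot.

class Pipeline:
    def __init__(self):
        self.input_clock = None    # mastered by a capture element (e.g. osssrc)
        self.output_clock = None   # mastered by a playback element (e.g. osssink)

    def set_master_clock(self, element, direction):
        # With a single slot, osssrc and osssink would collide in a
        # dsp0 -> dsp1 recording pipeline; two slots avoid the conflict.
        if direction == "input":
            self.input_clock = element
        else:
            self.output_clock = element

p = Pipeline()
p.set_master_clock("osssrc", "input")
p.set_master_clock("osssink", "output")   # no conflict with osssrc
```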
C) some other element does synchronization (e.g. avimux). This is bad,
for the same reason as in A: synchronization belongs in the elements
themselves, not in a muxer downstream.
D) v4l*src should use frame calculation: calculate the 'real' fps and
use that in the gst_*_convert() functions. A hacky, funny solution. Is
that what we want? It prevents framedrops, but is not necessarily
compatible with all container formats. Besides, should osssrc do that
too? That becomes complicated. We usually 'just' want 25 fps and 44100 Hz
audio, so using non-standard calculated fps/rate values is a bit ugly.
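Option D's arithmetic boils down to one division; a tiny sketch (illustration only, not the gst_*_convert() code itself):

```python
# Derive the 'real' capture rate from the frames that actually arrived,
# instead of trusting the nominal 25 fps.

def real_fps(frames_captured, elapsed_seconds):
    return frames_captured / elapsed_seconds

# Nominally 25 fps, but the card delivered 1498 frames in 60 s:
fps = real_fps(1498, 60.0)   # ~24.967 fps
# Feeding this into the convert functions keeps timestamps consistent
# without dropping frames, but a container that expects exactly 25 fps
# may not accept the odd rate.
```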
E) some other element sets the master clock. Is this actually doable?
Any other option I forgot?
What do people feel about this? What would be a good way to deal with
it? In the ideal case, v4l*src would drop/insert frames internally so
that the output from v4l*src is directly in sync with the sound. I'm
personally in favour of B, using both an input and an output master
clock. That breaks compatibility, so it'd be 0.7.x only, but it'd be a
nice solution, and probably architecturally the best one.
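The ideal drop/insert behaviour described above can be sketched as a small decision function (names invented for illustration): the capture element compares how many frames it has emitted against what the audio master clock says should have been emitted by now.

```python
# Decide whether the capture element must duplicate or skip frames to
# stay in sync with the audio clock.

def sync_action(audio_time, frames_emitted, fps=25.0):
    expected = int(audio_time * fps + 0.5)   # frames due by now
    if frames_emitted < expected:
        return ("insert", expected - frames_emitted)  # duplicate last frame
    if frames_emitted > expected:
        return ("drop", frames_emitted - expected)    # skip incoming frames
    return ("ok", 0)

# After 2 s of audio we should have emitted 50 frames; we only have 49:
action = sync_action(2.0, 49)   # -> ("insert", 1)
```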
Comments, other stuff, anything? Or should I start implementing this?
Ronald Bultje <rbultje at ronald.bitfreak.net>