[gst-devel] clocking

Thomas Vander Stichele thomas at apestaart.org
Thu Feb 19 09:33:07 CET 2004


On Thu, 2004-02-19 at 16:24, in7y118 at public.uni-hamburg.de wrote:
> Quoting Thomas Vander Stichele <thomas at apestaart.org>:
> 
> > > 1) The old system reset the time of the clock when a toplevel pipeline went
> > > from PAUSED to PLAYING.
> > (I'm assuming you meant PAUSED to READY here - at least that's how I
> > remember it, and how it looks from the code).  To me this seems like
> > correct behaviour.
> > 
> It was in fact READY=>PAUSED.

Yep.  So this part was ok for you ?

>  And resetting a clock based on some arbitrary 
> element (The first non-container element to change state to PAUSED) seems 
> wrong anyway.

gst_clock_reset was only called when the scheduler's parent element (ie,
the pipeline/thread connected to the scheduler) was doing READY->PAUSED
(under the assumption that this scheduler was toplevel).  This almost
boils down to the same thing, I guess.  Anyway, if this is a problem, it
could be changed to "the clock gets reset when all non-locked elements
have made the jump to PAUSED".


> > Why would you need multiple toplevel pipelines *with the same clock* ? A
> > clock is connected to a pipeline.  It is possible to set the same clock
> > as for one pipeline on a different pipeline, but this should only be
> > done when there is good reason to do so - for example, to sync two
> > output devices to the same clock even though they're playing something
> > different.
> > In general, I don't see the need for any current application to have
> > this, can you give an example ?
> > 
> The old system relied on the fact that there is only one system clock.
> Don't ask me about the reasons though.

That wasn't really an answer :) Unless by that answer you meant "since
there's only one system clock, and I want to run two pipelines, I need
to use the same system clock for both pipelines, and that's a problem".
Anyway, it looks to me like the object that is called SystemClock is
just one possible clock implementation, using g_get_current_time as its
mechanism to keep internal time.  So, I don't see why you can't just
create two different instances of this, one for each pipeline.  In the
general case there's no reason to force these clocks to be the same.  Am
I missing something ?
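
To make it concrete, here's a minimal sketch of what I mean.  I'm
assuming GstSystemClock can be instantiated directly with g_object_new
(right now gst_system_clock_obtain gives out one global instance, so
take this as illustrative only), and I'm using gst_bin_use_clock to
force a clock on each pipeline:

#include <gst/gst.h>

/* Sketch: give each pipeline its own clock instance.  Assumes
 * GstSystemClock can be instantiated directly via g_object_new();
 * in the current code gst_system_clock_obtain() returns one global
 * instance, so this is illustrative only. */
static void
setup_two_pipelines (void)
{
  GstElement *pipe1 = gst_pipeline_new ("pipe1");
  GstElement *pipe2 = gst_pipeline_new ("pipe2");

  /* two independent clocks, both keeping internal time with
   * g_get_current_time() */
  GstClock *clock1 = g_object_new (GST_TYPE_SYSTEM_CLOCK, NULL);
  GstClock *clock2 = g_object_new (GST_TYPE_SYSTEM_CLOCK, NULL);

  /* force each pipeline to use its own clock instead of letting it
   * auto-select one */
  gst_bin_use_clock (GST_BIN (pipe1), clock1);
  gst_bin_use_clock (GST_BIN (pipe2), clock2);
}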


> > - READY -> PAUSED should prepare EVERYTHING for data flow, up to the
> > moment where it hands the first buffer to rendering elements (output
> > sinks), so that they are ready for instantaneous data flow.  This is up
> > to the scheduler to see this.
> > - data is allowed to flow in the PAUSED state to handle negotiation and
> > get the first buffer of data to the output sinks.
> >
> You view this from a much too simplistic point of view, namely the video 
> playback view.
> Do you start sending data through a gst-recorder pipeline recording from a 
> webcam when it's set to PAUSED, but not immediately to PLAYING? That way 
> you'll end up with a first picture that is completely wrong.
It is up to the source element to decide this.  The job for the element
is "get everything ready so that as soon as you get set to play you can
process data".  So for a webcam, I would say the best behaviour would be
to "open the device and get ready for passing on data", and nothing more.
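
In element terms, I'd expect the webcam source's state change handler
to look roughly like this (a sketch only - GstWebcamSrc and its helper
functions are made up, and I'm writing the 0.8-style change_state
vfunc from memory):

#include <gst/gst.h>

/* Sketch: state change handler for a made-up GstWebcamSrc element.
 * READY->PAUSED only opens the device and gets ready to pass on
 * data; capturing doesn't start before PAUSED->PLAYING. */
static GstElementStateReturn
gst_webcamsrc_change_state (GstElement *element)
{
  GstWebcamSrc *src = GST_WEBCAMSRC (element);

  switch (GST_STATE_TRANSITION (element)) {
    case GST_STATE_READY_TO_PAUSED:
      if (!gst_webcamsrc_open_device (src))     /* hypothetical helper */
        return GST_STATE_FAILURE;
      break;
    case GST_STATE_PAUSED_TO_PLAYING:
      gst_webcamsrc_start_capture (src);        /* hypothetical helper */
      break;
    default:
      break;
  }

  /* chain up to the parent class as usual */
  return GST_ELEMENT_CLASS (parent_class)->change_state (element);
}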

> What about streams from the web?
I'd say it should start reading from the web, and prebuffering, unless
you'd prefer the prebuffering to be handled in a GStreamer way (ie, with
queues and so on).  In the second case, it should probably be done with
a thread; the queue would act as the actual provider, and the queue
should be the one to make sure that it has enough data to be ready to
play whenever.  Ie, it would first fill up its queue with data coming
from the other thread containing the web-reading element.  Maybe this
would need a special kind of queue.
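
The readiness check for such a special queue could be as simple as
this sketch (the "current-level-buffers" property name is
hypothetical):

#include <gst/gst.h>

/* Sketch: before reporting itself ready to play, the special queue
 * first fills up with data from the web-reading thread.  The
 * "current-level-buffers" property name is hypothetical. */
static gboolean
queue_is_prebuffered (GstElement *queue, guint needed)
{
  guint level = 0;

  g_object_get (G_OBJECT (queue), "current-level-buffers", &level, NULL);
  return level >= needed;
}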

> And what should a scheduler do about gst-launch src ! switch_output_on_eos ! 
> ximagesink  switch_output_on_eos0. ! ximagesink
Well, tell me what this pipeline intends to do :) I can't make much
sense of what it would achieve.


> > Of course, this is not a change to make right now.  However, for the
> > clocking stuff, the solution to me seems rather simple - the clock
> > should only start running when all the elements in the pipeline that
> > need to go to PLAYING are in the PLAYING state.  To do this the
> > scheduler can keep track of a count of elements that are in PLAYING, and
> > when all elements needing to go to PLAYING are in fact PLAYING, it can
> > start the clock.  For going from PLAYING to PAUSED, the reverse could be
> > done; the clock can be stopped immediately.
> > 
> How do you determine "elements that need to go to PLAYING" ?
All elements that are not locked and not yet in PLAYING, but still in
PAUSED.
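
In scheduler terms, a sketch of the check (assuming a locked-state
query along the lines of gst_element_is_locked_state, and the 0.8
gst_element_get_state that returns the state directly):

#include <gst/gst.h>

/* Sketch: may the clock start running ?  Returns TRUE once every
 * non-locked element has reached PLAYING.  Assumes a locked-state
 * query along the lines of gst_element_is_locked_state(). */
static gboolean
all_needed_elements_playing (GList *elements)
{
  GList *walk;

  for (walk = elements; walk; walk = walk->next) {
    GstElement *element = GST_ELEMENT (walk->data);

    if (gst_element_is_locked_state (element))
      continue;                 /* locked elements don't count */
    if (gst_element_get_state (element) != GST_STATE_PLAYING)
      return FALSE;             /* still in PAUSED: keep the clock stopped */
  }
  return TRUE;
}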

> In gst-launch filesrc location=file.mp3 ! spider ! videosink spider0. ! 
> audiosink the videosink never goes to PLAYING for example.
Because spider doesn't autoplug visualisations ? Then is this not just a
broken pipeline ? In my book this pipeline should just fail because it's
wrong.

> How do you propose to get { audiosrc ! queue } ! audiosink working where the 
> sink is set to PLAYING half a second later than the source _on_purpose_?

(I'm assuming the goal of this pipeline is to "play back on some card
everything that's coming from some card, with a delay of half a second".)

Two ways:
a) have the thread and the pipeline have different clocks.  The thread
uses the audiosrc-provided clock, and the main pipeline uses the
audiosink-provided clock.  When audiosrc is set to playing, its clock is
running and setting correct timestamps on the buffers.  The first buffer
going out will be marked with timestamp 0, which means "play this buffer
as soon as the clock of the pipeline is playing".  After .5 seconds,
audiosink is set to play, which means its clock (the main pipeline one)
is now running, and it can immediately output the first buffer with
timestamp 0 that was already waiting in the queue for .5 seconds.  Yes,
in this case, the two elements have a different concept of the current
time, *on purpose*.  In the thread, the clock marks the recording time. 
In the main pipeline, the clock marks the playing time.  They're
different clocks, and if the synchronization between audiosrc and
audiosink is perfect (ie, the same device, for example, or externally
clock-linked using SMPTE), this will Just Work, and the two different
clocks will always be 0.5 seconds off from each other.  If the actual
hardware devices aren't synced, then they will slowly drift away from
each other, or the src will catch up with the sink.  Those are problems
to be solved at the application level.  But the good thing is, it's easy
to monitor the clock drift.
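
For example, an application could poll both clocks and watch the
difference (a sketch; with perfectly synced hardware it stays at the
0.5 sec offset):

#include <gst/gst.h>

/* Sketch: how far have the two clocks drifted apart ?
 * capture_clock and playback_clock are the clocks used by the
 * thread and the main pipeline respectively. */
static GstClockTimeDiff
get_clock_drift (GstClock *capture_clock, GstClock *playback_clock)
{
  GstClockTime capture = gst_clock_get_time (capture_clock);
  GstClockTime playback = gst_clock_get_time (playback_clock);

  /* with perfectly synced hardware this stays at the 0.5 sec offset */
  return (GstClockTimeDiff) capture - (GstClockTimeDiff) playback;
}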

b) If you use the same clock for both threads (which wouldn't be the
default, IMO), then I would say the correct way to do this is to
- lock the output sink before setting the thread to play
- the toplevel pipeline sees that all the non-locked elements are playing
when you set the thread to play, so it sets the clock to playing
- the clock here could be provided by either audiosrc or audiosink, and
in the old system audiosrc gets preference.
- in front of the audiosink, there would be a "delay" element that does
nothing more than offset the timestamps on buffers coming in (see the
sketch after the timeline below).

So:
0.0 sec -> pipeline set to play, first buffer recorded, with timestamp
0, and more buffers filling up the queue (queue has to be big enough to
cross 0.5 secs of course)
0.5 sec -> sink unlocked and set to play, sink pulls a buffer, delay
pulls a buffer from the queue, and gets the first buffer with timestamp
0.  Delay gives this buffer to the sink with timestamp 0.5; audiosink
queries the clock, sees that it's at 0.5 sec, so it plays the buffer,
and everything is ok.
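
For completeness, the chain function of that hypothetical "delay"
element would be trivial - something like this sketch (GstDelay and
its fields are made up, and the GstData-based chain signature is from
memory):

#include <gst/gst.h>

/* Sketch: chain function of the made-up "delay" element.  It does
 * nothing but add a fixed offset to the timestamps of buffers
 * passing through; GstDelay and its fields are hypothetical. */
static void
gst_delay_chain (GstPad *pad, GstData *data)
{
  GstDelay *delay = GST_DELAY (gst_pad_get_parent (pad));

  if (GST_IS_BUFFER (data)) {
    GstBuffer *buf = GST_BUFFER (data);

    /* shift the playback deadline by the configured delay (0.5 sec) */
    GST_BUFFER_TIMESTAMP (buf) += delay->offset;
  }
  gst_pad_push (delay->srcpad, data);
}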

Anything wrong with either scenario ?

> > I don't think the semantics for discont were clearly defined.  It is
> > something that would need to be reviewed anyway.  But your explanation
> > of it is a bit vague, could you elaborate on it so I can follow ?
> > 
> Imagine you have a muxed file where each audio chunk contains 5 seconds of 
> audio and each video chunk contains 7 seconds of video (bear with me, the 
> values in this example are a bit constructed to show the problem, but believe 
> me it's a real problem).
> You seek to second 15 on the audio sink. The demuxer finds the next audio to 
> start at second 15 and pushes out a DISCONT to second 15. After that it finds 
> the next video at second 21 and sends out a DISCONT on the video to second 21.
> This needs to be synchronized correctly. Keep in mind that the video data 
> might reach the videosink before the audio data reaches the audio sink.
Ok, so I can't say anything about this since, as I said, the semantics
for discont are not clearly defined.  What is discont ?  Is it a
discontinuity in the stream, or a discontinuity in the clock ?  If it is
the first, should seeking be a discontinuity ?  It looks to me like
these were mixed where they shouldn't be.

Personally, I don't think a seek on an input stream should necessarily
trigger a discont.  It should trigger a flush, then proceed by sending
data from the new point, and the clock should just go on as if nothing
happened.  Ie, it's up to the decoder/demuxer (or, rather, feeding
pipeline) to make this look seamless.

To me it seems like DISCONT was really supposed to mean "make the clock
jump non-linearly" - though I'm still not sure what situations need
that.

I might be misunderstanding DISCONT, so please explain what, according
to you, it is intended to do, so I can follow what your example wants
to do.

> > It makes sense to me to have only one clock for each pipeline - the
> > "time" is basically "the playing/process time (since the last reset of
> > the pipeline) of a rendering sink inside the pipeline".  It means
> > exactly that, and, it's clearly defined.  In cases where you only have
> > one rendering sink, this makes it easy.  In cases where you have more
> > than one, one rendering sink uses the clock of the other through the
> > pipeline, so they end up synchronized.
> > 
> Again, you're only seeing this from the video playback view. Time in GStreamer 
> is not something exclusive to sinks. When recording v4l and audio you need 
> time to synchronize those two elements for example though there's not a single 
> sink involved.

I never said only sinks matter.  In fact, the clocking documentation
from 0.6 clearly states "if there are src clocks, use those.  Else if
there are sink clocks, use those.  Else use a system clock."  In the
case of v4l and audiosrc, the audiosrc would be the clock provider, and
v4l would be using it to sync.
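
In code, that 0.6 rule is just this (a sketch of the selection logic,
not the literal implementation; I'm assuming gst_element_get_clock
returns the clock an element provides, or NULL):

#include <gst/gst.h>

/* Sketch of the 0.6 selection rule: prefer a clock provided by a
 * source, then one provided by a sink, then fall back to the system
 * clock.  Assumes gst_element_get_clock() returns the clock an
 * element provides, or NULL. */
static GstClock *
get_provided_clock (GList *elements)
{
  GList *walk;

  for (walk = elements; walk; walk = walk->next) {
    GstClock *clock = gst_element_get_clock (GST_ELEMENT (walk->data));

    if (clock != NULL)
      return clock;
  }
  return NULL;
}

static GstClock *
select_clock (GList *src_elements, GList *sink_elements)
{
  GstClock *clock;

  if ((clock = get_provided_clock (src_elements)) != NULL)
    return clock;               /* eg audiosrc's clock; v4l syncs to it */
  if ((clock = get_provided_clock (sink_elements)) != NULL)
    return clock;
  return gst_system_clock_obtain ();
}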

> 
> > As for time being adjustable - like I said, I'm not sure DISCONT was
> > clearly defined.  To me it seems like seeking shouldn't be causing a
> > clock discont, for example.
> > 
> That depends on the definition of "clock" and "time of a clock". The old 
> definition was "time == timestamp of data to display now". I used this 
> definition for element time in my current design and I'm using the common 
> definition for the time of clocks (see point 1 in 
> http://dictionary.reference.com/search?q=time if you need one, it's hard to 
> describe)
Yeah, I know what you mean.  I'm just not sure it's all that important
to use/know the actual "absolute" time or some approximation of it. 
Basically, the concept of time inside the pipeline and synchronization
is only necessary when there is something to perceive. (I'm finding it
hard to explain my thoughts accurately on this matter :)) So all we need
is some sort of time abstraction that makes time advance together with
data.  Ie, no data flow, no need for the time to increase.

The only time where "absolute" (outside-of-the-box) real life time
matters is on the borders between "real life" and "the GStreamer
system"; ie precisely at the source and sink elements that talk to
actual devices.

In that respect, I think Wim's ideas for clocking were perfect:
- when playing, make sure that your pipeline time is advancing just like
"real life time"
- when paused, your pipeline time is paused too, even though real life
time is advancing.
The pipeline's time is a virtualization of the pipeline's lifecycle,
backed up/implemented by some way of measuring "real life time" when it
needs it (ie when playing).
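
Put differently, pipeline time would be computed like this (a sketch
of the model, not an actual API proposal):

#include <gst/gst.h>

/* Sketch of the virtualized-time model: pipeline time only advances
 * while playing, backed by a "real life" clock.  base_time is where
 * the real clock stood when we last went to PLAYING; played_so_far
 * is the pipeline time accumulated before that. */
static GstClockTime
get_pipeline_time (GstClock *real_clock, GstClockTime base_time,
    GstClockTime played_so_far, gboolean playing)
{
  if (!playing)
    return played_so_far;       /* paused: pipeline time stands still */

  /* playing: pipeline time advances just like real life time */
  return played_so_far + (gst_clock_get_time (real_clock) - base_time);
}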

> > As for the actual clocking stuff, what precisely do you think is
> > unsolvable in the old system ? And what are your plans on the deprecated
> > API that was useful to others, and the bugs that have been introduced
> > because of the changed clocking ?
> > 
> The old system provided features that were more or less just there by API. 
> Like for example asynchronous notification, which does not work reliably at 
> all across clocks or different states.
> That and the fact that the whole system needs a serious rework made me 
> deprecate that stuff so everyone knows it will probably go away during 0.9.

Are you saying that all these functions are functions we wouldn't want
to support in some way or another ?  Because getting one-off or repeated
notifications from the clock seems like a very basic thing we would want
to have, no ?  Anyway, why deprecate functions when we're not yet sure
whether we'll throw them away ?  We shouldn't be deprecating stuff
before we know what we'll replace it with.

Thomas

Dave/Dina : future TV today ! - http://www.davedina.org/
<-*- thomas (dot) apestaart (dot) org -*->
Baby no matter what love's got to offer
I burn myself down to the ground
<-*- thomas (at) apestaart (dot) org -*->
URGent, best radio on the net - 24/7 ! - http://urgent.fm/
