[gst-devel] clocking

in7y118 at public.uni-hamburg.de
Thu Feb 19 07:30:09 CET 2004


Quoting Thomas Vander Stichele <thomas at apestaart.org>:

> > 1) The old system reset the time of the clock when a toplevel pipeline went
> > from PAUSED to PLAYING.
> (I'm assuming you meant PAUSED to READY here - at least that's how I
> remember and how it looks to be from looking at the code).  To me this
> seems correct behaviour.
> 
It was in fact READY=>PAUSED. And resetting the clock based on some arbitrary 
element (the first non-container element to change state to PAUSED) seems 
wrong anyway.

> Why would you need multiple toplevel pipelines *with the same clock* ? A
> clock is connected to a pipeline.  It is possible to set the same clock
> as for one pipeline on a different pipeline, but this should only be
> done when there is good reason to do so - for example, to sink two
> output devices to the same clock even though they're playing something
> different.
> In general, I don't see the need for any current application to have
> this, can you give an example ?
> 
The old system relied on the fact that there is only one system clock.
Don't ask me about the reasons though.

> - READY -> PAUSED should prepare EVERYTHING for data flow, up to the
> moment where it hands the first buffer to rendering elements (output
> sinks), so that they are ready for instantaneous data flow.  It is up
> to the scheduler to see to this.
> - data is allowed to flow in the PAUSED state to handle negotiation and
> get the first buffer of data to the output sinks.
>
You view this from a much too simplistic point of view, namely that of video 
playback.
Do you start sending data through a gst-recorder pipeline capturing from a 
webcam when it is set to PAUSED, but not immediately to PLAYING? That way 
you'll end up with a first picture that is completely wrong.
What about streams from the web?
And what should a scheduler do about gst-launch src ! switch_output_on_eos ! 
ximagesink  switch_output_on_eos0. ! ximagesink ?
With this approach the difference between PAUSED and PLAYING is mainly just 
the question whether the sinks throw incoming data away or not.
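Read that way, a sink's role reduces to a trivial gate. Here is a conceptual Python sketch with made-up names (not GStreamer API), where the sink also keeps the first buffer as a preroll frame, as in the model quoted above:

```python
class RenderSink:
    """Toy model of a rendering sink under the 'data flows in PAUSED' view."""

    def __init__(self):
        self.state = "PAUSED"
        self.preroll = None   # first buffer, kept for instantaneous start
        self.rendered = []

    def chain(self, buf):
        if self.state == "PLAYING":
            self.rendered.append(buf)     # actually render
        elif self.preroll is None:
            self.preroll = buf            # keep the first buffer
        # any later buffer arriving in PAUSED is simply thrown away
```

For a webcam recorder this is exactly the problem: the preroll buffer is captured at PAUSED time, so the first picture can be long stale by the time the pipeline goes to PLAYING.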

> Of course, this is not a change to make right now.  However, for the
> clocking stuff, the solution to me seems rather simple - the clock
> should only start running when all the elements in the pipeline that
> need to go to PLAYING, are in the PLAYING state.  To do this the
> scheduler can keep track of a count of elements that are in PLAYING, and
> when all elements needing to go to PLAYING are in fact PLAYING, it can
> start the clock.  For going from PLAYING to PAUSED, the reverse could be
> done; the clock can be stopped immediately.
> 
How do you determine "elements that need to go to PLAYING" ?
In gst-launch filesrc location=file.mp3 ! spider ! videosink spider0. ! 
audiosink, for example, the videosink never goes to PLAYING.
How do you propose to get { audiosrc ! queue } ! audiosink working where the 
sink is set to PLAYING half a second later than the source _on_purpose_?
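For reference, the counting scheme quoted above amounts to something like this (a minimal Python sketch with hypothetical names, not real scheduler code; the hard part the questions point at is deciding what goes into the set in the first place):

```python
class ClockGate:
    """Start the clock only once every element expected in PLAYING got there."""

    def __init__(self, elements_needing_playing):
        self.pending = set(elements_needing_playing)
        self.clock_running = False

    def reached_playing(self, element):
        self.pending.discard(element)
        if not self.pending:
            self.clock_running = True   # everyone accounted for: start clock

    def left_playing(self, element):
        self.pending.add(element)
        self.clock_running = False      # PLAYING -> PAUSED: stop at once

gate = ClockGate({"filesrc0", "spider0", "audiosink0"})
gate.reached_playing("filesrc0")
gate.reached_playing("spider0")
# clock_running is still False here: audiosink0 is pending
```

If the videosink from the spider example is wrongly put into the set, the clock never starts; if the intentionally delayed audiosink is in it, the source cannot run during its deliberate half-second head start.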

> I don't think the semantics for discont were clearly defined.  It is
> something that would need to be reviewed anyway.  But your explanation
> of it is a bit vague, could you elaborate on it so I can follow ?
> 
Imagine you have a muxed file where each audio chunk contains 5 seconds of 
audio and each video chunk contains 7 seconds of video (bear with me, the 
values in this example are a bit contrived to show the problem, but believe 
me, it's a real problem).
You seek to second 15 on the audio sink. The demuxer finds the next audio to 
start at second 15 and pushes out a DISCONT to second 15. After that it finds 
the next video at second 21 and sends out a DISCONT on the video to second 21.
This needs to be synchronized correctly. Keep in mind that the video data 
might reach the videosink before the audio data reaches the audio sink.
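The numbers in the example follow from chunk-boundary arithmetic. A sketch (`next_chunk_start` is a hypothetical helper, assuming chunks start only on multiples of their duration):

```python
def next_chunk_start(seek_time, chunk_duration):
    """Smallest chunk boundary at or after seek_time (ceiling division)."""
    return -(-seek_time // chunk_duration) * chunk_duration

# Seek to second 15 with 5 s audio chunks and 7 s video chunks:
audio_discont = next_chunk_start(15, 5)   # DISCONT to second 15
video_discont = next_chunk_start(15, 7)   # DISCONT to second 21
```

The sinks therefore receive disconts that disagree by 6 seconds and must still come out synchronized, regardless of which buffer arrives first.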

> It makes sense to me to have only one clock for each pipeline - the
> "time" is basically "the playing/process time (since the last reset of
> the pipeline) of a rendering sink inside the pipeline".  It means
> exactly that, and, it's clearly defined.  In cases where you only have
> one rendering sink, this makes it easy.  In cases where you have more
> than one, one rendering sink uses the clock of the other through the
> pipeline, so they end up synchronized.
> 
Again, you're only seeing this from the video playback view. Time in GStreamer 
is not something exclusive to sinks. When recording from v4l and an audio 
source, for example, you need time to synchronize those two elements even 
though there's not a single sink involved.

> As for time being adjustable - like I said, I'm not sure DISCONT was
> clearly defined.  To me it seems like seeking shouldn't be causing a
> clock discont, for example.
> 
That depends on the definition of "clock" and "time of a clock". The old 
definition was "time == timestamp of data to display now". I used this 
definition for element time in my current design, and I'm using the common 
definition for the time of clocks (see point 1 in 
http://dictionary.reference.com/search?q=time if you need one; it's hard to 
describe).
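The distinction can be made concrete. In this conceptual Python sketch (hypothetical names, not GStreamer API), a DISCONT rebases element time while the clock itself keeps running undisturbed:

```python
class ElementTime:
    """Old definition: element time == timestamp of the data to display now."""

    def __init__(self):
        self.base = 0.0           # element time set by the last discont
        self.clock_at_base = 0.0  # clock reading when that discont arrived

    def handle_discont(self, clock_time, new_element_time):
        # A seek produces a DISCONT: element time jumps, the clock does not.
        self.clock_at_base = clock_time
        self.base = new_element_time

    def now(self, clock_time):
        return self.base + (clock_time - self.clock_at_base)
```

With the seek from the earlier example, handle_discont(clock_time=100.0, new_element_time=15.0) puts the element at stream second 17 when the clock reads 102, while the clock itself never jumped.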

> As for the actual clocking stuff, what precisely do you think is
> unsolvable in the old system ? And what are your plans on the deprecated
> API that was useful to others, and the bugs that have been introduced
> because of the changed clocking ?
> 
The old system provided features that existed more or less only on the API 
level. Asynchronous notification, for example, does not work reliably at all 
across clocks or different states.
That, and the fact that the whole system needs a serious rework, made me 
deprecate that stuff so everyone knows it will probably go away during 0.9.
It still works; in fact, the current code uses it.
So I wouldn't protest if someone un-deprecated it, but keep in mind that it's 
likely to change in 0.9.


Benjamin



