[gst-devel] clocking
in7y118 at public.uni-hamburg.de
Fri Feb 20 05:57:11 CET 2004
Quoting Thomas Vander Stichele <thomas at apestaart.org>:
> > And what should a scheduler do about gst-launch src ! switch_output_on_eos !
> > ximagesink switch_output_on_eos0. ! ximagesink
> Well, tell me what this pipeline intends to do :) I can't make much
> sense of what it would achieve.
>
I have no idea why anyone would use such a pipeline; it would just switch the
output to another window upon receiving an EOS. But there are lots of people
with lots of interesting ideas for GStreamer (starting with Stefan Kost,
passing through totem and gstsci, and certainly not ending with Gnonlin), so
I'm pretty sure something like this will come up somewhere. It's certainly not
wrong in any way I could imagine, and that is what counts here.
> all elements that are not locked and not yet in playing but still in
> paused
>
In gst-launch, when linking SOMETIMES pads, the sink part of the pipeline is
locked until the link can actually be made (a rough sketch of that pattern
follows below).
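For illustration, the lock-until-linked pattern looks roughly like this
(function names as in present-day GStreamer API, which may not match the 0.8
headers exactly; the element names are placeholders):

  #include <gst/gst.h>

  /* before going to PLAYING:
   *   gst_element_set_locked_state (sink, TRUE);
   *   g_signal_connect (demux, "pad-added", G_CALLBACK (on_pad_added), sink);
   */
  static void
  on_pad_added (GstElement *demux, GstPad *newpad, gpointer user_data)
  {
    GstElement *sink = GST_ELEMENT (user_data);
    GstPad *sinkpad = gst_element_get_static_pad (sink, "sink");

    if (!gst_pad_is_linked (sinkpad))
      gst_pad_link (newpad, sinkpad);
    gst_object_unref (sinkpad);

    /* the link exists now, so the sink may follow the pipeline state again */
    gst_element_set_locked_state (sink, FALSE);
    gst_element_sync_state_with_parent (sink);
  }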
> > In gst-launch filesrc location=file.mp3 ! spider ! videosink spider0. !
> > audiosink the videosink never goes to PLAYING for example.
> Because spider doesn't autoplug visualisations ? Then is this not just a
> broken pipeline ? In my book this pipeline should just fail because it's
> wrong.
>
That pipeline has always just worked; it simply did not output any video.
> > How do you propose to get { audiosrc ! queue } ! audiosink working where the
> > sink is set to PLAYING half a second later than the source _on_purpose_?
>
> (I'm assuming the goal of this pipeline is to "play back on some card
> everything that's coming from some card with a delay of half a second").
>
> Two ways:
> a) have the thread and the pipeline have different clocks. The thread
> uses the audiosrc-provided clock, and the main pipeline uses the
> audiosink-provided clock. When audiosrc is set to playing, its clock is
> running and setting correct timestamps on the buffer. The first buffer
> going out will be marked with timestamp 0, which means "play this buffer
> as soon as the clock of the pipeline is playing". After .5 seconds,
> audiosink is set to play, which means its clock (the main pipeline one)
> is now running, and it can immediately output the first buffer with
> timestamp 0 that was already waiting in the queue for .5 seconds. Yes,
> in this case, the two elements have a different concept of the current
> time, *on purpose*. In the thread, the clock marks the recording time.
> In the main pipeline, the clock marks the playing time. They're
> different clocks, and if the synchronization between audiosrc and
> audiosink is perfect (ie, the same device, for example, or externally
> clock-linked using smpte), this will Just Work, and the two, different,
> clocks will always be 0.5 seconds off from each other. If the actual
> hardware devices aren't synced, then they will slowly drift away from
> each other, or the src will catch up with the sink. Those are problems
> to be solved at the application level. But the good thing is, it's easy
> to monitor the clock drift.
>
> b) If you use the same clock for both threads (which wouldn't be the
> default, IMO), then I would say the correct way to do this is to
> - lock the output sink before setting the thread to play
> - the toplevel pipeline sees that all the nonlocked elements are playing
> when you set the thread to play, so it sets the clock to playing
> - the clock here could be provided by either audiosrc or audiosink, and
> in the old system audiosrc gets preference.
> - in front of the audiosink, there would be a "delay" element that does
> nothing else than offset the timestamps on buffers coming in.
>
> So:
> 0.0 sec -> pipeline set to play, first buffer recorded, with timestamp
> 0, and more buffers filling up the queue (queue has to be big enough to
> cross 0.5 secs of course)
> 0.5 sec -> sink unlocked and set to play, sink pulls a buffer, delay
> pulls a buffer from queue, and gets first buffer with timestamp 0.
> delay gives this buffer to sink with timestamp 0.5, audiosink queries
> the clock, sees that it's at 0.5 sec, so it plays the buffer, and
> everything is ok.
>
> Anything wrong with either scenario ?
>
- I'm pretty sure I don't want to use locked state for deciding if a clock
should start running.
- I meant to use the same clock for both elements.
- By default all elements in a pipeline have the same clock. Everything else
is pretty much impossible.
- The timestamp modifier element is a good idea and certainly the correct way
to do it in this case (a rough sketch follows below).
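For what it's worth, a rough sketch of such a timestamp modifier: a buffer
probe that adds a fixed offset to every timestamp before the sink sees it.
The function names are present-day GStreamer API (they may not match 0.8
exactly), and the 0.5 second value and the pad it attaches to are just the
example above:

  #include <gst/gst.h>

  #define DELAY (GST_SECOND / 2)   /* the 0.5 s from the example */

  /* attach on the audiosink's sink pad with
   *   gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
   *                      shift_timestamp, NULL, NULL);
   */
  static GstPadProbeReturn
  shift_timestamp (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
  {
    GstBuffer *buf = gst_buffer_make_writable (GST_PAD_PROBE_INFO_BUFFER (info));

    if (GST_BUFFER_PTS_IS_VALID (buf))
      GST_BUFFER_PTS (buf) += DELAY;   /* buffer recorded at 0.0 now plays at 0.5 */

    GST_PAD_PROBE_INFO_DATA (info) = buf;
    return GST_PAD_PROBE_OK;
  }

The same effect can usually be had without a custom element or probe by
calling gst_pad_set_offset () on the sink pad.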
> Ok, so I can't say anything about this since, as I said the semantics
> for discont are not clearly defined. What is discont ? is it a
> discontinuity in the stream, or a discontinuity in the clock ? If it is
> the first, should seeking be a discontinuity ? It looks to me like these
> were mixed where they shouldn't be.
>
A discont describes that the data of the next buffer does not continue the
bytestream where the data of the previous buffer ended; the stream is
discontinued.
Exactly what "continuing the bytestream" means is up to the actual bytestream
to define. For audio/raw it means that if sample 357 is not followed by sample
358, a discont is sent.
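As a rough illustration of that check for raw audio (written with the
present-day buffer flag; the point is the continuity test, not the exact API):

  #include <gst/gst.h>

  /* mark a raw-audio buffer as discontinuous when its first sample does
   * not follow the previous buffer's last sample (expected_offset is the
   * previous buffer's offset_end) */
  static void
  maybe_mark_discont (GstBuffer *buf, guint64 expected_offset)
  {
    if (GST_BUFFER_OFFSET_IS_VALID (buf) &&
        GST_BUFFER_OFFSET (buf) != expected_offset)
      GST_BUFFER_FLAG_SET (buf, GST_BUFFER_FLAG_DISCONT);
  }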
> So all we need
> is some sort of time abstraction that makes time advance together with
> data. Ie, no data flow, no need for the time to increase.
>
> The only time where "absolute" (outside-of-the-box) real life time
> matters is on the borders between "real life" and "the GStreamer
> system"; ie precisely on input and output sinks from actual devices.
>
There is the question of what you want to make of async notifications, and of
what happens when the clock-providing element stops while other parts of the
pipeline still run (e.g. the audio clock hits EOS while the video still
continues).
> In that respect, I think Wim's ideas for clocking were perfect:
> - when playing, make sure that your pipeline time is advancing just like
> "real life time"
> - when paused, your pipeline time is paused too, even though real life
> time is advancing.
> The pipeline's time is a virtualization of the pipeline's lifecycle,
> backed up/implemented by some way of measuring "real life time" when it
> needs it (ie in playing)
>
Unfortunately this breaks down because the PLAYING/PAUSED distinction cannot
be made pipeline-wide, only per element.
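For reference, the quoted model boils down to: pipeline time = clock time
minus a base time that is pushed forward by the length of every pause, so
pipeline time stands still while paused even though the clock keeps running.
A sketch in present-day API terms (it says nothing about the per-element
problem above):

  #include <gst/gst.h>

  static GstClockTime
  pipeline_running_time (GstElement *pipeline)
  {
    GstClock *clock = gst_element_get_clock (pipeline);
    GstClockTime now, base;

    if (clock == NULL)
      return GST_CLOCK_TIME_NONE;   /* no clock selected yet */

    now = gst_clock_get_time (clock);
    base = gst_element_get_base_time (pipeline);
    gst_object_unref (clock);

    return now - base;
  }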
> Are you saying that all these functions are functions we wouldn't want
> to support in some way or another ? Because getting one-off or repeated
> notifications from the clock seems like a very basic thing we would want
> to have, no ? Anyway, why deprecate functions that we're not sure yet
> we'll throw away or not ? We shouldn't be replacing stuff with something
> we don't know what we'll replace it with yet.
>
I deprecated those functions because they didn't work, didn't have clear
semantics or didn't seem like something that was good from a GStreamer point
of view.
The reason was (and still is) that I don't want anyone telling me during
0.9 "this is a regression, we've always had that, so it must continue to work
even if it's fundamentally flawed" for this stuff.
And since 0.9 will probably see big changes in the scheduling department
(including the question "what to do next?", which is quite fundamental for
async notifications), I don't know what will happen there.
Rest assured that I'm aware of the requirement.