[gst-devel] totem and osssink? (long)
Thomas Vander Stichele
thomas at apestaart.org
Thu Mar 11 02:34:03 CET 2004
Hi Martin,
Thanks for the write-up :) It contains some stuff I already knew, and
also some stuff I hadn't figured out myself yet...
Let's go through it...
> For the examples, I will write times in seconds, which are easier to
> read and think of, instead of nanoseconds.
(Off-topic - I think our debugging is very hard to read in this regard;
I'm thinking of changing it all to report time in double seconds, any
opinions on that ? It's especially hard to read now that the clocks
internally use "real" time, not "time since start of program")
> In order to make things a bit easier to program, and since clocks have
> arbitrary base times anyway, elements provide a way to change their
> particular base time. Function gst_element_set_time is used for this
> purpose. So if you say now
>
> gst_element_set_time (elem, 100 * GST_SECOND)
AFAIK this should only be called by the element itself, no ? Certainly
not by apps, and probably not by other parts of the core.
> Discontinuous ("discont") events are used for this purpose. Discont
> events contain a time value. The typical handler for such an event (at
> least in sink elements) looks like this:
>
> case GST_EVENT_DISCONTINUOUS:
> {
>   GstClockTime time;
>
>   if (gst_event_discont_get_value (event, GST_FORMAT_TIME, &time)) {
>     gst_element_set_time (GST_ELEMENT (sink), time);
>   }
>   break;
> }
>
> This means, in principle, all you need to do is send a discont event,
> in order for your sinks to have a consistent time base.
>
> [As far as I understand it, it is not possible at all for two elements
> to synchronize if they don't receive a proper discont event. I think
> most source elements don't send a discont at start, and that may be a
> cause for programs not working anymore after Benjamin's last changes.]
IIRC, pipelines and clocks used to start at 0, so the elements were
synchronized by default at the start - in theory.
In practice, due to the fact that going from PAUSED to PLAYING could
cause elements to need to do quite a bit of processing (for example,
spider doing autoplugging), this didn't work well, and I believe this is
what prompted Benjamin to change clocking.
What was happening was this:
- pipeline gets set from PAUSED to PLAYING
- all elements thus get set to PLAYING, one by one.
- suppose an audiosink changes to PLAYING before spider
- audiosink activates the clock it provides and sets it to start
counting time
- spider takes a few seconds to start playing
- first few buffers arrive at audiosink with timestamp 0, but audiosink
is already beyond that point, and drops all those buffers
The end result here is that the start of a song got dropped because the
clock is running before data is flowing.
(Correct me if I read any of the source wrong in this regard).
Now, the two possible solutions I could think of were
- change the design so that the state "PAUSED" means "do everything
needed to get the first buffer queued, but blocked, on the rendering
sink" - ie, going to this state would make the scheduler be certain that
all links to rendering sinks contain a buffer. Before this might have
been awkward, now with the negotiation rewrite there actually is a
concept of a link between pads, so it might be possible to tack this
concept on it. Personally, I'd prefer this approach, it seems more
natural to me.
The net effect would be that the operation of going from PAUSED to
PLAYING would be very inexpensive by design (because the framework makes
sure that everything is ready to go to PLAYING immediately). Also, in
practice, the reverse would also be true - going back to PAUSED would be
very light since the data was already flowing.
I think that the only reason why this approach wasn't tried in the past
is because Erik wanted to make absolutely sure no data was flowing in
the PAUSED state. Anyway, I'd like this approach because it's natural,
clear, and seems well-defined to me.
- the other solution would be somewhat similar in concept, but maybe
slightly trickier to get right in code. The idea would be to make sure
the actual clock doesn't start running until all elements inside the
pipeline that matter are PLAYING. Since the state of a bin is defined
to be the highest state of all of its children, you cannot use the state
of the bin to determine this. What the scheduler can do, however, is
keep track of the number of non-locked elements that are set to
PLAYING. Ie, it can see easily if all children that are not locked are
PLAYING. When all of them are, it can tell the clock to start
counting. Again, the net effect is that data starts flowing with
timestamp 0 at the same time the clock starts running.
This still can make going from PAUSED to PLAYING expensive though - but
in a way that doesn't drop buffers from playback.
(Trying out xmms again and noticing how fast it starts up on playback,
I'm wondering if this will ever be possible when using spider if we
would use the second method. Then again, maybe people would argue that
it should typefind first and not use spider later...)
>
> Timestamps
> ----------
>
> Timestamps are time values stored in buffers. They are accessible
> through the GST_BUFFER_TIMESTAMP macro. The timestamp in a buffer
> tells the time at which the material in the buffer should start to
> play. [Is this true? I always use the convention that timestamps are
> associated to the start of the buffer, but I haven't seen it written
> anywhere.]
It used to be true in the case where clocks started at 0, ie when they
used "pipeline playing time". Benjamin changed clocks to use system
time, I'm still not sure what this bought us. His argument is that
different pipelines were using the same clock which screwed up stuff,
IIRC (correct me if I'm wrong). Anyway, two pipelines should only use
the same GstClock if they really need to for some reason anyway - ie, if
they are supposed to be perfectly synchronized for some reason. What's
your take ?
I think on a conceptual level it doesn't matter one bit if a clock uses
system time or "time since start of playback" internally; it can be made
to work with both, at least in theory. Using system time just makes it
harder to understand IMO, plus to a pipeline the current system time
doesn't matter anyway, while time since start of playback has meaning
for the system. Anyway, just my opinion.
> The length of time the material should play is, on the
> other hand, rather determined by the characteristics of the stream
> (like, for example, a PAL video frame should play for 1/25th of a
> second).
There's also GST_BUFFER_DURATION, but I didn't check how many elements
use this. It is important IMO for some applications; take, for example,
subtitles, where you want to express that the buffer of text you're
sending is supposed to be shown from the TIMESTAMP for DURATION length.
> Now, implementing a GstClock based on a sound card output is not that
> difficult. The usual approach is to keep a running count of the number
> of samples written to the card (you update it every time you write any
> data). If you divide that by the sampling rate, you basically obtain
> the playback time since you started writing to the device. Except that
> material written to the sound interface doesn't play immediately,
> because there's usually a hardware buffer. In order to obtain the
> exact playback time, you need to subtract the amount of material
> currently waiting in the hardware buffer. This amount can be obtained,
> for instance, using the ODELAY ioctl in OSS, or the snd_pcm_delay
> function in ALSA.
There's also the question of what to do when the element moves away from
PLAYING. In theory an element providing this clock is supposed to
provide this clock in all states - there is no way for the rest of the
system to know currently if a clock cannot function anymore.
When going to PAUSED, the clock could choose to send 0's to the output
device so it can keep track of time. Or, it can switch to using the
system time internally to keep track of time, which in practice would be
fine, since the change in accuracy/drift doesn't matter much if you're
not playing anything on the device.
Alternatively, we can have helper functions to do just this (have the
system clock take over), or even implement a priority system for clocks,
so that, based on the accuracy/drift a clock reports, the most correct
clock is chosen by default. All we need to do then is to make sure that
"clock jumps" are handled correctly by temporarily letting the two
clocks exchange information, with a gst_clock_jump (clock1, clock2)
function for example.
> Our current solution [which is actually a very clever hack from
> Benjamin, don't take me wrong here] works in sort of a "snap to grid"
> fashion. GstClock objects provide a gst_clock_get_event_time
> function. The value of gst_clock_get_event_time is usually identical
> to the value of gst_clock_get_time, i. e. it is the current clock
> time. However, if you invoke gst_clock_get_event_time twice in a short
> interval (how short is determined by the max-diff property in the
> clock object, whose default value is 2 seconds) you receive exactly
> the same value, namely, the time of the first invocation.
Hm, that is a neat hack in some cases, and pretty ugly in others :) I
didn't know this worked as you now explained, but it's good to know that
it's there :)
I think I understand the problem. I also think I may have an elegant
solution, but smack me if it's wrong.
Currently a discont event sends the new time to use as a base time,
right ? Here's my idea - have the discont event contain both the new
time to use as base time, as well as the ACTUAL clock time at the moment
of deciding what base time to use in the discont event. This maps the
old GstClock time to the new base time elements have to use.
When the discont event then reaches the element, the element can query
the current clocktime, and add the difference between current time and
event's clock time to the base time it has to set.
Would this work ?
Thomas
Dave/Dina : future TV today ! - http://www.davedina.org/
<-*- thomas (dot) apestaart (dot) org -*->
if you ever lay a finger on my left side
if you ever lay a finger on me I will open
<-*- thomas (at) apestaart (dot) org -*->
URGent, best radio on the net - 24/7 ! - http://urgent.fm/