[gst-devel] totem and osssink? (long)
Martin Soto
soto at informatik.uni-kl.de
Thu Mar 11 05:55:07 CET 2004
Hi Thomas,
On Thu, 2004-03-11 at 11:20, Thomas Vander Stichele wrote:
> > For the examples, I will write times in seconds, which are easier to
> > read and think of, instead of nanoseconds.
> (Off-topic - I think our debugging is very hard to read in this regard;
> I'm thinking of changing it all to report time in double seconds, any
> opinions on that ? It's especially hard to read now that the clocks
> internally use "real" time, not "time since start of program")
In my own elements, I'm printing times in seconds with three decimal
places:

  GST_LOG_OBJECT (sink, "new timestamp: %0.3fs",
      (double) timestamp / GST_SECOND);

It makes a large difference in legibility. It would be nice to have a
macro or two in gstinfo for that purpose.
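Something like this would do, I imagine (just a sketch; these macros
don't exist in gstinfo, the names are made up):

  #define GST_TIME_SECONDS_FORMAT "%0.3fs"
  #define GST_TIME_SECONDS_ARGS(t) ((double) (t) / GST_SECOND)

  /* the example above would then become: */
  GST_LOG_OBJECT (sink, "new timestamp: " GST_TIME_SECONDS_FORMAT,
      GST_TIME_SECONDS_ARGS (timestamp));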
> IIRC, pipelines and clocks used to start at 0, so the elements were
> synchronized by default at the start - in theory.
> In practice, due to the fact that going from PAUSED to PLAYING could
> cause elements to need to do quite a bit of processing (for example,
> spider doing autoplugging), this didn't work well, and I believe this is
> what prompted Benjamin to change clocking.
>
> What was happening was this:
> - pipeline gets set from PAUSED to PLAYING
> - all elements thus get set to PLAYING, one by one.
> - suppose an audiosink changes to PLAYING before spider
> - audiosink activates the clock it provides and sets it to start
> counting time
> - spider takes a few seconds to start playing
> - first few buffers arrive at audiosink with timestamp 0, but audiosink
> is already beyond that point, and drops all those buffers
>
> The end result here is that the start of a song got dropped because the
> clock is running before data is flowing.
> (Correct me if I read any of the source wrong in this regard).
I think your description is quite accurate. The problem also happens
after discontinuities. It results in dropped material on either the
audio or the video side, or in funny behavior of various sorts. For
example, the DXR3 card synchronizes video using its own internal clock,
so all you have to do is tell it when it should play a given frame.
Very often, after discontinuities, you see video running really fast
for a fraction of a second. Sometimes it matches the general style of
"The Matrix", but other films may not look as nice.
> Now, the two possible solutions I could think of were
> - change the design so that the state "PAUSED" means "do everything
> needed to get the first buffer queued, but blocked, on the rendering
> sink" - ie, going to this state would make the scheduler be certain that
> all links to rendering sinks contain a buffer. Before this might have
> been awkward, now with the negotiation rewrite there actually is a
> concept of a link between pads, so it might be possible to tack this
> concept on it. Personally, I'd prefer this approach, it seems more
> natural to me.
Well, I agree this is an ideal definition of the PAUSED state, being
consistent with what standard playback devices (CD players and the
like) do. I don't think it'd be that easy to implement, though. Think
of a chain-based sink going from READY to PAUSED. In order to reach
PAUSED, it needs to receive some material and probably put it in the
hardware buffer (you can do that with ALSA: fill the buffer before
playing). The problem is, it must wait for its chain function to be
called before it has any material available. So the change_state
function would have to run concurrently with the chain function, which
would be weird.
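To make the problem concrete, here's a rough sketch (0.8-style API
from memory, not working code; MySink and its prerolled flag are
invented) of what such a prerolling sink would have to do:

  static GstElementStateReturn
  my_sink_change_state (GstElement *element)
  {
    MySink *sink = MY_SINK (element);

    if (GST_STATE_TRANSITION (element) == GST_STATE_READY_TO_PAUSED) {
      /* we would like to preroll here, waiting until our chain
       * function has filled the hardware buffer... */
      while (!sink->prerolled)
        ; /* ...but the chain function only runs after we return,
           * so we would wait forever */
    }

    return GST_ELEMENT_CLASS (parent_class)->change_state (element);
  }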
Does this make sense, or am I missing something?
> - the other solution would be somewhat similar in concept, but maybe
> slightly trickier to get right in code. The idea would be to make sure
> the actual clock doesn't start running until all elements inside the
> pipeline that matter are PLAYING. Since the state of a bin is defined
> to be the highest state of all of its children, you cannot use the state
> of the bin to determine this. What the scheduler can do, however, is
> keep track of the number of non-locked elements that are set to
> PLAYING. Ie, it can see easily if all children that are not locked are
> PLAYING. When all of them are, it can tell the clock to start
> counting. Again, the net effect is that data starts flowing with
> timestamp 0 at the same time the clock starts running.
This looks easier to implement, needing only some interaction between
the clock and the scheduler. It doesn't look as nice conceptually,
though.
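In pseudo-code, I imagine the scheduler would do something like this
after every state change of one of its children (all the names here
are invented):

  /* count non-locked children that have reached PLAYING */
  if (sched->num_playing == sched->num_unlocked)
    /* everybody that matters is PLAYING: let time start running */
    scheduler_start_clock (sched);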
> > Timestamps are time values stored in buffers. They are accessible
> > through the GST_BUFFER_TIMESTAMP macro. The timestamp in a buffer
> > tells the time at which the material in the buffer should start to
> > play. [Is this true? I always use the convention that timestamps are
> > associated to the start of the buffer, but I haven't seen it written
> > anywhere.]
>
> It used to be true in the case where clocks started at 0, ie when they
> used "pipeline playing time". Benjamin changed clocks to use system
> time, I'm still not sure what this bought us. His argument is that
> different pipelines were using the same clock which screwed up stuff,
> IIRC (correct me if I'm wrong). Anyway, two pipelines should only use
> the same GstClock if they really need to for some reason anyway - ie, if
> they are supposed to be perfectly synchronized for some reason. What's
> your take ?
Well, clocks actually work like chronometers right now. They are only
useful for timing intervals; where they start is actually irrelevant.
Sharing a clock between pipelines shouldn't be a big deal. However, if
the clock stops whenever material fails to flow to an audiosink, for
example, that may cause trouble.
My point regarding timestamps, though, was that they could be
associated, for example, with the end of a buffer; that is, they would
tell you when the material in the buffer should finish playing. The
start of the buffer is, however, the convention everyone seems to use.
> I think on a conceptual level it doesn't matter one bit if a clock uses
> system time or "time since start of playback" internally; it can be made
> to work with both, at least in theory. Using system time just makes it
> harder to understand IMO, plus to a pipeline the current system time
> doesn't matter anyway, while time since start of playback has meaning
> for the system. Anyway, just MO.
It really doesn't matter that much. You wouldn't look at the absolute
value of the clock time anyway, only at time differences. On the other
hand, if the clock does not start at 0, gst_element_get_time will
report funny values unless you explicitly set the element time to 0
when going to the PAUSED state, or something similar. I don't know if
there are any provisions for that in GstElement; I haven't seen them
so far.
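If my memory of the API serves, the workaround would amount to
something like this in an element's change_state function (a sketch):

  case GST_STATE_READY_TO_PAUSED:
    /* make gst_element_get_time report values starting from 0,
     * whatever absolute value the clock happens to have */
    gst_element_set_time (element, 0);
    break;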
> > The length of time the material should play is, on the
> > other hand, rather determined by the characteristics of the stream
> > (like, for example, a PAL video frame should play for 1/25th of a
> > second).
>
> There's also GST_BUFFER_DURATION, but I didn't check how much elements
> use this. It is important IMO for some applications; take, for example,
> subtitles, where you want to express that the buffer of text you're
> sending is supposed to be shown from the TIMESTAMP for DURATION length.
OK, I never used it. It would be good to mention it in the write-up as
well.
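For the write-up, I guess an example along these lines would do (a
sketch, I haven't tried it; the variables are made up):

  /* a subtitle element pushing a piece of text that should be shown
   * at second 5 and stay on screen for 2 seconds */
  GstBuffer *buf = gst_buffer_new_and_alloc (strlen (text) + 1);

  memcpy (GST_BUFFER_DATA (buf), text, strlen (text) + 1);
  GST_BUFFER_TIMESTAMP (buf) = 5 * GST_SECOND;
  GST_BUFFER_DURATION (buf) = 2 * GST_SECOND;
  gst_pad_push (srcpad, GST_DATA (buf));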
> There's also the question of what to do when the element moves away from
> PLAYING. In theory an element providing this clock is supposed to
> provide this clock in all states - there is no way for the rest of the
> system to know currently if a clock cannot function anymore.
Hmmm, that's a general problem we have with the semantics of clocks.
Do we really want them to be real-time clocks or not? Our typical
audio clock is not a real-time one, in the sense that if the sink
doesn't get any sound, the clock stops running. IMO, this shouldn't
happen because, even if that helps simple pipelines, it may break more
complex ones.
> Currently a discont event sends the new time to use as a base time,
> right ? Here's my idea - have the discont event contain both the new
> time to use as base time, as well as the ACTUAL clock time at the moment
> of deciding what base time to use in the discont event. This maps the
> old GstClock time to the new base time elements have to use.
>
> When the discont event then reaches the element, the element can query
> the current clocktime, and add the difference between current time and
> event's clock time to the base time it has to set.
>
> Would this work ?
Yeah, I guess so (see the sketch below for how I understand it). But
please look at my other message in this thread for an alternative. A
couple of issues remain. First, all elements sending discontinuities
will need access to the clock. No big deal, but this may be more
invasive than just changing the sinks. Second, this links the new
element base time to the time the discont event is sent, which is way
before the sinks are actually ready to play the material following it.
In other words, this only worsens the problem we discussed earlier.
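Here's how I understand the compensation would look in the receiving
element (a sketch; event_base_time and event_clock_time are
hypothetical fields, since the current discont event carries no
reference clock time):

  /* on receiving the extended discont event: */
  GstClockTime now = gst_clock_get_time (clock);
  /* how long the event took to travel from its origin to us */
  GstClockTime delay = now - event_clock_time;

  gst_element_set_time (element, event_base_time + delay);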
Enjoy,
M. S.
--
Martin Soto <soto at informatik.uni-kl.de>