[Cogl] [PATCH 3/3] Add CoglFrameTimings

Owen Taylor otaylor at redhat.com
Fri Jan 25 09:46:45 PST 2013


On Thu, 2013-01-24 at 19:59 +0000, Robert Bragg wrote:
> On Mon, Jan 21, 2013 at 10:06 PM, Owen Taylor <otaylor at redhat.com> wrote:

> > For me, it's definitely essential to have a correlated scale *and* a
> > correlated absolute position. My main interest is Audio-Video
> > synchronization, and for that, the absolute position is needed.
> 
> I can see that you want to correlate an absolute position in this
> case, but I think it's helpful to clarify that it's the absolute
> position with respect to your media layer's time source. I think this
> discussion would be clearer if we refrained from using the term
> "system time" with the suggestion that it implies some particular
> canonical time source. A system can support numerous time sources.

Yes, a system has many clocks, but I don't want to lose track of the
fact that presentation occurs at some definite time. We aren't dealing
with relativity here - for any particular short interval of time we can
translate between clocks with reasonable accuracy. (And the accuracy we
care about is not that high - 100us is certainly good enough.)

If we have a "clock" where we can't do that mapping, it's a much less
useful clock.

So the ideal end goal here is to provide the user with sufficient
information to translate the presentation timestamp into the clock of
their choice. There are basically two ways of doing it:

 A) Define our own clock with a get_current_time() function
 B) Use a well-known clock

[...]

> > I'm really not sure what kind of applications you are thinking about
> > that don't need system time correlation. Many applications don't need
> > presentation timestamps *at all*. For those that do, A/V synchronization
> > is likely the most common case. There is certainly fragility and
> > complexity in the mapping of UST values onto system time, but Cogl
> > seems like the right place to bite the bullet and encapsulate that
> > fragility.
> 
> I'm just thinking of typical applications that need to drive
> tweening/ease-in/out like animations that should complete with a given
> duration. Basically most applications doing something a bit fancy with
> their UI would fall into this category. This kind of animation can
> certainly benefit from tracking presentation times so as to predict
> when frames will become visible to a user. For example Clutter's
> current approach of using g_source_get_time() to drive animations
> > means that in the common case where _swap_buffers won't block for the
> > first swap - when there is a back buffer free to start the next frame
> > - Clutter can end up drawing 2 frames in quick succession using
> > two timestamps that are very close together even though those frames
> > will likely be presented ~16ms apart in the end. Looking at
> > recent, historic presentation times would give one simple way of
> predicting when a frame will become visible and thus how far to
> progress animations.

For this type of computation, I'm currently using the refresh rate -
it's a lot more reliable than trying to track history.

> A/V synchronization seems like a much more specialized problem in
> comparison, so I wouldn't have considered it the common case, though
> it's certainly an important use case. This is also where I find the
> term "system time" most misleading, and think it might be clearer to
> be more explicit and refer to a "media time" or "a/v time" since
> conceptually there is no implied relationship between a/v timestamps
> and say g_get_monotonic_time(). You are faced with basically the same
> problem of having to map between a/v time and g_get_monotonic_time()
> as with mapping from UST to g_get_monotonic_time(). Using gstreamer as
> an example you have a GstClock which is just another monotonic clock
> with an unknown base. Gstreamer is also designed so the GstClock
> implementation can be replaced, but notably it does provide api to
> query the current time which could be used for offset correlation.
> 
> For the problem of correlating A/V then the g_get_monotonic_time()
> time line is a middle man. Assuming you are using gstreamer then I
> expect what you want in the end is a mapping to a GstClock time line.
> I wonder if adding a cogl_gst_frame_info_get_presentation_time() would
> be more convenient to you? The function could take a GstClock pointer
> so it can query a timestamp for doing the mapping.

I could go into details - but the simple answer is that I'm not so
interested in directly doing A/V synchronization with GST - I'm
interested in getting timestamps in the compositor that can be passed to
clients that can then synchronize with GST (or other media frameworks).
And linking to GST to just get a vtable with get_current_time() doesn't
seem useful.

> >> Even with the recent change to the drm drivers the scale has changed
> >> from microseconds to nanoseconds.
> >
> > Can you give me a code reference for that? I'm not finding that change
> > in the DRM driver sources.
> 
> It looks like I jumped the gun here. I was thinking about commit
> c61eef726a78ae77b6ce223d01ea2130f465fe5c which makes the drm drivers
> query CLOCK_MONOTONIC time instead of gettimeofday for vblank events.
> I was assuming that since gettimeofday reports a timeval in
> microseconds and clock_gettime reports a timespec in nanoseconds that
> now the vblank events would be reporting time stamps in nanoseconds.
> On closer inspection though the drm interface uses a timeval to report
> the time, not a uint64 like I'd imagined, so I think it's actually
> reporting CLOCK_MONOTONIC times but in microseconds instead of
> nanoseconds.
> 
> Something to consider though is that if the Nvidia driver were to
> report CLOCK_MONOTONIC timestamps on Linux then it may report those in
> nanoseconds so our heuristics in Cogl for detecting CLOCK_MONOTONIC
> may need updating to consider both cases.
> 
> >
> >> I would suggest that if we aren't sure what timesource the driver is
> >> using then we should not attempt to do any kind of mapping.
> >
> > I'm fine with that - basically I'll do whatever I need to do to get that
> > knowledge working on the platforms that I care about (which is basically
> > Linux with open source or NVIDIA drivers), and where I don't care, I
> > don't care.
> 
> It could be good to get some input from Nvidia about how they report
> UST values from their driver.

I'll do some investigation / asking around about this.

> >> > * If we start having other times involved, such as the frame
> >> >   time, or perhaps in the future the predicted presentation time
> >> >   (I ended up needing to add this in GTK+), then I think the idea of
> >> >   parallel API's to either get a raw presentation timestamp or one
> >> >   in the timescale of g_get_monotonic_time() would be quite clunky.
> >> >
> >> >   To avoid a build-time dependency on GLib, what makes sense to me is to
> >> >   return timestamps in terms of g_get_monotonic_time() if built against
> >> >   GLib and in some arbitrary timescale otherwise.
> >>
> >> With my current doubts and concerns about the idea of mapping to the
> >> g_get_monotonic_time() timescale I think we should constrain ourselves
> >> to only guarantee the scale of the presentation timestamps to being in
> >> nanoseconds, and possibly monotonic. I say nanoseconds since this is
> >> consistent with how EGL defines UST values in khrplatform.h and having
> >> a high precision might be useful in the future for profiling if
> >> drivers enable tracing the micro progression of a frame through the
> >> GPU using the same timeline.
> >>
> >> If we do find a way to address those concerns then I think we can
> >> consider adding a parallel api later with a _glib namespace but I
> >> struggle to see how this mapping can avoid reducing the quality of the
> >> timing information so even if Cogl is built with a glib dependency I'd
> >> like to keep access to the more pristine (and possibly significantly
> >> more accurate) data.
> >
> > I'm not so fine with the lack of absolute time correlation. It seems
> > silly to me to have reverse-engineering code in *both* Mutter and COGL,
> > which is what I'd have to do.
> >
> > Any chance we can make COGL (on Linux) always return a value based
> > on CLOCK_MONOTONIC? We can worry about other platforms at some other
> > time.
> 
> It would be good to hear what you think about having a
> cogl_gst_frame_info_get_presentation_time() instead.

See above.

> Although promising CLOCK_MONOTONIC clarifies the cross-platform issues
> when compared to g_get_monotonic_time(), if there are any Linux drivers
> that use CLOCK_MONOTONIC_RAW, for example, this could go quite badly
> wrong. At least if we constrain the points where we do offset mapping
> to times when we really need it (such as for a/v synchronization) then
> we minimize the impact if the mapping can't be done accurately or if
> it breaks monotonicity.

A driver that used CLOCK_MONOTONIC_RAW would be misunderstanding
CLOCK_MONOTONIC_RAW ... it exists so that ntpd can read a clock that
isn't affected by the frequency adjustments it is itself making. We
don't have access to it from GL, but at the DRM level there's a
DRM_CAP_TIMESTAMP_MONOTONIC capability that, when present, promises
CLOCK_MONOTONIC timestamps.

[...]

> It seems like the main issue we still have some disagreements about is
> the g_get_monotonic_time() mapping, but I don't think this has to
> block landing anything at this stage.
> 
> Adding cogl_gst_frame_info_get_presentation_time() or
> cogl_glib_frame_info_get_presentation_time() functions would be two
> potential solutions. Before the 1.14 release there is also the
> possibility of committing to stricter timeline guarantees which
> wouldn't be incompatible with more conservative scale guarantees to
> start with.

cogl_glib_frame_info_get_presentation_time() would be OK. There are
various reservations I expressed earlier (like how this scales if we
start keeping many times in the CoglFrameInfo), but in the end, we could
always implement it as cogl_frame_info_get_presentation_time() / 1000
if we needed to go that way.

- Owen



