RFC: Change OML_sync_control UST to CLOCK_MONOTONIC

Mario Kleiner mario.kleiner at tuebingen.mpg.de
Thu Jun 14 13:50:06 PDT 2012


> 
> Message: 2
> Date: Thu, 14 Jun 2012 17:19:11 +0000 (UTC)
> From: Joakim Plate <elupus at ecce.se>
> Subject: Re: RFC: Change OML_sync_control UST to CLOCK_MONOTONIC
> To: dri-devel at lists.freedesktop.org
> Message-ID: <loom.20120614T191057-86 at post.gmane.org>
> Content-Type: text/plain; charset=us-ascii
> 
> 
>>>
>>> From what I can tell, it should be using: ktime_to_ns(ktime_get()) / 1000.
>>> Only issue is that changing it will break any app relying on it being the
>>> REALTIME clock.
>>>
>> 
>> Apps that rely on it being anything special are badly broken, and I
>> don't think there is any such app. The specification strongly stresses
>> that apps should make no assumptions about it.
>> 
> 
> While that may be true... Since there is no other API for getting this UST
> clock, it's somewhat limited in use. Even if I know vsync happened at time X,
> if I don't know what time it is "now", how can I make use of it?
> 
> Spec says: "The Unadjusted System Time (or UST)
>    is a 64-bit monotonically increasing counter that is available
>    throughout the system."
> 
> If across the system, the only API to get to this value is through GLX api, it's 
> rather hard to make use of.
> 
> For example, syncing audio to vsync: one needs to sync audio output written
> to the audio renderer now with this clock.
> 
> Also, regarding relying on current behavior... Even if this change is made
> now, there will be a lot of systems with the old behavior. So knowing
> whether the change has been made on a given system is crucial for supporting
> both / not enabling the feature when it is unreliable.
> 
> /Joakim
> 

According to the spec, CLOCK_MONOTONIC would have been the right choice.

In practice, as far as my experience with using it goes, it doesn't matter much, as long as you don't manually set a new system clock time while such an application is running, which is rather infrequent. Very small, slow clock changes, e.g., due to NTP adjustments, shouldn't matter much either: an app will probably not use a returned timestamp to schedule some action very far ahead in time, where error could accumulate over multiple adjustments, but only to synchronize things on a short timescale.

In my toolkit, e.g., I use the OML_sync_control timestamps to correlate and/or synchronize the onset of a visual stimulus with audio playback or capture, timestamps from keyboard/mouse/whatever... input, time-stamped eye movement information from eye trackers, or the reception or emission of digital trigger signals that control research equipment. The whole thing is timing sensitive at the millisecond level, but usually we only use timestamps to schedule ahead a few video refresh cycles, or a few seconds at most, so typical NTP adjustments do no harm. I'd think a video player or similar app is a simpler special case when it comes to audio-video sync.

A change now would break apps that strictly relied on CLOCK_REALTIME semantics to synchronize with other modalities which themselves use CLOCK_REALTIME / gettimeofday(), even though the OML_sync_control spec says an app shouldn't rely on that. What I do in my software is this:

<https://github.com/kleinerm/Psychtoolbox-3/blob/master/PsychSourceGL/Source/Linux/Base/PsychTimeGlue.c#L263>

Because the offset between gettimeofday() and CLOCK_MONOTONIC is on the order of 40 years, you can reliably determine by comparison whether a returned timestamp comes from CLOCK_REALTIME or CLOCK_MONOTONIC, and then remap it to whichever timebase you find convenient. It's not beautiful, but it allows doing the right thing should the timebase change in some Linux version. I also needed this for audio sync, because at some ALSA version, some (all?) sound drivers switched from one timebase to the other.

Not sure if this adds anything new to the topic; I just thought I'd share my experience with using it so far.
-mario
