Xserver and Gettimeofday

Brice Goglin brice.goglin at gmail.com
Tue Aug 28 23:51:34 PDT 2007


Lukas Hejtmanek wrote:
> Hello,
>
> I was playing with some HD streaming and I noticed that XV overlays make
> heavy use of gettimeofday (in particular the nvidia closed-source driver,
> but the open-source one is even worse), resulting in up to 50% of CPU time
> being spent in the kernel in clock_gettime and context switches.
>
> Is there any possible solution for this? I guess it is just a stupid driver
> architecture that polls gettimeofday in a loop instead of waiting for an IRQ.
>   
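
For reference, the pacing pattern described above boils down to something like
the following sketch (purely illustrative, not actual driver code; the 60 Hz
frame interval and the present_frame() helper are hypothetical):

/* Purely illustrative, not actual driver code: a busy-wait pacing loop
 * that hammers gettimeofday(). Every iteration asks the kernel for the
 * time, which is where the clock_gettime time and context switches go.
 */
#include <stdio.h>
#include <sys/time.h>

static void present_frame(int frame)        /* hypothetical stand-in */
{
        printf("frame %d presented\n", frame);
}

int main(void)
{
        struct timeval now, deadline;
        gettimeofday(&deadline, NULL);

        for (int frame = 0; frame < 10; frame++) {
                /* next frame is due 1/60 s (~16667 us) after the last */
                deadline.tv_usec += 16667;
                if (deadline.tv_usec >= 1000000) {
                        deadline.tv_usec -= 1000000;
                        deadline.tv_sec++;
                }
                /* spin on gettimeofday() until the deadline passes */
                do {
                        gettimeofday(&now, NULL);
                } while (now.tv_sec < deadline.tv_sec ||
                         (now.tv_sec == deadline.tv_sec &&
                          now.tv_usec < deadline.tv_usec));
                present_frame(frame);
        }
        return 0;
}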


At OLS 2006, Dave Jones (in his famous talk about why user space sucks)
complained about X calling gettimeofday too often (and gettimeofday
being expensive). Things like mmap'ing /dev/rtc were proposed but in the
end never merged into Linux. There's a new timerfd syscall in 2.6.22
which lets a process block on a file descriptor until a timer expires;
I don't know whether it could help with your problem.
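
For example, here is a minimal sketch of blocking on the timer instead of
polling. Note the exact syscall interface was still in flux around 2.6.22;
this uses the timerfd_create()/timerfd_settime() API that later became
standard, and the 60 Hz interval is again only illustrative:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/timerfd.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
        int fd = timerfd_create(CLOCK_MONOTONIC, 0);
        if (fd < 0) {
                perror("timerfd_create");
                return EXIT_FAILURE;
        }

        /* fire every 1/60 s (~16.67 ms), starting one interval from now */
        struct itimerspec its = {
                .it_value    = { .tv_sec = 0, .tv_nsec = 16666667 },
                .it_interval = { .tv_sec = 0, .tv_nsec = 16666667 },
        };
        if (timerfd_settime(fd, 0, &its, NULL) < 0) {
                perror("timerfd_settime");
                return EXIT_FAILURE;
        }

        for (int frame = 0; frame < 10; frame++) {
                uint64_t expirations;
                /* blocks in the kernel until the timer fires; no polling */
                if (read(fd, &expirations, sizeof(expirations)) < 0) {
                        perror("read");
                        return EXIT_FAILURE;
                }
                printf("tick %d (%llu expiration(s))\n",
                       frame, (unsigned long long)expirations);
        }

        close(fd);
        return EXIT_SUCCESS;
}

The read() puts the process to sleep until the timer expires, so the pacing
loop costs one wakeup per frame instead of a steady stream of gettimeofday()
calls.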

Brice



