<div dir="ltr"><div>I would have the first call return 0.0 and all the others return the difference between current time and when that first call was done. Then there is no worry about accuracy of floating point. I do not think any callers are interested in the absolute time, only in subtracting two results to get an elapsed time.<br><br></div>Not sure if cpu time is what the benchmarks want. This does not include blocking waiting for the X server or for the GPU or for reading files. Elapsed real time is probably more useful.<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jun 2, 2015 at 9:03 AM, Ben Avison <span dir="ltr"><<a href="mailto:bavison@riscosopen.org" target="_blank">bavison@riscosopen.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Tue, 02 Jun 2015 08:32:35 +0100, Pekka Paalanen <<a href="mailto:ppaalanen@gmail.com" target="_blank">ppaalanen@gmail.com</a>> wrote:<br>
On Tue, Jun 2, 2015 at 9:03 AM, Ben Avison <bavison@riscosopen.org> wrote:
> On Tue, 02 Jun 2015 08:32:35 +0100, Pekka Paalanen <ppaalanen@gmail.com> wrote:
>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
most pixman performance benchmarks currently rely on gettime() from<br>
test/util.[ch]:<br>
- lowlevel-blt-bench<br>
- prng-test<br>
- radial-perf-test<br>
- scaling-bench<br>
<br>
Furthermore, affine-bench has its own gettimei() which is essentially<br>
gettime() but with uin32_t instead of double.<br>
</blockquote>
<br></span>
> For what it's worth, here's my opinion. I'll sidestep the issue of
> *which* underlying system clock is read for now, and look at data types.
>
> It certainly makes more sense to use doubles than floats for holding
> absolute times. As of 2005-09-05 05:58:26 UTC, the number of microseconds
> elapsed since 1970-01-01 00:00:00 UTC has been expressible as a 51-bit
> integer. The next time that changes will be 2041-05-10 11:56:53 UTC, when
> that goes up to a 52-bit integer.
>
> IEEE double-precision floating point numbers use a 52-bit mantissa, so
> they are capable of expressing all 51- and 52-bit integers without any
> loss of precision. In fact, we don't lose precision until we reach 54-bit
> integers (because the mantissa excludes the implicit leading '1' bit):
> after 2255-06-05 23:47:34 UTC the times would start being rounded to an
> even number of microseconds.
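For a quick stand-alone check of that claim (illustrative code, not anything
in the pixman tree), the current microsecond count can be round-tripped
through a double:

    #include <stdio.h>
    #include <stdint.h>
    #include <sys/time.h>

    int
    main (void)
    {
        struct timeval tv;
        gettimeofday (&tv, NULL);

        /* Microseconds since the epoch, as a 64-bit integer and a double. */
        int64_t us = (int64_t) tv.tv_sec * 1000000 + tv.tv_usec;
        double  d  = (double) us;

        /* Doubles carry 53 significant bits, so the round trip stays exact
         * until the count needs 54 bits (around the year 2255). */
        printf ("%lld round-trips %s\n", (long long) us,
                (int64_t) d == us ? "exactly" : "with rounding");
        return 0;
    }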
>
> With only 23 mantissa bits in single-precision, times would currently
> be rounded with a granularity of over 2 minutes - unworkable for most
> purposes.
>
> Even dividing by 1000000, as gettime() does, is fairly harmless with
> double-precision floating point - all you're really doing is subtracting
> 20 from the exponent and adding a few multiples of the upper bits of the
> mantissa into the lower bits.
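To see that in numbers (a stand-alone illustration, not pixman code): two
microsecond counts one tick apart remain distinct after the division to
seconds.

    #include <stdio.h>

    int
    main (void)
    {
        /* Microseconds since the epoch around mid-2015, and one tick later. */
        double a = 1433233380000000.0;
        double b = 1433233380000001.0;

        /* The division only adjusts the exponent and nudges the mantissa;
         * the two values are still distinguishable as seconds. */
        printf ("%.6f vs %.6f: %s\n", a / 1e6, b / 1e6,
                a / 1e6 != b / 1e6 ? "distinct" : "collapsed");
        return 0;
    }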
>
> But this is ignoring the fact that underneath we're calling
> gettimeofday(), which suffers from a perennial problem with clock APIs:
> the use of an absolute time expressed as an integer which is liable to
> overflow. There are a limited number of transformations you can safely
> perform on these - subtracting one from another is notable as a useful
> and safe operation (assuming the time interval is less than the maximum
> integer expressible, which will normally be the case).
>
> Assigning the time to a variable of wider type (such as assigning the
> long int tv_sec to a uint64_t) is *not* safe, unless you have a reference
> example of a nearby time that's already in the wider type, from which you
> can infer the most significant bits. There is no provision in the API as
> defined to pass in any such reference value, and when gettime() assigns
> the time to a double, that's effectively a very wide type indeed because
> it can hold the equivalent of an integer over 1000 bits long.
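A small demonstration of both points (illustrative only; the values are made
up so that a wrap-around falls between the two samples):

    #include <stdio.h>
    #include <stdint.h>

    int
    main (void)
    {
        /* Two 32-bit microsecond timestamps; the counter wrapped between
         * taking them. */
        uint32_t before = 0xFFFFFFF0u;
        uint32_t after  = 0x00000014u;

        /* Unsigned subtraction is modulo 2^32, so the wrap cancels out and
         * the true 36-microsecond interval is recovered. */
        printf ("interval = %u us\n", (unsigned) (after - before));

        /* Widening a single wrapped value, however, cannot restore the
         * missing upper bits: this prints 20, not the "true" 0x100000014. */
        printf ("widened  = %llu\n", (unsigned long long) after);
        return 0;
    }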
>
> Assuming 'long int' continues to be a signed 32-bit number, as it usually
> is for today's 32-bit compilers, tv_sec will suffer signed overflow on
> 2038-01-19 03:14:08 UTC, which will hit long before we start losing
> precision for doubles. That's only 23 years away now, still within the
> careers of many of today's engineers.
>
> Dividing an integer absolute time is also no good, because differing
> values of the overflowed upper bits would completely scramble all the
> lower bits. gettimei() gets away with it in the #ifndef HAVE_GETTIMEOFDAY
> clause because CLOCKS_PER_SEC is normally 1000000, so the multiplication
> and division cancel each other out. Multiplication and addition, on the
> other hand, are OK so long as you don't widen the type, because the
> missing upper bits only affect other missing upper bits in the result -
> hence gettimei() multiplies tv_sec by 1000000 and adds tv_usec. The
> output of the function is safe to use to calculate time intervals so long
> as the interval doesn't exceed +/- 2^31 microseconds (about 35 minutes).
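The arithmetic being described looks roughly like this (a sketch of the
shape only, with a made-up name; it is not the actual affine-bench code):

    #include <stdint.h>
    #include <sys/time.h>

    /* Absolute time in microseconds, truncated to 32 bits.  The upper bits
     * of tv_sec are lost, but because the multiplication and addition are
     * done without widening, the difference of two results is still a valid
     * interval for spans up to about 2^31 microseconds (~35 minutes). */
    static uint32_t
    gettime_u32 (void)
    {
        struct timeval tv;
        gettimeofday (&tv, NULL);
        return (uint32_t) tv.tv_sec * 1000000u + (uint32_t) tv.tv_usec;
    }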
>
> If I were to make one change to gettimei() now, looking back, it would be
> to make it return int32_t instead. This is because most often you'd be
> subtracting two sample outputs of the function, and it's more often
> useful to consider time intervals as signed (say if you're comparing the
> current time against a timeout time which may be in the past or the
> future). If gettimei() returns a signed integer, then C's type promotion
> rules make the result of the subtraction signed automatically.
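For example (hypothetical helper names, for illustration only):

    #include <stdint.h>
    #include <sys/time.h>

    /* Signed variant of the 32-bit microsecond clock sketched above. */
    static int32_t
    gettime_i32 (void)
    {
        struct timeval tv;
        gettimeofday (&tv, NULL);
        return (int32_t) ((uint32_t) tv.tv_sec * 1000000u +
                          (uint32_t) tv.tv_usec);
    }

    /* A deadline in the past gives a negative difference, one in the future
     * a positive one.  Doing the subtraction in uint32_t and reinterpreting
     * the result is the strictly portable spelling of the signed wrap-around
     * subtraction described above. */
    static int
    deadline_passed (int32_t deadline)
    {
        return (int32_t) ((uint32_t) gettime_i32 () -
                          (uint32_t) deadline) >= 0;
    }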
>
> Ben