[Pixman] RFC: Pixman benchmark CPU time measurement

Bill Spitzak spitzak at gmail.com
Tue Jun 2 15:03:01 PDT 2015


I would have the first call return 0.0 and all the others return the
difference between the current time and the time of that first call. Then
there is no worry about floating-point accuracy. I do not think any
callers are interested in the absolute time, only in subtracting two
results to get an elapsed time.
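
Something along these lines, say (just a sketch of what I mean, still
using gettimeofday() like the current gettime(); the name is made up):

    static double
    elapsed_time (void)
    {
        static struct timeval start;
        static int have_start;
        struct timeval now;

        gettimeofday (&now, NULL);
        if (!have_start)
        {
            start = now;
            have_start = 1;
            return 0.0;
        }
        return (now.tv_sec - start.tv_sec) +
               (now.tv_usec - start.tv_usec) * 1e-6;
    }

Callers that subtract two results still get the elapsed time as before,
but the values themselves stay small.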

Not sure if CPU time is what the benchmarks want. It does not include
time spent blocked waiting for the X server, the GPU or file reads.
Elapsed real time is probably more useful.
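
For comparison, with clock_gettime() the two choices look like this
(where available; shown only to illustrate the distinction, not what the
current gettime() does):

    struct timespec ts;

    /* elapsed real time: keeps advancing while we are blocked on the
     * X server, the GPU or file I/O */
    clock_gettime (CLOCK_MONOTONIC, &ts);

    /* CPU time: only advances while this process is actually running */
    clock_gettime (CLOCK_PROCESS_CPUTIME_ID, &ts);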


On Tue, Jun 2, 2015 at 9:03 AM, Ben Avison <bavison at riscosopen.org> wrote:

> On Tue, 02 Jun 2015 08:32:35 +0100, Pekka Paalanen <ppaalanen at gmail.com>
> wrote:
>
>> most pixman performance benchmarks currently rely on gettime() from
>> test/util.[ch]:
>> - lowlevel-blt-bench
>> - prng-test
>> - radial-perf-test
>> - scaling-bench
>>
>> Furthermore, affine-bench has its own gettimei() which is essentially
>> gettime() but with uint32_t instead of double.
>>
>
> For what it's worth, here's my opinion. I'll sidestep the issue of
> *which* underlying system clock is read for now, and look at data types.
>
> It certainly makes more sense to use doubles than floats for holding
> absolute times. As of 2005-09-05 05:58:26 UTC, the number of microseconds
> elapsed since 1970-01-01 00:00:00 UTC has been expressible as a 51-bit
> integer. The next time that changes will be 2041-05-10 11:56:53 UTC, when
> that goes up to a 52-bit integer.
>
> IEEE double-precision floating point numbers use a 52-bit mantissa, so
> they are capable of expressing all 51- and 52-bit integers without any
> loss of precision. In fact, we don't lose precision until we reach 54-bit
> integers (because the mantissa excludes the implicit leading '1' bit):
> after 2255-06-05 23:47:34 UTC the times would start being rounded to an
> even number of microseconds.
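>
> (A throwaway check of that boundary, if anyone wants to convince
> themselves - nothing to do with pixman itself:
>
>     #include <assert.h>
>     #include <stdint.h>
>
>     int
>     main (void)
>     {
>         uint64_t a = (1ULL << 53) - 1;       /* largest 53-bit integer */
>         uint64_t b = (1ULL << 53) + 1;       /* needs 54 bits          */
>         assert ((uint64_t) (double) a == a); /* round-trips exactly    */
>         assert ((uint64_t) (double) b != b); /* rounds down to 2^53    */
>         return 0;
>     }
>
> 2^53 microseconds is about 9.0e9 seconds, or roughly 285 years, which
> is where that 2255 date comes from.)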
>
> With only 23 mantissa bits in single-precision, times would currently
> be rounded with a granularity of over 2 minutes - unworkable for most
> purposes.
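>
> (Concretely: mid-2015 is about 1.4e15 microseconds after the epoch, a
> 51-bit number, so with only 23 stored mantissa bits the spacing between
> adjacent representable values is 2^(50-23) = 2^27 microseconds, or
> roughly 134 seconds.)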
>
> Even dividing by 1000000, as gettime() does, is fairly harmless with
> double-precision floating point - all you're really doing is subtracting
> 20 from the exponent and adding a few multiples of the upper bits of the
> mantissa into the lower bits.
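>
> (For reference, the gettimeofday() path of gettime() boils down to
> roughly this - paraphrasing rather than quoting the real thing:
>
>     struct timeval tv;
>
>     gettimeofday (&tv, NULL);
>     return (double) ((int64_t) tv.tv_sec * 1000000 + tv.tv_usec)
>            / 1000000.0;
>
> so the division by 1000000 is the only floating-point step, and as
> argued above it is harmless.)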
>
> But this is ignoring the fact that underneath we're calling
> gettimeofday(), which suffers from a perennial problem with clock APIs,
> the use of an absolute time expressed as an integer which is liable to
> overflow. There are a limited number of transformations you can safely
> perform on these - subtracting one from another is notable as a useful
> and safe operation (assuming the time interval is less than the maximum
> integer expressible, which will normally be the case).
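>
> (The standard illustration, using a 32-bit counter that has just
> wrapped - nothing pixman-specific:
>
>     uint32_t before = 0xfffffff0;      /* sampled just before the wrap */
>     uint32_t after  = 0x00000010;      /* sampled just after the wrap  */
>     uint32_t delta  = after - before;  /* 0x20: modular arithmetic still
>                                           yields the correct interval  */
>
> whereas most other operations on the absolute values are ruined by the
> missing upper bits.)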
>
> Assigning the time to a variable of wider type (such as assigning the
> long int tv_sec to a uint64_t) is *not* safe, unless you have a reference
> example of a nearby time that's already in the wider type, from which you
> can infer the most significant bits. There is no provision in the API as
> defined to pass in any such reference value, and when gettime() assigns
> the time to a double, that's effectively a very wide type indeed because
> it can hold the equivalent of an integer over 1000 bits long.
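>
> (For example, if the true value were 0x100000010 but the variable only
> carries the low 32 bits:
>
>     uint32_t wrapped = 0x00000010;
>     uint64_t widened = wrapped;   /* 0x10, not 0x100000010 - the upper
>                                      bits are simply gone              */
>
> and no amount of casting brings them back without a nearby reference
> value.)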
>
> Assuming 'long int' continues to be a signed 32-bit type, as it
> commonly is with today's compilers, tv_sec will suffer signed
> overflow on 2038-01-19 03:14:08 UTC, which will hit long before we start
> losing precision for doubles. That's only 23 years away now, still within
> the careers of many of today's engineers.
>
> Dividing an integer absolute time is also no good, because differing
> values of the overflowed upper bits would completely scramble all the
> lower bits. gettimei() gets away with it in the #ifndef HAVE_GETTIMEOFDAY
> clause because CLOCKS_PER_SEC is normally 1000000 so the multiplication
> and division cancel each other out. Multiplication and addition, on the
> other hand, are OK so long as you don't widen the type because the
> missing upper bits only affect other missing upper bits in the result -
> which is why gettimei() multiplies tv_sec by 1000000 and adds tv_usec. The
> output of the function is safe to use to calculate time intervals so long
> as the interval doesn't exceed +/- 2^31 microseconds (about 35 minutes).
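>
> (gettimei() itself, paraphrased rather than quoted from affine-bench:
>
>     uint32_t
>     gettimei (void)
>     {
>     #ifdef HAVE_GETTIMEOFDAY
>         struct timeval tv;
>
>         gettimeofday (&tv, NULL);
>         return tv.tv_sec * 1000000 + tv.tv_usec;
>     #else
>         return (uint64_t) clock () * 1000000 / CLOCKS_PER_SEC;
>     #endif
>     }
>
> with the wrap of the multiplication being deliberate and harmless for
> interval arithmetic.)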
>
> If I were to make one change to gettimei() now, looking back, it would be
> to make it return int32_t instead. This is because most often you'd be
> subtracting two sample outputs of the function, and it's more often
> useful to consider time intervals as signed (say if you're comparing the
> current time against a timeout time which may be in the past or the
> future). If gettimei() returns a signed integer, then C's type promotion
> rules make the result of the subtraction signed automatically.
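>
> To put that in code, the patterns a signed return type is aimed at look
> like this today, with the casts written out (variable names made up):
>
>     uint32_t t0 = gettimei ();
>     /* ... run the thing being measured ... */
>     int32_t elapsed_us = (int32_t) (gettimei () - t0);
>
>     uint32_t timeout_us = 5 * 1000000;   /* e.g. five seconds */
>     uint32_t deadline   = t0 + timeout_us;
>     if ((int32_t) (gettimei () - deadline) >= 0)
>         ; /* deadline has passed, whether or not the counter wrapped */
>
> An int32_t return type would make the casts unnecessary, subject to the
> same +/- 2^31 microsecond limit either way.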
>
> Ben
>