[Pixman] RFC: Pixman benchmark CPU time measurement
Pekka Paalanen
ppaalanen at gmail.com
Thu Jun 4 01:41:02 PDT 2015
On Wed, 03 Jun 2015 17:47:47 +0100
"Ben Avison" <bavison at riscosopen.org> wrote:
> On Wed, 03 Jun 2015 08:51:25 +0100, Pekka Paalanen <ppaalanen at gmail.com>
> wrote:
> > If we fixed gettime() for clock() wraparounds rather than ignoring them,
> > I suppose there would be no reason to have gettimei().
> >
> > Ben?
>
> Well, that would help, but I still don't like the idea of using floating
> point in the middle of benchmarking loops when an integer version works
> just as well and floating point doesn't really gain you anything. Even the
> division by 1000000 is nearly always undone by the time we reach the
> printf because megapixels per second or megabytes per second are practical
> units - and those are the same things as pixels or bytes per microsecond.
> Nobody is currently doing more to the times than adding or subtracting
> them.
>
> I know they're increasingly rare these days, but a machine with no
> hardware FPU might take an appreciable time to do the integer-to-
> floating-point conversion and floating-point maths. Even if you have an FPU, it
> might be powered down on each context switch and only have its state
> restored lazily on the first floating point instruction encountered,
> resulting in a random timing element. In both cases this can be avoided
> by sticking to integers.
That is all the more reason to rewrite gettime() in terms of integers, then.
If gettime() internally stored its own epoch and returned times
starting from the first call, it'd be less likely to cause integer
overflows in callers. With integers though, unless using 64-bit, we'd
have to pick the resolution and wraparound time when designing the API.
Printing 64-bit values is a bit of a hassle with PRId64 etc.
I think having different timing functions for different tests is
unexpected in any case, and IMHO slightly worse than using 'double'.
I agree with Siarhei that 'double' is very convenient for
calculations and printing. And you have both shown that, for our uses,
there are no precision issues.
The same arguments that invalidate my proposal to use more accurate
timing functions also show that the floating point overhead does not
matter.
At some point I might propose a patch to fix gettime() internally by
returning times starting from zero, and remove gettimei().
Thanks,
pq