[Pixman] RFC: Pixman benchmark CPU time measurement

Ben Avison bavison at riscosopen.org
Wed Jun 3 09:47:47 PDT 2015


On Wed, 03 Jun 2015 08:51:25 +0100, Pekka Paalanen <ppaalanen at gmail.com>
wrote:
> If we fixed gettime() for clock() wraparounds rather than ignoring them,
> I suppose there would be no reason to have gettimei().
>
> Ben?

Well, that would help, but I still don't like the idea of using floating
point in the middle of benchmarking loops when an integer version works
just as well and floating point doesn't gain you anything. Even the
division by 1000000 is nearly always undone by the time we reach the
printf, because megapixels per second or megabytes per second are the
practical units - and those are the same thing as pixels or bytes per
microsecond. Nobody is currently doing anything more to the times than
adding or subtracting them.
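
To make that concrete, here is a minimal sketch of the sort of thing I
mean, assuming a gettimei() that returns microseconds in a uint32_t
(bench_func() and the pixel count are placeholders for illustration, not
real Pixman code):

#include <stdint.h>
#include <stdio.h>
#include <sys/time.h>

/* Microsecond timer returning a uint32_t; it wraps every ~71.6 minutes,
 * but unsigned subtraction still gives correct intervals across a
 * single wrap. */
static uint32_t
gettimei (void)
{
    struct timeval tv;

    gettimeofday (&tv, NULL);
    return (uint32_t) tv.tv_sec * 1000000 + tv.tv_usec;
}

static void
bench_func (void)
{
    /* Placeholder for the blit under test: spin long enough that the
     * elapsed time is nonzero. */
    volatile uint32_t i;

    for (i = 0; i < 100000000; i++)
        ;
}

int
main (void)
{
    uint32_t n_pixels = 1920 * 1080;
    uint32_t t0, t1;

    t0 = gettimei ();
    bench_func ();
    t1 = gettimei ();

    /* Integer subtraction only inside the timed region; pixels per
     * microsecond is the same thing as megapixels per second, so
     * floating point first appears at the printf. */
    printf ("%.2f Mpix/s\n", (double) n_pixels / (t1 - t0));

    return 0;
}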

I know they're increasingly rare these days, but a machine with no
hardware FPU might take an appreciable time to do the
integer-to-floating-point conversion and the floating point maths. Even
if you have an FPU, it might be powered down on each context switch and
only have its state restored lazily on the first floating point
instruction encountered, resulting in a random timing element. In both
cases this can be avoided by sticking to integers.
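
For comparison, a floating-point timer along the lines of the existing
gettime() looks something like this (again just a sketch from memory,
so treat the details as approximate) - the conversion and division are
exactly the FPU work that ends up inside the timing loop:

#include <stdint.h>
#include <sys/time.h>

/* Seconds as a double: every call costs an int64-to-double conversion
 * and a double division. */
static double
gettime (void)
{
    struct timeval tv;

    gettimeofday (&tv, NULL);
    return (double) ((int64_t) tv.tv_sec * 1000000 + tv.tv_usec) / 1000000.0;
}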

Ben

