[Intel-gfx] FPS performance increase when deliberately spinning the CPU with an unrelated task
Peter Clifton
pcjc2 at cam.ac.uk
Sat Oct 23 14:02:35 CEST 2010
Hi guys,
This is something I've noted before, and I think Keith P replied with
some idea of what might be causing it, but I can't recall exactly. I
just thought I'd mention it again in case it struck a chord with anyone.
I'm running my app here, which is on a benchmark test, banging out
frames as fast as the poor thing can manage. It is not CPU bound (it is
using about 50% CPU).
I'm getting 12 fps.
Now I run a devious little test app, "loop", in parallel:
int main( int argc, char **argv )
{
    while (1);
}
Re-run the benchmark and I get 19.2 fps. (NICE).
Suspecting cpufreq scaling, I swapped the ondemand governor for
performance.
Strangely:
pcjc2 at pcjc2lap:/sys/devices/system/cpu/cpu1/cpufreq$ cat scaling_available_frequencies
2401000 2400000 1600000 800000
yet I only ever get:
sudo cat cpuinfo_cur_freq
2400000
(Never mind)
I repeated the setting for the other core of the Core2 Duo.
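For reference, the governor switch can be applied to every core in one
go. A minimal sketch, assuming the usual sysfs cpufreq layout (paths as
in the output above) and root access:

```shell
# Set the "performance" governor on every CPU (needs root).
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    echo performance | sudo tee "$cpu/cpufreq/scaling_governor"
done

# Check the result: every core should report its top frequency.
cat /sys/devices/system/cpu/cpu*/cpufreq/cpuinfo_cur_freq
```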
Now, without my "loop" program running, I get 17.6 fps right off.
WITH my "loop" program running, I get 18.2 fps.
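Until something smarter exists, the "loop" trick can be wrapped around a
run from the shell so the spinner only lives as long as the benchmark. A
hedged sketch, where `sleep 1` is just a stand-in for the real
GPU-bound workload:

```shell
# Stopgap: keep one low-priority spinner alive for the duration of a
# GPU-bound run, so the governor holds the CPU at its top frequency.
nice -n 19 sh -c 'while :; do :; done' &   # same idea as the "loop" program
boost_pid=$!

sleep 1    # stand-in for the real GPU-bound benchmark

kill "$boost_pid"
wait "$boost_pid" 2>/dev/null || true      # reap the spinner
```

Running the spinner at nice 19 means it should only soak up cycles the
benchmark was not going to use anyway.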
I think Keith was thinking that there are some parts of the chipset
which are shared between the GPU and CPU (memory controllers?), and the
CPU entering a lower frequency state could have a detrimental effect on
the graphics throughput.
I know that in heavy workloads the CPU is likely to be "a bit" busy and
rendering will not be totally GPU bound, but it seems it will eventually
be necessary to have some hook to bump the CPU frequency (or chipset
frequency?) when the GPU could make beneficial use of the extra
throughput.
This wouldn't matter if the GPU were banging out 100 fps, but for my
stuff the GPU is struggling to make 5 fps on some complex circuit
boards. I'm trying to address that from a geometry / rendering
complexity point of view, but I'd also love to see my laptop get the
best out of its hardware.
Perhaps we need to account for periods when CPU tasks sit idle waiting
for GPU operations that would be sped up by raising some chip power
state.
I'm probably not up to coding all of this, but if the idea sounds
feasible I'd love to know, so I can have a tinker with it.
Best regards,
--
Peter Clifton
Electrical Engineering Division,
Engineering Department,
University of Cambridge,
9, JJ Thomson Avenue,
Cambridge
CB3 0FA
Tel: +44 (0)7729 980173 - (No signal in the lab!)
Tel: +44 (0)1223 748328 - (Shared lab phone, ask for me)