[Mesa-dev] Performance increase with debug option to flush batchbuffer

Peter Clifton pcjc2 at cam.ac.uk
Mon Oct 11 14:33:10 PDT 2010


Hi All,

I'm currently trying to squeeze some badly needed extra performance from
an open source circuit board layout design package which I've ported to
use GL for rendering.

My laptop has an Intel GM45 chipset, and I noticed an odd side-effect
of one of the debugging options, "Enable flushing batchbuffer after
each draw call": it yields an increase in performance. I was actually
expecting a decrease, and was expecting the GPU rendering time to show
up in my system-wide profiling under the graphics calls that incurred
it. (Perhaps I misunderstand how / whether a batchbuffer flush causes
GPU / CPU synchronisation.)
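For reference, I believe the driconf option behind that checkbox is
"always_flush_batch" (name from memory, so treat it as approximate);
the equivalent ~/.drirc entry would be something like:

    <driconf>
      <device screen="0" driver="i965">
        <application name="Default">
          <option name="always_flush_batch" value="true"/>
        </application>
      </device>
    </driconf>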

With this option set, intel_gpu_top shows the ring buffer spending a
smaller percentage of its time idle, and even in the case where I
cache a display list and repeatedly render it (where ring-buffer idle
was already ~0%), there was an increase in frames per second.
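In case it matters, the cached path is nothing exotic -- roughly the
following (simplified; draw_board_geometry() is just a stand-in for
the real drawing code):

    #include <GL/gl.h>

    /* Build once: record the board geometry into a display list. */
    GLuint board_list = glGenLists(1);
    glNewList(board_list, GL_COMPILE);
    draw_board_geometry();   /* stand-in for the real draw calls */
    glEndList();

    /* Per frame: replay the recorded commands. */
    glCallList(board_list);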

Does this point to some deadlock / synchronisation issue between the
GPU and CPU, one which starting rendering earlier (due to the
batchbuffer flush) happens to resolve?
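One experiment I could try from the application side, if it would tell
us anything, is an explicit glFlush() after each chunk of drawing, to
see whether simply getting work submitted to the GPU earlier
reproduces the speedup without the debug option, e.g.:

    draw_layer(layer);   /* stand-in for the GL calls for one layer */
    glFlush();           /* ask the driver to submit queued commands now */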

For all these tests I've had vblank syncing disabled, and was seeing
render rates of under about 20 fps (not stupid numbers). The fastest
(simple) test case I have runs at about 120 fps, with a display list
alleviating the majority of the CPU load.

Any light people could shed would be much appreciated.


Best regards,


-- 
Peter Clifton

Electrical Engineering Division,
Engineering Department,
University of Cambridge,
9, JJ Thomson Avenue,
Cambridge
CB3 0FA

Tel: +44 (0)7729 980173 - (No signal in the lab!)
Tel: +44 (0)1223 748328 - (Shared lab phone, ask for me)


