<br><br><div class="gmail_quote">On Thu, Jan 17, 2013 at 10:37 AM, Brian Paul <span dir="ltr"><<a href="mailto:brianp@vmware.com" target="_blank">brianp@vmware.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
> In compiler.h we define the likely(), unlikely() macros which wrap
> GCC's __builtin_expect(). But we only use them in a handful of places.
>
> It seems to me that an obvious place to possibly use these would be
> for GL error testing. For example, in glDrawArrays():
>
>    if (unlikely(count <= 0)) {
>       _mesa_error();
>    }
>
> Plus, in some of the glBegin/End per-vertex calls such as
> glVertexAttrib3fARB() where we error test the index parameter.
>
> I guess the key question is how much might we gain from this. I don't
> really have a good feel for the value at this level. In a tight inner
> loop, sure, but the GL error checking is pretty high-level code.

This is basically a micro-optimization, to be honest. Not that
micro-optimization is "bad", but while it should "improve" performance,
it would take a lot for that to show up on profiles. In the case of
error checking at the start of a function, you might be lucky to save a
few cycles -- virtually unnoticeable.
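
For reference, the macros in question follow the usual GCC pattern --
roughly the sketch below; the exact definitions in Mesa's compiler.h
may differ in detail (e.g. how non-GCC compilers are handled):

---
/* Hint to the compiler which way a condition is expected to go.
 * Without __builtin_expect these collapse to the bare condition. */
#ifdef __GNUC__
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)
#else
#define likely(x)   (x)
#define unlikely(x) (x)
#endif
---

So unlikely(count <= 0) only tells GCC that the error path is the cold
one; it doesn't make the check itself any cheaper.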

> I haven't found much on the web about performance gains from
> __builtin_expect(). Anyone?

I read a few hearsay posts, but this one comes with actual numbers:

http://blog.man7.org/2012/10/how-much-do-builtinexpect-likely-and.html

Long story short: if you're wrong, it's slower; if you're right, it's a
marginal improvement.

Its use is to change the ordering of branches from GCC's default, which
assumes linear execution. For example, code like this:

---
if (A == NULL)   // not likely
   return ERR_NULL;

if (B >= MAX)    // not likely
   return ERR_MAX;

if (C < MIN)     // not likely
   return ERR_MIN;

doStuff();
---

generates jumps around the return statements, so in the normal case you
are taking a jump, which can mean a delay and possibly refetching
instructions. If you don't jump, the CPU will already have the "then"
part loaded in the icache. The "optimal" ordering is then:

---
if (A != NULL) {
   if (B < MAX) {
      if (C >= MIN) {
         doStuff();
      }
      else return ERR_MIN;
   }
   else return ERR_MAX;
}
else return ERR_NULL;
---

In the common case the code does not branch, but executes a linear
stream of instructions. On modern x86 CPUs this matters very little,
except for maybe a few in-order CPUs (Intel Atom, perhaps?). You are
probably a lot more likely to get some improvement on non-x86
architectures where branch prediction is weaker or unavailable and/or
the CPU is in-order; ARM and older SPARC CPUs come to mind.
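
This is where the likely()/unlikely() macros come in: instead of
restructuring the source by hand, you annotate the conditions and let
GCC choose the layout. A sketch of the same (hypothetical) example,
assuming the __builtin_expect-based definitions above:

---
if (unlikely(A == NULL))
   return ERR_NULL;

if (unlikely(B >= MAX))
   return ERR_MAX;

if (unlikely(C < MIN))
   return ERR_MIN;

doStuff();
---

With the hints, GCC should treat the error returns as cold and move
them out of the fall-through path, much like the hand-reordered
version.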
<div>"br.call.sptk.many" Branch / Call / Static Predict Taken / Many Times, which gcc can take advantage of. Still overall, this is well within the realm of micro-optimization.</div><div> </div><div>Patrick</div>
</div>