<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Feb 16, 2015 at 12:02 PM, Francisco Jerez <span dir="ltr"><<a href="mailto:currojerez@riseup.net" target="_blank">currojerez@riseup.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">Jason Ekstrand <<a href="mailto:jason@jlekstrand.net">jason@jlekstrand.net</a>> writes:<br>
<br>
> On Mon, Feb 16, 2015 at 10:40 AM, Francisco Jerez <<a href="mailto:currojerez@riseup.net">currojerez@riseup.net</a>><br>
> wrote:<br>
><br>
>> Jason Ekstrand <<a href="mailto:jason@jlekstrand.net">jason@jlekstrand.net</a>> writes:<br>
>><br>
>> > On Feb 16, 2015 9:34 AM, "Francisco Jerez" <<a href="mailto:currojerez@riseup.net">currojerez@riseup.net</a>><br>
>> wrote:<br>
>> >><br>
>> >> Jason Ekstrand <<a href="mailto:jason@jlekstrand.net">jason@jlekstrand.net</a>> writes:<br>
>> >><br>
>> >> > On Feb 16, 2015 8:35 AM, "Francisco Jerez" <<a href="mailto:currojerez@riseup.net">currojerez@riseup.net</a>><br>
>> > wrote:<br>
>> >> >><br>
>> >> >> The round-robin allocation strategy is expected to decrease the<br>
>> amount<br>
>> >> >> of false dependencies created by the register allocator and give the<br>
>> >> >> post-RA scheduling pass more freedom to move instructions around. On<br>
>> >> >> the other hand it has the disadvantage of increasing fragmentation<br>
>> and<br>
>> >> >> decreasing the number of equally-colored nearby nodes, which increases<br>
>> >> >> the likelihood of failure in the presence of optimistically colorable<br>
>> >> >> nodes.<br>
>> >> >><br>
>> >> >> This patch disables the round-robin strategy for optimistically<br>
>> >> >> colorable nodes. These typically arise in situations of high<br>
>> register<br>
>> >> >> pressure or for registers with large live intervals; in both cases<br>
>> the<br>
>> >> >> task of the instruction scheduler shouldn't be constrained<br>
>> excessively<br>
>> >> >> by the dense packing of those nodes, and a spill (or on Intel<br>
>> hardware<br>
>> >> >> a fall-back to SIMD8 mode) is invariably worse than a slightly less<br>
>> >> >> optimal scheduling.<br>
>> >> ><br>
>> >> > Actually, that's not true. Matt was doing some experiments recently<br>
>> > with a<br>
>> >> > noise shader from synmark and the difference between our 2nd and 3rd<br>
>> > choice<br>
>> >> > schedulers is huge. In that test he disabled the third choice<br>
>> scheduler<br>
>> >> > and the result was a shader that spilled 6 or 8 times but ran<br>
>> something<br>
>> >> > like 30% faster. We really need to do some more experimentation with<br>
>> >> > scheduling and figure out better heuristics than "SIMD16 is always<br>
>> > faster"<br>
>> >> > and "spilling is bad".<br>
>> >> ><br>
>> >><br>
>> >> Yes, I'm aware of rare corner cases like that where e.g. SIMD16 leads to<br>
>> >> higher cache thrashing than SIMD8, leading to decreased overall<br>
>> >> performance, and a case where a shader SIMD16 *with* spills has better<br>
>> >> performance than the SIMD8 version of the same shader without spills.<br>
>> >><br>
>> >> In any case it's not the register allocator's business to implement such<br>
>> >> heuristics, and that's not an argument against the register allocator<br>
>> >> trying to make a more efficient use of the register file.<br>
>> ><br>
>> > The primary point I was trying to make is that scheduling *does* matter.<br>
>> > It matters a lot. In fact, Matt and I have talked about throwing away<br>
>> the<br>
>> > SIMD16 program if it ends up using the pessimal scheduling algorithm.<br>
>> > Throwing scheduling to the wind just to gain a few SIMD16 programs is<br>
>> > probably not a good trade-off.<br>
>> ><br>
>> In my experience the exact opposite observation has been far more<br>
>> common. Running SIMD16 vs SIMD8 has a larger impact on performance than<br>
>> the way you end up scheduling things post-regalloc. Actually even if<br>
>> you end up causing some unmet instruction dependencies by the way<br>
>> instructions are scheduled post-regalloc, the EU can context-switch to<br>
>> service the next available thread almost for free when a thread stalls<br>
>> on some dependency. Also the fact that you're doing SIMD16 itself makes<br>
>> post-regalloc scheduling less important because it naturally helps hide<br>
>> latency.<br>
>><br>
>> My intuition is that the huge performance improvement Matt observed by<br>
>> disabling the third scheduling heuristic is more likely to have been<br>
>> caused by a decrease in the amount of cache thrashing caused by the fact<br>
>> that he was running fewer channels concurrently rather than by the<br>
>> scheduling heuristic itself. Matt, did you rule out that possibility?<br>
>><br>
>> The other thing is this patch has an effect on the allocation strategy<br>
>> for optimistically colorable nodes *only*. We're already heavily<br>
>> constrained by register pressure when we get to that point, and assuming<br>
>> allocation succeeds the post-regalloc scheduler is going to have little<br>
>> room for maneuvering anyway.<br>
>><br>
>> > It could be that this is a good idea, but it's going to take more than<br>
>> > hand-waved theories about register allocation and one shader not spilling to<br>
>> > convince me. Do you actually know what it did to scheduling? It<br>
>> wouldn't<br>
>> > be hard to hack up the driver and shader-db to collect that information.<br>
>> ><br>
>> 44 shaders going SIMD16 seems like a strong enough argument to me.<br>
>> Could you be more precise about what additional information you want me<br>
>> to collect?<br>
>><br>
><br>
> How many shaders go from the first scheduling method to the second or to<br>
> the third. In other words, some sort of metric on which shaders are<br>
> "helped" or "hurt" in their scheduling.<br>
<br>
</div></div>OK, I hacked the driver to output the scheduling heuristic that had been<br>
used when we allocated registers successfully for the program via<br>
KHR_debug and then ran shader-db before and after applying this patch.<br>
<br>
Before this patch:<br>
Heuristic SIMD8 SIMD16<br>
PRE 18924 18598<br>
PRE_NON_LIFO 72 38<br>
PRE_LIFO 8 3<br>
<br>
After this patch:<br>
Heuristic SIMD8 SIMD16<br>
PRE 18939 18643<br>
PRE_NON_LIFO 57 37<br>
PRE_LIFO 8 3<br>
<br>
So it actually *decreases* the number of shaders falling back to the<br>
latency-insensitive heuristics because the register allocator is more<br>
likely to succeed with the PRE heuristic.<br><div class="HOEnZb"><div class="h5"></div></div></blockquote><div><br></div><div>Yeah, I guess it only affects post-allocation scheduling and accidental dependencies (caused by two instructions happening to use the same register). Sorry for not thinking about that earlier. In that case, I have no problem with the patch but I can't say I've reviewed it either. I'll try and take a look at it tomorrow and do a real review.<br></div><div>--Jason<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">
><br>
><br>
>> > --Jason<br>
>> ><br>
>> >> >> Shader-db results on the i965 driver:<br>
>> >> >><br>
>> >> >> total instructions in shared programs: 5488539 -> 5488489 (-0.00%)<br>
>> >> >> instructions in affected programs: 1121 -> 1071 (-4.46%)<br>
>> >> >> helped: 1<br>
>> >> >> HURT: 0<br>
>> >> >> GAINED: 49<br>
>> >> >> LOST: 5<br>
>> >> >> ---<br>
>> >> >> src/util/register_allocate.c | 22 +++++++++++++++++++++-<br>
>> >> >> 1 file changed, 21 insertions(+), 1 deletion(-)<br>
>> >> >><br>
>> >> >> diff --git a/src/util/register_allocate.c<br>
>> > b/src/util/register_allocate.c<br>
>> >> >> index af7a20c..d63d8eb 100644<br>
>> >> >> --- a/src/util/register_allocate.c<br>
>> >> >> +++ b/src/util/register_allocate.c<br>
>> >> >> @@ -168,6 +168,12 @@ struct ra_graph {<br>
>> >> >><br>
>> >> >> unsigned int *stack;<br>
>> >> >> unsigned int stack_count;<br>
>> >> >> +<br>
>> >> >> + /**<br>
>> >> >> + * Tracks the start of the set of optimistically-colored<br>
>> registers<br>
>> > in<br>
>> >> > the<br>
>> >> >> + * stack.<br>
>> >> >> + */<br>
>> >> >> + unsigned int stack_optimistic_start;<br>
>> >> >> };<br>
>> >> >><br>
>> >> >> /**<br>
>> >> >> @@ -454,6 +460,7 @@ static void<br>
>> >> >> ra_simplify(struct ra_graph *g)<br>
>> >> >> {<br>
>> >> >> bool progress = true;<br>
>> >> >> + unsigned int stack_optimistic_start = ~0;<br>
>> >> >> int i;<br>
>> >> >><br>
>> >> >> while (progress) {<br>
>> >> >> @@ -483,12 +490,16 @@ ra_simplify(struct ra_graph *g)<br>
>> >> >><br>
>> >> >> if (!progress && best_optimistic_node != ~0U) {<br>
>> >> >> decrement_q(g, best_optimistic_node);<br>
>> >> >> + stack_optimistic_start =<br>
>> >> >> + MIN2(stack_optimistic_start, g->stack_count);<br>
>> >> >> g->stack[g->stack_count] = best_optimistic_node;<br>
>> >> >> g->stack_count++;<br>
>> >> >> g->nodes[best_optimistic_node].in_stack = true;<br>
>> >> >> progress = true;<br>
>> >> >> }<br>
>> >> >> }<br>
>> >> >> +<br>
>> >> >> + g->stack_optimistic_start = stack_optimistic_start;<br>
>> >> >> }<br>
>> >> >><br>
>> >> >> /**<br>
>> >> >> @@ -542,7 +553,16 @@ ra_select(struct ra_graph *g)<br>
>> >> >> g->nodes[n].reg = r;<br>
>> >> >> g->stack_count--;<br>
>> >> >><br>
>> >> >> - if (g->regs->round_robin)<br>
>> >> >> + /* Rotate the starting point except for optimistically<br>
>> colorable<br>
>> >> > nodes.<br>
>> >> >> + * The likelihood that we will succeed at allocating<br>
>> > optimistically<br>
>> >> >> + * colorable nodes is highly dependent on the way that the<br>
>> > previous<br>
>> >> >> + * nodes popped off the stack are laid out. The round-robin<br>
>> >> > strategy<br>
>> >> >> + * increases the fragmentation of the register file and<br>
>> > decreases<br>
>> >> > the<br>
>> >> >> + * number of nearby nodes assigned to the same color, which<br>
>> >> > increases the<br>
>> >> >> + * likelihood of spilling with respect to the dense packing<br>
>> >> > strategy.<br>
>> >> >> + */<br>
>> >> >> + if (g->regs->round_robin &&<br>
>> >> >> + g->stack_count <= g->stack_optimistic_start)<br>
>> >> >> start_search_reg = r + 1;<br>
>> >> >> }<br>
>> >> >><br>
>> >> >> --<br>
>> >> >> 2.1.3<br>
>> >> >><br>
>> >> >> _______________________________________________<br>
>> >> >> mesa-dev mailing list<br>
>> >> >> <a href="mailto:mesa-dev@lists.freedesktop.org">mesa-dev@lists.freedesktop.org</a><br>
>> >> >> <a href="http://lists.freedesktop.org/mailman/listinfo/mesa-dev" target="_blank">http://lists.freedesktop.org/mailman/listinfo/mesa-dev</a><br>
>><br>
</div></div></blockquote></div><br></div></div>
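[Editor's note] The color-selection gating the patch adds to ra_select() can be sketched in isolation as follows. This is a minimal illustrative model under stated assumptions, not the actual Mesa code: select_reg(), NUM_REGS, and the simplification that any free register is a valid color for any node are all hypothetical.

```c
/*
 * Minimal illustrative model of the color-selection gating discussed in
 * the patch -- NOT the actual Mesa code.  select_reg(), NUM_REGS, and the
 * simplifying assumption that any free register is a valid color for any
 * node are all hypothetical.
 */
#include <stdbool.h>

#define NUM_REGS 4

/* Pick the first free register, searching from *start_search_reg.  With
 * round-robin enabled, advance the starting point past the chosen
 * register -- except for optimistically colored nodes, for which we keep
 * packing densely from the unchanged starting point. */
int
select_reg(const bool used[NUM_REGS], bool round_robin, bool optimistic,
           unsigned *start_search_reg)
{
   for (unsigned i = 0; i < NUM_REGS; i++) {
      unsigned r = (*start_search_reg + i) % NUM_REGS;
      if (!used[r]) {
         if (round_robin && !optimistic)
            *start_search_reg = r + 1;   /* rotate for ordinary nodes */
         return (int)r;                  /* dense packing otherwise */
      }
   }
   return -1;   /* no color available: this node would spill */
}
```

In the quoted patch the role of `!optimistic` is played by the test `g->stack_count <= g->stack_optimistic_start`, where `stack_optimistic_start` records where the first optimistically pushed node landed on the select stack.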