[Mesa-dev] [PATCH 1/2] [RFC] i965/vec4: Reward spills in if/else/endif blocks
ben at bwidawsk.net
Fri Jun 19 20:49:15 PDT 2015
On Fri, Jun 19, 2015 at 08:04:51PM -0700, Matt Turner wrote:
> On Fri, Jun 19, 2015 at 6:53 PM, Connor Abbott <cwabbott0 at gmail.com> wrote:
> > I don't think this is doing what you think it's doing. This code is
> > for calculating the *cost* of spills, so a higher cost means a lower
> > priority for choosing the register. We increase the cost for things
> > inside loops because we don't want to spill inside loops, and by doing
> > the same thing for if's you're actually discouraging spills inside an
> > if block.
> Top quoting is bad, m'kay.
> But, I think it is doing what he thinks since he increases costs for
> ENDIF and decreases costs for IF. That is, it's backwards from the
> do/while handling.
> Why this is a good thing to do... I don't know. I'd expect some data
> along with this patch in order to evaluate it properly.
Well, I think the theory was described in the patch, so I'm not sure if you're
disagreeing with the theory, or you missed the theory (you spill less of the
time because you don't always take both branches).
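To make the theory concrete, here is a minimal sketch (not Mesa's actual code; the opcode names, scale values, and the spill_cost_for_uses helper are all illustrative) of the block-scale scheme being discussed: the allocator weights each register use by a running scale, multiplying it on loop entry (DO) and dividing on loop exit (WHILE) so uses inside loops are expensive to spill. The RFC applies the inverse to conditionals, dividing at IF and multiplying at ENDIF, so uses inside a branch, which only sometimes executes, are cheaper to spill:

```cpp
#include <vector>

// Opcodes relevant to block-scale bookkeeping; everything else that
// reads or writes the register of interest is modeled as OP_USE.
enum Opcode { OP_DO, OP_WHILE, OP_IF, OP_ENDIF, OP_USE };

// Accumulate a spill cost for a register across an instruction stream.
// Higher cost means the allocator is less likely to pick this register
// as a spill candidate.
float spill_cost_for_uses(const std::vector<Opcode> &insts)
{
    const float loop_scale = 10.0f; // assumed weight, not Mesa's exact value
    const float if_scale = 2.0f;    // assumed weight for the RFC's idea
    float block_scale = 1.0f;
    float cost = 0.0f;
    for (Opcode op : insts) {
        switch (op) {
        case OP_DO:    block_scale *= loop_scale; break; // loops: costly to spill
        case OP_WHILE: block_scale /= loop_scale; break;
        case OP_IF:    block_scale /= if_scale;   break; // branches: cheap to spill
        case OP_ENDIF: block_scale *= if_scale;   break;
        case OP_USE:   cost += block_scale;       break;
        }
    }
    return cost;
}
```

With these assumed weights, a use inside a loop contributes 10x the cost of a straight-line use, while a use inside an if block contributes half, which is the "reward" in the patch title.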
As for data... I made the patch RFC for a reason :-). I noticed a lot of the
previous spilling related patches used shader-db as a measure, however, I don't
think that's a good measure for spills in many cases (do/while is exactly such
an example). As I mentioned in the commit as well, there are certainly cases
where I could see shader size increasing, but not actual execution time. So if
there are real benchmarks I can run, which spill, I am happy to do that - but I
don't see any value in me spending time doing anything else. I see shader-db as
a good thing to run to make sure it doesn't blow up every test, and that's about
all. I'm content to leave this as an RFC indefinitely. I'm under the impression
that optimizing the spill cases isn't super critical anyway.