[Mesa-stable] [Mesa-dev] [PATCH] i965/fs: Extend the live ranges of VGRFs which leave loops
Connor Abbott
cwabbott0 at gmail.com
Tue Oct 10 16:39:42 UTC 2017
No, this is a different situation. On i965, the hardware keeps track
of that stuff automatically. The problem is that the header, which is
shared across all the threads in a wavefront and specifies stuff like
LOD, LOD bias, texture array offset, etc., uses the same register
space as normal, vector registers. So, to set it up before we do the
texture, we have to disable the usual exec masking stuff and just
write to certain channels of some random register indiscriminately,
regardless of whether the execution mask is enabled for them. This
then leads to the problems described.
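To make the hazard concrete, here is a toy model of a masked write versus a WE_all-style write (illustrative C++ only; the register type and function names are made up, not actual i965 compiler API):

```cpp
#include <array>
#include <cstdint>

// Toy 8-channel SIMD register. Purely illustrative; not a real i965 type.
using Reg = std::array<int, 8>;

// Normal write: only channels enabled in the execution mask are touched,
// so inactive channels keep whatever value they already held.
inline void masked_write(Reg &dst, const Reg &src, uint8_t exec_mask) {
   for (int ch = 0; ch < 8; ch++)
      if (exec_mask & (1u << ch))
         dst[ch] = src[ch];
}

// WE_all-style write: ignores the execution mask entirely, the way a
// message header setup does, so every channel gets clobbered -- including
// channels that still hold a live value for something else.
inline void we_all_write(Reg &dst, const Reg &src) {
   for (int ch = 0; ch < 8; ch++)
      dst[ch] = src[ch];
}
```

If the register allocator puts a texture header on top of a register whose inactive channels still carry live data, the we_all_write() above is exactly the stomp being described.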
I'm not sure how nvidia handles it, but they probably do something
much more sane like AMD. Although it turns out that you probably have
to deal with this problem anyways when implementing the new subgroup
reduction stuff, since the implementation usually involves some kind
of exec-mask-ignoring write.
On Tue, Oct 10, 2017 at 12:29 PM, Ilia Mirkin <imirkin at alum.mit.edu> wrote:
> I hope I'm not butting in too much with irrelevant info, but I think
> we had a similar issue in nouveau. On Kepler, texture instructions
> take an arbitrary amount of time to complete, and only write into
> destination registers on completion, while other instructions are
> executing after that tex dispatch. [You have to insert a barrier to
> force a wait for the tex to complete.]
>
> We had a clever thing that figured things out based on texture result
> uses and worked for reasonable cases. However the tricky case to
> handle turned out to be
>
> color = texture();
> if (a) {
>     use color
> } else {
>     dont use color
> }
>
> In that situation, the texture call would randomly overwrite registers
> when we went into the else case, since nothing used the texture
> results and that wasn't properly tracked.
>
> I know what you're going to say - just code-motion the texture into
> the if. But that's not always possible -- the actual original
> situation also added loops with conditional texture calls to
> complicate matters.
>
> Not sure if this is exactly your situation, but thought I'd point it
> out as it may be relevant.
>
> Cheers,
>
> -ilia
>
> On Tue, Oct 10, 2017 at 12:16 PM, Connor Abbott <cwabbott0 at gmail.com> wrote:
>> I'm a little nervous about this, because really, the only solution to
>> this problem is to ignore all non-WE_all definitions of all variables
>> in liveness analysis. For example, in something like:
>>
>> vec4 color2 = ...
>> if (...) {
>>    color2 = texture();
>> }
>>
>> texture() can also overwrite inactive channels of color2. We happen to
>> get this right because we turn live ranges into live intervals without
>> holes, but I can't come up with a good reason why that would save us
>> in all cases except the one in this patch -- which makes me worry that
>> we'll find yet another case where there's a similar problem. I think
>> it would be clearer if we did what I said above, i.e. ignore all
>> non-WE_all definitions, which will make things much worse, but then
>> apply Curro's patch which will return things to pretty much how they
>> were before, except this case will be fixed and maybe some other cases
>> we haven't thought of.
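>> To make that concrete, here's a toy backward liveness pass in that
>> spirit (illustrative C++ only, not the real brw_fs_live_variables
>> code): a def only kills the variable when it writes with WE_all,
>> since a normal masked def leaves live data in the inactive channels.

```cpp
#include <vector>

// Illustrative instruction: at most one def, any number of uses.
struct Inst {
   int def = -1;           // variable defined, or -1 for none
   bool we_all = false;    // def writes all channels, exec mask ignored?
   std::vector<int> uses;  // variables read
};

// Backward liveness over a straight-line block: returns which variables
// are live-in at the top. Only a WE_all def ends a live range; a masked
// def does not, because inactive channels still carry the old value.
std::vector<bool> live_in(const std::vector<Inst> &block, int num_vars) {
   std::vector<bool> live(num_vars, false);
   for (auto it = block.rbegin(); it != block.rend(); ++it) {
      if (it->def >= 0 && it->we_all)
         live[it->def] = false;   // fully redefined: old value dead above
      for (int u : it->uses)
         live[u] = true;          // read here: live above this point
   }
   return live;
}
```

>> With that rule, the masked def of color2 inside the if no longer
>> ends the incoming live range, so nothing else can be allocated on
>> top of those channels.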
>>
>>
>>
>> On Thu, Oct 5, 2017 at 2:52 PM, Jason Ekstrand <jason at jlekstrand.net> wrote:
>>> No Shader-db changes.
>>>
>>> Cc: mesa-stable at lists.freedesktop.org
>>> ---
>>> src/intel/compiler/brw_fs_live_variables.cpp | 55 ++++++++++++++++++++++++++++
>>> 1 file changed, 55 insertions(+)
>>>
>>> diff --git a/src/intel/compiler/brw_fs_live_variables.cpp b/src/intel/compiler/brw_fs_live_variables.cpp
>>> index c449672..380060d 100644
>>> --- a/src/intel/compiler/brw_fs_live_variables.cpp
>>> +++ b/src/intel/compiler/brw_fs_live_variables.cpp
>>> @@ -223,6 +223,61 @@ fs_live_variables::compute_start_end()
>>> }
>>> }
>>> }
>>> +
>>> + /* Due to the explicit way the SIMD data is handled on GEN, we need to be a
>>> + * bit more careful with live ranges and loops. Consider the following
>>> + * example:
>>> + *
>>> + *    vec4 color2;
>>> + *    while (1) {
>>> + *       vec4 color = texture();
>>> + *       if (...) {
>>> + *          color2 = color * 2;
>>> + *          break;
>>> + *       }
>>> + *    }
>>> + *    gl_FragColor = color2;
>>> + *
>>> + * In this case, the definition of color2 dominates the use because the
>>> + * loop only has the one exit. This means that the live range interval for
>>> + * color2 goes from the statement in the if to its use below the loop.
>>> + * Now suppose that the texture operation has a header register that gets
>>> + * assigned one of the registers used for color2. If the loop condition is
>>> + * non-uniform, some of the threads will take the break and others will
>>> + * continue. In this case, on the next pass through the loop, the WE_all
>>> + * setup of the header register will stomp the disabled channels of color2
>>> + * and corrupt the value.
>>> + *
>>> + * This same problem can occur if you have a mix of 64, 32, and 16-bit
>>> + * registers because the channels do not line up or if you have a SIMD16
>>> + * program and the first half of one value overlaps the second half of the
>>> + * other.
>>> + *
>>> + * To solve this problem, we take any VGRFs whose live ranges cross the
>>> + * while instruction of a loop and extend their live ranges to the top of
>>> + * the loop. This more accurately models the hardware because the value in
>>> + * the VGRF needs to be carried through subsequent loop iterations in order
>>> + * to remain valid when we finally do break.
>>> + */
>>> +   foreach_block (block, cfg) {
>>> +      if (block->end()->opcode != BRW_OPCODE_WHILE)
>>> +         continue;
>>> +
>>> +      /* This is a WHILE instruction.  Find the DO block. */
>>> +      bblock_t *do_block = NULL;
>>> +      foreach_list_typed(bblock_link, child_link, link, &block->children) {
>>> +         if (child_link->block->start_ip < block->end_ip) {
>>> +            assert(do_block == NULL);
>>> +            do_block = child_link->block;
>>> +         }
>>> +      }
>>> +      assert(do_block);
>>> +
>>> +      for (int i = 0; i < num_vars; i++) {
>>> +         if (start[i] < block->end_ip && end[i] > block->end_ip)
>>> +            start[i] = MIN2(start[i], do_block->start_ip);
>>> +      }
>>> +   }
>>> }
>>>
>>> fs_live_variables::fs_live_variables(fs_visitor *v, const cfg_t *cfg)
>>> --
>>> 2.5.0.400.gff86faf
>>>
>>> _______________________________________________
>>> mesa-dev mailing list
>>> mesa-dev at lists.freedesktop.org
>>> https://lists.freedesktop.org/mailman/listinfo/mesa-dev