[Mesa-dev] [PATCH 0/3] tgsi, radeonsi: add CANON opcode for float canonicalization
Roland Scheidegger
sroland at vmware.com
Mon Sep 18 17:52:08 UTC 2017
On 18.09.2017 19:11, Roland Scheidegger wrote:
> On 18.09.2017 17:36, Nicolai Hähnle wrote:
>> On 18.09.2017 17:02, Roland Scheidegger wrote:
>>> This looks like a horrendous solution which will break the world - well,
>>> for us :-). Integers would simply cease to work, always being flushed
>>> to zero (bye bye, loop counter...).
>>> The reason is that when you translate from something with an untyped
>>> register file to something typed, the obvious solution is to store
>>> everything as floats, and cast to int/uint as needed (if you'd translate
>>> from tgsi back to glsl, you'd probably do it that way as well).
>>> Hence, you must not flush denorms on float to int/uint casts - which,
>>> btw, is also illegal by glsl as far as I can tell ("Returns a signed or
>>> unsigned integer value representing the encoding of a floating-point
>>> value. The floating-point value's bit-level representation is
>>> preserved.")
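For illustration, a minimal C sketch of that hazard (not from any patch; it assumes IEEE-754 binary32 and models a hypothetical denorm-flushing cast with fpclassify()):

/* An int loop counter kept in a float-typed register has a denormal
 * bit pattern, so a denorm-flushing float-to-int cast wipes it out. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <math.h>

int main(void)
{
   int32_t counter = 3;             /* e.g. a loop counter */
   float f;
   memcpy(&f, &counter, sizeof f);  /* "store as float": 0x00000003 is a denorm */

   /* a hypothetical denorm-flushing cast: */
   float flushed = fpclassify(f) == FP_SUBNORMAL ? 0.0f : f;

   int32_t back;
   memcpy(&back, &flushed, sizeof back);
   printf("counter read back as %d (expected 3)\n", back);  /* prints 0 */
   return 0;
}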
>>
>> How could you possibly know that the value was a denorm to begin with?
>> "Any denormalized value input into a shader or potentially generated by
>> any operation in a shader can be flushed to 0.", which presumably
>> includes intBitsToFloat :-)
> I guess the answer is we sort of know the gpus and their drivers won't
> do anything crazy :-). In the end, they'll all use untyped registers,
> and they do need to work with dx10 in any case, so we just rely on this
> (in particular the bitcasts being no-ops)... Of course, there might be
> issues with this since it's not quite guaranteed by glsl (for instance,
> an overzealous optimization using value range tracking could detect that
> some integer has to be a small negative number, and, since an i2f cast of
> it would yield a NaN, declare everything depending on the result
> undefined, even if that value is only ever used again by an f2i cast).
> GPUs generally don't do any float "data conversion" when just moving
> values around, whether or not that might be allowed by gl.
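FWIW, a small C sketch of that NaN case (illustration only; memcpy stands in for the i2f/f2i bitcasts): a small negative integer viewed as a float really is a NaN, yet the round trip must still give the integer back.

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <math.h>

int main(void)
{
   int32_t i = -1;                  /* bit pattern 0xffffffff */
   float f;
   memcpy(&f, &i, sizeof f);        /* the i2f bitcast: the float view is a NaN */
   printf("isnan(f) = %d\n", isnan(f));   /* prints 1 */

   int32_t back;
   memcpy(&back, &f, sizeof back);  /* the f2i bitcast must preserve the bits */
   printf("back = %d\n", back);     /* must print -1 */
   return 0;
}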
>
>>
>>
>>> (As a side note, for the same reasons we rely on i2f/u2f "doing the
>>> right thing", not messing with bits, albeit this one isn't guaranteed by
>>> glsl. But I'm quite sure we're not the only ones relying on this;
>>> it's quite common practice.)
>>
>> Yeah, fair enough. You've convinced me not to take this approach.
>>
>> We could just flush to zero after min/max, although this would lead to
>> a different result for min() vs. open-coding the comparison and select.
>> That feels pretty dirty as well...
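To spell out that inconsistency, a C sketch (illustration only; the flush helper models a hypothetical post-min flush, and fminf stands in for the hardware min):

#include <stdio.h>
#include <math.h>

/* hypothetical flush applied after min/max only: */
static float flush(float v)
{
   return fpclassify(v) == FP_SUBNORMAL ? 0.0f : v;
}

int main(void)
{
   float x = 3e-39f;                       /* denorm */
   float y = 2e-39f;                       /* smaller denorm */

   float via_min    = flush(fminf(x, y));  /* flushed: 0 */
   float open_coded = (y < x) ? y : x;     /* not flushed: 2e-39 */

   printf("min() gives %g, open-coded select gives %g\n",
          via_min, open_coded);
   return 0;
}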
>>
>>
>>> I don't know what a proper solution would look like though. FWIW d3d10
>>> permits this min/max behavior (the comparison must use denorm flush, but
>>> the result may be denorm-flushed or not), so it's completely ok if you
>>> get the "wrong" result. And given the way glsl is specced wrt denorms
>>> (read: mostly undefined), are you sure it's actually illegal there?
>>
>> It's one of those borderline cases for GLSL. The GLSL spec language is
>> basically what I quoted above, which makes our behavior slightly icky
>> because we flush the input to zero in one place but not in another.
>>
>> The GLSL ES 3.10 spec has this bit though:
>>
>> "Should subnormal numbers (also known as 'denorms') be supported?
>>
>> RESOLUTION: No, subnormal numbers may be flushed to zero at any time."
>>
>> ... which gives much more leeway. Together with the D3D10 behavior, this
>> is a good argument for changing the test. I'm going to look into that.
> One reasoning could also be that in nearly all cases it doesn't really
> matter which number you pick for min/max: if you do some arithmetic
> with the result, it will get denorm-flushed then anyway (as long as you
> consistently denorm-flush everything). Exceptions to this are of course
> if you do non-float arithmetic on the result using bitcasts, but I
> can't imagine anyone really expecting the "correct" result there -
> especially since, even if you think picking either denorm isn't correct
> by glsl rules, the returned value can still legally be either a 0 or a
> denorm. Another exception would be shader export, if you don't
> denorm-flush there (or other means of making the value visible outside
> the shader, e.g. image store).
> I guess that if you leave bitcasts alone and only denorm-flush on
> stores, that wouldn't really cause problems. Albeit it could possibly
> cause similar issues if you tried to convert d3d11 assembly (because at
> least some store instructions are really untyped and need memcpy-like
> semantics).
>
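To illustrate that min/max reasoning above (a C sketch, not driver code; the ftz helper is a made-up model of a consistently flushing ALU): whichever denorm comes back, the next float operation makes the choice invisible, while a bitcast can still see the difference.

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <math.h>

/* model of a consistently denorm-flushing ALU input: */
static float ftz(float v)
{
   return fpclassify(v) == FP_SUBNORMAL ? 0.0f : v;
}

int main(void)
{
   float wrong = 3e-39f;   /* min/max picked the "wrong" denorm... */
   float right = 2e-39f;   /* ...instead of this one */

   /* any real float arithmetic flushes both picks to the same value: */
   printf("%g == %g\n", ftz(wrong) + 1.0f, ftz(right) + 1.0f);

   /* but a bitcast (floatBitsToInt-style) still sees the difference: */
   uint32_t bw, br;
   memcpy(&bw, &wrong, sizeof bw);
   memcpy(&br, &right, sizeof br);
   printf("0x%08x != 0x%08x\n", (unsigned)bw, (unsigned)br);
   return 0;
}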
Actually, reading the spec pedantically, I think this is pretty much
covered: glsl says "Any denormalized value input into a shader or
potentially generated by any operation in a shader can be flushed to 0."
That's pretty much the same as gles; albeit it doesn't say "at any
time", it also doesn't really imply any stricter behavior.
And min (max is the same) is
"Returns y if y < x; otherwise it returns x."
Therefore I can't see why you'd have to be consistent in denorm-flush
behavior. I'd say the test is a bit too eager in testing corner cases
and missed that it is in fact legal (and possibly intentionally so) to
return the "wrong" denorm.
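Concretely, a C sketch of that reading (illustration only; min_hw is a made-up model of a d3d10-style min that flushes the inputs for the comparison but returns a raw operand):

#include <stdio.h>
#include <math.h>

static float ftz(float v)
{
   return fpclassify(v) == FP_SUBNORMAL ? 0.0f : v;
}

/* min that flushes for the comparison but returns the unflushed
 * operand, as d3d10 permits: */
static float min_hw(float x, float y)
{
   return ftz(y) < ftz(x) ? y : x;
}

int main(void)
{
   /* both denorms flush to 0 for the compare, 0 < 0 is false, so x
    * (the *larger* denorm) comes back - legal, but "wrong" to a test
    * expecting 2e-39: */
   printf("min_hw = %g\n", min_hw(3e-39f, 2e-39f));   /* prints 3e-39 */
   return 0;
}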
Roland