[Mesa-dev] About tolerance calculation on specific (builtin) functions

Ilia Mirkin imirkin at alum.mit.edu
Wed May 4 17:48:54 UTC 2016


On Wed, May 4, 2016 at 1:41 PM, Connor Abbott <cwabbott0 at gmail.com> wrote:
> On Wed, May 4, 2016 at 1:05 PM, Andres Gomez <agomez at igalia.com> wrote:
>> Hi,
>>
>> as part of the work done to "Add FP64 support to the i965 shader
>> backends" at:
>> https://bugs.freedesktop.org/show_bug.cgi?id=92760
>>
>> I've been working on adding piglit tests to check the features
>> introduced by this work.
>>
>> To that end, I've been checking and modifying the
>> builtin_functions*.py modules used by some generators. These modules
>> automatically calculate the tolerance used when distance-checking
>> the result of a function.
>>
>> As already stated in the module's good documentation, the tolerance
>> is computed following what is stated in the OpenGL specs:
>>
>>     From the OpenGL 1.4 spec (2.1.1 "Floating-Point Computation"):
>>
>>       "We require simply that numbers' floating-point parts contain
>>       enough bits ... so that individual results of floating-point
>>       operations are accurate to about 1 part in 10^5."
>>
>> Although the text is open to interpretation, and for specific
>> operations we take a slightly more flexible approach, the tolerance
>> is basically calculated as:
>>
>> tolerance = <expected_value> / 10⁵
>>
>> This makes sense, since the absolute precision of a floating-point
>> value decreases as the number gets bigger[1].
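>>
>> In code, a minimal sketch of that rule (hypothetical helper name;
>> the actual module is more involved):
>>
>>     def tolerance(expected):
>>         # "accurate to about 1 part in 10^5" (OpenGL 1.4, 2.1.1)
>>         return abs(expected) / 1e5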
>>
>> Following this approach, for a number on the order of 40*10⁵, the
>> tolerance used is ~40. While this should be OK for most functions,
>> it seems to me that such a high tolerance should not be used with
>> certain functions, if any tolerance should be used at all.
>>
>> For example, when testing the "sign()" function, it seems pretty
>> obvious that using a tolerance of 40 for a function that should
>> return either 1.0, 0.0 or -1.0 doesn't make much sense.
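>>
>> Concretely (an illustration, assuming the tolerance ends up derived
>> from a value on that order rather than from sign()'s -1/0/1 output):
>>
>>     tol = tolerance(40e5)  # ~40.0
>>     # Checking sign(4000000.0) against an expected 1.0 with a
>>     # tolerance of ~40 would accept virtually any result.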
>>
>> A similar case is the "trunc" function and probably others, like
>> "floor", "ceil", "abs", etc.
>>
>> My conclusion is that it should be safe to assume no tolerance for
>> these functions, and I could modify the algorithm used for them in
>> the Python module, but I wanted to get some feedback in case I'm not
>> taking into account something that advises against these
>> modifications.
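>>
>> A sketch of the kind of special-casing I mean (the function list and
>> names here are mine, not the module's actual code):
>>
>>     # Builtins that are defined to produce exact results.
>>     EXACT_BUILTINS = frozenset(['sign', 'trunc', 'floor', 'ceil', 'abs'])
>>
>>     def tolerance_for(name, expected):
>>         if name in EXACT_BUILTINS:
>>             return 0.0  # no tolerance: the result must match exactly
>>         return abs(expected) / 1e5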
>>
>> Opinions?
>
> Hi,
>
> If you look at the GLSL 4.40 spec, in section 4.7.1 ("Range and
> Precision") you'll find a table listing the precision of various
> operations. Your intuition about floor(), ceil(), etc. needing the
> exact result is correct, as you can see. For doubles, it says "The
> precision of double-precision operations is at least that of single
> precision." Now, it's up for interpretation whether that means that
> they must have the same *absolute* precision or the same ULP's (if the
> operation is not exact). For example inversesqrt() is listed at 2 ULP
> for single precision, which means that there must be 24 - 2 = 22 bits
> of precision. For doubles, are there still 22 bits of precision
> required, or is the requirement really that there still be 2 ULP's of
> precision in which case there are 53 - 2 = 51 bits of precision. I
> wrote the original lowering pass for inversesqrt() and friends
> assuming the latter was correct, since it seems like the most sane to
> me (or else doubles would have no advantage over floats for anything
> except addition and multiplication).
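>
> In numbers, the two readings (my own illustration, following the
> arithmetic above):
>
>     FLOAT_MANT_BITS = 24   # binary32 significand bits
>     DOUBLE_MANT_BITS = 53  # binary64 significand bits
>     ULPS = 2               # inversesqrt() bound for single precision
>
>     # Reading 1: doubles only need single precision's accuracy.
>     bits_required_1 = FLOAT_MANT_BITS - ULPS   # 24 - 2 = 22 bits
>
>     # Reading 2: the 2-ULP bound applies at double precision too.
>     bits_required_2 = DOUBLE_MANT_BITS - ULPS  # 53 - 2 = 51 bits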

All of this is added by ARB_shader_precision (part of GLSL 4.10). It
also punts on doubles; good to know that the latest specs have
maintained the status quo. One interpretation of it is "you can
implement doubles with float ops". Another is that the ULPs apply to
doubles as well.

Prior to that extension, the precision of everything was 1 part in 10^5.

  -ilia

