# [Mesa-dev] About tolerance calculation on specific (builtin) functions

Connor Abbott cwabbott0 at gmail.com
Wed May 4 17:41:29 UTC 2016

```On Wed, May 4, 2016 at 1:05 PM, Andres Gomez <agomez at igalia.com> wrote:
> Hi,
>
> as part of the work done to "Add FP64 support to the i965 shader
> backends" at:
> https://bugs.freedesktop.org/show_bug.cgi?id=92760
>
> I've been working to add piglit tests that would check the new features.
>
> Due to this, I've been checking and making modifications into the
> builtin_functions*.py modules used by some generators. These modules use
> automatic calculation of the tolerance when distance checking the
> result of a function.
>
> As already stated in the module's documentation, the tolerance is
> computed following what is stated in OpenGL's specs:
>
>     From the OpenGL 1.4 spec (2.1.1 "Floating-Point Computation"):
>
>       "We require simply that numbers' floating-point parts contain
>       enough bits ... so that individual results of floating-point
>       operations are accurate to about 1 part in 10^5."
>
> Although the text is open to interpretation, and for specific
> operations we take a slightly more flexible approach, the tolerance is
> basically calculated as:
>
> tolerance = <expected_value> / 10⁵
>
> This makes sense, since the absolute precision of a floating-point
> value decreases as its magnitude grows[1].
>
> Following this approach, for a number on the order of 40*10⁵, the
> tolerance used is ~40. While this should be OK for most functions, it
> seems to me that such a high tolerance should not be used with certain
> functions, if any tolerance should be used at all.
>
> For example, when testing the "sign()" function, it seems pretty
> obvious that using a tolerance of 40 for a function that should return
> either 1.0, 0.0 or -1.0 doesn't make much sense.
>
> A similar case is the "trunc()" function, and probably others like
> "floor()", "ceil()", "abs()", etc.
>
> My conclusion is that it should be safe to assume no tolerance for
> these functions, and I could modify the algorithm used for them in the
> Python module, but I wanted some feedback in case I'm not taking into
> account something that advises against making these modifications.
>
> Opinions?

Hi,

If you look at the GLSL 4.40 spec, in section 4.7.1 ("Range and
Precision") you'll find a table listing the precision of various
operations; for many of them the exact result is required, as you can
see. For doubles, it says "The precision of double-precision
operations is at least that of single precision." Now, it's up for
interpretation whether that means they must have the same *absolute*
precision or the same number of ULP's (if the operation is not exact).
For example, inversesqrt() is listed at 2 ULP for single precision,
which means that there must be 24 - 2 = 22 bits of precision. For
doubles, are there still 22 bits of precision required, or is the
requirement really that there still be 2 ULP's of precision, in which
case there are 53 - 2 = 51 bits of precision? I wrote the original
lowering pass for inversesqrt() and friends assuming the latter was
correct, since it seems like the most sane interpretation to me (or
else doubles would have no advantage over floats for anything other
than their extra range).

Also, you might want to modify the FP64 tests to test the additional
range that doubles provide. Last time I looked at them, they didn't do
that.

Connor

>
> [1] https://en.wikipedia.org/wiki/IEEE_floating_point#Basic_and_interchange_formats
> --
> Br,
>
> Andres
>
>
> _______________________________________________
> mesa-dev mailing list
> mesa-dev at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/mesa-dev
>
```
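For reference, the "1 part in 10⁵" tolerance rule described in the mail can be sketched in a few lines of Python. The helper names below are illustrative, not piglit's actual API from the builtin_functions*.py modules:

```python
# Sketch of the "1 part in 10^5" tolerance rule from the OpenGL 1.4
# wording quoted above. Helper names are hypothetical, not piglit's.

def tolerance(expected):
    """Absolute tolerance: the expected value divided by 10^5."""
    return abs(expected) / 1e5

def within_tolerance(actual, expected):
    return abs(actual - expected) <= tolerance(expected)

# For an expected value on the order of 40*10^5 the tolerance is ~40,
# so an absolute error as large as 30 still passes the check.
print(tolerance(40e5))                    # 40.0
print(within_tolerance(40e5 + 30, 40e5))  # True
```

This makes the problem Andres raises concrete: the tolerance scales with the magnitude of the expected value, which is appropriate for inexact operations but far too loose for functions with small, exact results.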
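The point about sign(), trunc(), floor(), ceil() and abs() can be illustrated outside GLSL: for any representable input these functions have an exactly representable result, so a zero-tolerance comparison is safe. A minimal sketch, emulating the GLSL semantics in Python (not piglit code):

```python
import math

def glsl_sign(x):
    # GLSL sign() returns -1.0, 0.0 or 1.0 -- always exactly representable.
    if x > 0.0:
        return 1.0
    if x < 0.0:
        return -1.0
    return 0.0

# Even for large inputs, where the "1 part in 10^5" rule would yield a
# huge tolerance, these results can be checked exactly.
for x in (-4.0e6, -1.5, 0.0, 2.5, 4.0e6):
    assert glsl_sign(x) in (-1.0, 0.0, 1.0)
    assert math.floor(x) <= x < math.floor(x) + 1.0
    assert math.trunc(x) == int(x)  # trunc drops the fraction exactly
```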
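The ULP arithmetic in Connor's reply can be spelled out. IEEE 754 binary32 carries a 24-bit significand (including the implicit bit) and binary64 a 53-bit one, so an N-ULP error bound leaves roughly significand_bits - N trustworthy bits. A sketch of the two readings for inversesqrt()'s 2-ULP bound:

```python
SINGLE_SIGNIFICAND_BITS = 24  # IEEE 754 binary32
DOUBLE_SIGNIFICAND_BITS = 53  # IEEE 754 binary64

def correct_bits(significand_bits, ulps):
    # An error of N ULP can disturb roughly the last N significand
    # bits, leaving significand_bits - N correct bits.
    return significand_bits - ulps

# Reading (a): same absolute precision as single: 24 - 2 = 22 bits.
print(correct_bits(SINGLE_SIGNIFICAND_BITS, 2))  # 22
# Reading (b): still 2 ULP, now at double precision: 53 - 2 = 51 bits.
print(correct_bits(DOUBLE_SIGNIFICAND_BITS, 2))  # 51
```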
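As for testing the additional range that doubles provide: any test value whose magnitude exceeds FLT_MAX (about 3.4*10³⁸, the largest finite binary32 value) exercises range that only doubles can represent. A hypothetical filter for choosing such inputs:

```python
FLT_MAX = 3.4028234663852886e+38  # largest finite IEEE 754 binary32 value

def exercises_double_range(x):
    # True when x would overflow a float, so only a double can hold it.
    return abs(x) > FLT_MAX

print(exercises_double_range(1e38))   # False: still fits in a float
print(exercises_double_range(1e300))  # True: double-only range
```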