[Piglit] [PATCH] glsl-es-3.00: Generate tests for builtin packing functions (v2)

Chad Versace chad.versace at linux.intel.com
Wed Jan 23 18:01:52 PST 2013


On 01/23/2013 08:52 AM, Paul Berry wrote:
> On 21 January 2013 15:24, Chad Versace <chad.versace at linux.intel.com> wrote:
> 
>> +def make_inputs_for_pack_half_2x16():
>> +    # The domain of packHalf2x16 is ([-inf, +inf] + {NaN})^2. The function
>> +    # does not clamp its input.
>> +    #
>> +    # We test both -0.0 and +0.0 in order to stress the implementation's
>> +    # handling of zero.
>> +
>> +    subnormal_min = 2.0**(-14) * (1.0 / 2.0**10)
>> +    subnormal_max = 2.0**(-14) * (1023.0 / 2.0**10)
>> +    normal_min    = 2.0**(-14) * (1.0 + 0.0 / 2.0**10)
>> +    normal_max    = 2.0**15 * (1.0 + 1023.0 / 2.0**10)
>> +    min_step      = 2.0**(-24)
>> +    max_step      = 2.0**5
>> +
>> +    pos = tuple(float32(x) for x in (
>> +        # Inputs that result in 0.0 .
>> +        #
>> +        0.0,
>> +        0.0 + 0.25 * min_step,
>> +
>> +        # A thorny input...
>> +        #
>> +        # if round_to_even:
>> +        #   f16 := 0.0
>> +        # elif round_to_nearest:
>> +        #   f16 := subnormal_min
>> +        #
>> +        0.0 + 0.50 * min_step,
>> +
>> +        # Inputs that result in a subnormal float16.
>> +        #
>> +        0.0 + 0.75 * min_step,
>> +        subnormal_min + 0.00 * min_step,
>> +        subnormal_min + 0.25 * min_step,
>> +        subnormal_min + 0.50 * min_step,
>> +        subnormal_min + 0.75 * min_step,
>> +        subnormal_min + 1.00 * min_step,
>> +        subnormal_min + 1.25 * min_step,
>> +        subnormal_min + 1.50 * min_step,
>> +        subnormal_min + 1.75 * min_step,
>> +        subnormal_min + 2.00 * min_step,
>> +
>> +        normal_min - 2.00 * min_step,
>> +        normal_min - 1.75 * min_step,
>> +        normal_min - 1.50 * min_step,
>> +        normal_min - 1.25 * min_step,
>> +        normal_min - 1.00 * min_step,
>> +        normal_min - 0.75 * min_step,
>> +
>> +        # Inputs that result in a normal float16.
>> +        #
>> +        normal_min - 0.50 * min_step,
>> +        normal_min - 0.25 * min_step,
>> +        normal_min + 0.00 * min_step,
>> +        normal_min + 0.25 * min_step,
>> +        normal_min + 0.50 * min_step,
>> +        normal_min + 0.75 * min_step,
>> +        normal_min + 1.00 * min_step,
>> +        normal_min + 1.25 * min_step,
>> +        normal_min + 1.50 * min_step,
>> +        normal_min + 1.75 * min_step,
>> +        normal_min + 2.00 * min_step,
>> +
>> +        normal_max - 2.00 * max_step,
>> +        normal_max - 1.75 * max_step,
>> +        normal_max - 1.50 * max_step,
>> +        normal_max - 1.25 * max_step,
>> +        normal_max - 1.00 * max_step,
>> +        normal_max - 0.75 * max_step,
>> +        normal_max - 0.50 * max_step,
>> +        normal_max - 0.25 * max_step,
>> +        normal_max + 0.00 * max_step,
>> +        normal_max + 0.25 * max_step,
>> +
>> +        # Inputs that result in infinity.
>> +        #
>> +        normal_max + 0.50 * max_step,
>> +        normal_max + 0.75 * max_step,
>> +        normal_max + 1.00 * max_step,
>> +
>> +        "+inf",
>> +    ))
>>
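
For the record, here is a small sketch (not part of the patch) of the "thorny
input" comment above: 0.50 * min_step sits exactly halfway between 0.0 and
subnormal_min, so round-to-even picks 0.0, while a round-to-nearest mode that
breaks ties upward picks subnormal_min. numpy's float16 cast rounds to
nearest-even, so it can serve as a reference here:

    import numpy as np

    min_step      = 2.0**(-24)
    subnormal_min = 2.0**(-14) * (1.0 / 2.0**10)   # == min_step

    # Exactly halfway between 0.0 and subnormal_min: round-to-even -> 0.0.
    assert np.float16(0.50 * min_step) == np.float16(0.0)
    # Past the midpoint: rounds up to the smallest subnormal.
    assert np.float16(0.75 * min_step) == np.float16(subnormal_min)
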
> 
> Now that I've had a look at the Mesa implementation, I'd like to suggest
> adding a few values to this list:
> 
> A. 2.0 * normal_min + 0.75 * min_step
> B. 2.5 * normal_min
> C. 0.5
> D. 1.0
> E. 1.5
> F. normal_max + 2.0 * max_step
> 
> Rationale for A: The smallest range of normal float16s (e=1) actually has
> the same precision as subnormals (e=0).  Therefore, if we have a bug that
> causes inputs in the range [normal_min, 2*normal_min] to get misclassified
> as subnormals, there will actually be no bug--your Mesa code will do the
> right thing.  However, if we have a bug that causes inputs even higher than
> 2*normal_min to get misclassified as subnormals, then the first set of
> values that will go wrong will be those in the range (2.0*normal_min +
> 0.5*min_step, 2*normal_min + 1.0*min_step).  These will get incorrectly
> converted to e=2, m=1, when the correct conversion is to e=2, m=0.  So it
> makes sense to drop a test point exactly in the center of this range, at
> 2.0*normal_min + 0.75*min_step.
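
I double-checked this value against numpy's float16 cast (round-to-nearest-even):
treated correctly as a normal, 2.0*normal_min + 0.75*min_step lands exactly on
2*normal_min, i.e. e=2, m=0. A throwaway sketch, not part of the patch:

    import numpy as np

    normal_min = 2.0**(-14)
    min_step   = 2.0**(-24)

    h = np.float16(2.0 * normal_min + 0.75 * min_step)

    # Correct conversion is e=2, m=0, i.e. exactly 2*normal_min ...
    assert h == np.float16(2.0 * normal_min)
    # ... whose float16 bit pattern is sign=0, e=2, m=0 -> 0x0800.
    assert h.view(np.uint16) == 0x0800
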
> 
> Rationale for F: If we have a bug that causes inputs beyond normal_max to
> get misclassified as normals, normal_max + 1.00 * max_step will actually
> get correctly converted to infinity, since it will get represented as e=31,
> m=0.5, which rounds down (thanks to round-to-even behaviour) to e=31, m=0.
> However, values in the range (normal_max + 1.0*max_step, normal_max +
> 3.0*max_step) will get incorrectly converted to e=31, m=1, which is NaN.
> So it makes sense to drop a test point exactly in the center of this range,
> at normal_max + 2.0*max_step.
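
Same kind of sanity check for this range, again using numpy's float16 as the
round-to-nearest-even reference (just a sketch, not part of the patch):

    import numpy as np

    normal_max = 2.0**15 * (1.0 + 1023.0 / 2.0**10)   # 65504.0
    max_step   = 2.0**5                                # 32.0

    # Silence numpy's possible overflow-in-cast warning; both inputs
    # convert to +inf either way.
    with np.errstate(over='ignore'):
        # normal_max + 1.0*max_step converts to +inf even under the buggy
        # path described above, so it cannot expose the bug.
        assert np.float16(normal_max + 1.0 * max_step) == np.inf
        # The proposed extra input: the correct result is still +inf; per
        # the analysis above, a misclassification bug would yield NaN.
        assert np.float16(normal_max + 2.0 * max_step) == np.inf
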
> 
> Rationale for B-E: It would be nice to test a few values whose mantissas
> and exponents aren't near the corners, just to make sure we haven't missed
> something stupid :)


I added those inputs to the patch and retested on SNB and IVB. Mesa still passes.

Thanks for scrutinizing these corner cases. My biggest fear with the Mesa series
was that it was riddled with corner-case bugs: the tests and the implementation
don't share code, but they do share the same concepts and the same author. Now
that everything has been picked over, I'm confident that things are fairly
bug-free (except for the problematic gen7 VS).


