[Piglit] [PATCH 1/5] glsl-1.30: Test large local array

Glenn Kennard glenn.kennard at gmail.com
Tue Mar 7 19:56:32 UTC 2017


On Tue, 07 Mar 2017 10:34:48 +0100, Alejandro Piñeiro <apinheiro at igalia.com> wrote:

>
>
> On 07/03/17 00:11, Glenn Kennard wrote:
>> GPUs typically will need to spill/reload values
>> for a large enough locally declared array.
>> ---
>>  .../execution/fs-large-local-array.shader_test     | 43 ++++++++++++++++++++++
>>  1 file changed, 43 insertions(+)
>>  create mode 100644 tests/spec/glsl-1.30/execution/fs-large-local-array.shader_test
>>
>> diff --git a/tests/spec/glsl-1.30/execution/fs-large-local-array.shader_test b/tests/spec/glsl-1.30/execution/fs-large-local-array.shader_test
>> new file mode 100644
>> index 0000000..c050ba2
>> --- /dev/null
>> +++ b/tests/spec/glsl-1.30/execution/fs-large-local-array.shader_test
>> @@ -0,0 +1,43 @@
>> +# Test correct handling of locally declared arrays large enough that they
>> +# typically do not fit into first-level GPU memory such as the register file,
>> +# requiring stores to and loads from some device-specific scratch space.
>> +#
>> +# One hardware example is R600, where arrays larger than 124*vec4 cannot
>> +# fit into its GPR register file and have to be spilled to scratch memory.
>> +# As of 2017-03-06 that is the largest known register file of any GPU, so the
>> +# test uses an array just larger than that to guarantee spilling on any GPU.
>> +#
>> +# The GLSL 1.10 specification loosely implies a minimum limit of 64k elements
>> +# for an array, due to the 16-bit minimum precision for integer indexes. This
>> +# is more than R600 scratch supports and likely more than many other GPUs
>> +# support in practice, so this test restricts itself to a "large enough"
>> +# array size.
>> +#
>> +# Later specifications leave this undefined but allow 32-bit integers for indexing.
>> +[require]
>> +GLSL >= 1.30
>> +
>> +[vertex shader passthrough]
>> +
>> +[fragment shader]
>> +#version 130
>> +uniform uint i;
>> +void main()
>> +{
>> +	uint A[130];
>> +	A[20] = 0u;
>> +	A[i] = 37u;
>> +	gl_FragColor.rba = vec3(0.0, 0.0, 1.0);
>> +	gl_FragColor.g = float(A[20] == 37u);
>
> What's the idea behind this comparison? I guess that at some point during
> your development, when it was not supported, A[20] or A[i] was not
> properly assigned, or the comparison was not working properly. But it is
> somewhat non-intuitive, at least to me.

Hmm, how about:

  + # Test that storing to and reading back from a large local array returns the expected results
...

  +	uint A[130]; // Large enough to require spilling on all known GPUs
  +	A[20] = 0u; // First store a known value to the element we will be reading back
  +	// Store to a location that is unknown at compile time and may alias the element above, forcing the compiler to emit the previous store as-is
  +	// The non-constant indexing also keeps the compiler from splitting or lowering the array to scalar values
  +	A[i] = 37u; // 37 is an arbitrarily chosen value that is highly unlikely to be found at random in uninitialized scratch memory
  +	gl_FragColor.rba = vec3(0.0, 0.0, 1.0);
  +	gl_FragColor.g = float(A[20] == 37u); // Read back the value
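
For contrast (just an illustration, not something I intend to add to the test): if the second store used a compile-time constant index instead of the uniform, nothing would stop the compiler from promoting the whole array to registers or folding the readback away entirely, so the spill path would never be exercised:

  	uint A[130];
  	A[20] = 0u;
  	A[21] = 37u;                          // constant index, provably does not alias A[20]
  	gl_FragColor.g = float(A[20] == 37u); // could be constant-folded to 0.0

The uniform index is what forces both stores (and the spill/reload) to actually happen.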

>
>> +}
>> +

  + # First check that storing to a location next to A[20] doesn't overwrite its initial value

>> +[test]
>> +clear color 1.0 0.0 0.0 1.0
>> +clear
>> +uniform uint i 19
>> +draw rect -1 -1 2 2
>> +probe all rgba 0.0 0.0 0.0 1.0
>> +

  + # Then check that storing to A[20] results in 37 being read back

>> +clear
>> +uniform uint i 20
>> +draw rect -1 -1 2 2
>> +probe all rgba 0.0 1.0 0.0 1.0
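
If it would make the intent even clearer, a third subtest could also store to the far end of the array, along the lines of (untested sketch, not part of this patch):

  clear
  uniform uint i 129
  draw rect -1 -1 2 2
  probe all rgba 0.0 0.0 0.0 1.0

i.e. a write to the last element must not corrupt A[20] either. I'm not convinced it's necessary though, so I've kept the test to the two subtests above.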

