[Bug 97287] GL45-CTS.vertex_attrib_binding.basic-inputL-case1 fails

bugzilla-daemon at freedesktop.org bugzilla-daemon at freedesktop.org
Thu Aug 11 08:31:24 UTC 2016


--- Comment #2 from Antia Puentes <apuentes at igalia.com> ---
Hi Ian,

the problem happens when we declare dvec4 variables in the shader _and_ set
their size to 2 by calling either glVertexAttribLFormat or
glVertexAttribLPointer. The same error happens if we declare dvec3 variables in
the shader and set their size to something smaller than 3. Note that only
dvec3 or dvec4 variables whose size is set to 1 or 2 are problematic, for the
reason I explain next (declaring a dvec4 and setting its size to 3 does not
cause problems).
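As a sanity check, the trigger condition can be sketched like this (a
hypothetical Python helper, not Mesa code):

```python
# Hypothetical sketch of the trigger condition: a dvec3/dvec4 shader
# declaration combined with an API-side size of 1 or 2 (as set through
# glVertexAttribLFormat / glVertexAttribLPointer).
SHADER_COMPONENTS = {"double": 1, "dvec2": 2, "dvec3": 3, "dvec4": 4}

def is_problematic(shader_type, api_size):
    """True when the two bookkeeping schemes disagree on the slot size."""
    shader_size = SHADER_COMPONENTS[shader_type]
    # Vertex emission packs sizes 1-2 into 128 bits and sizes 3-4 into
    # 256 bits, while register assignment uses the shader-declared size.
    emitted_bits = 128 if api_size <= 2 else 256
    assigned_bits = 128 if shader_size <= 2 else 256
    return emitted_bits != assigned_bits

print(is_problematic("dvec4", 2))   # True  -- the failing CTS case
print(is_problematic("dvec4", 3))   # False -- both sides use 256 bits
print(is_problematic("dvec3", 1))   # True
print(is_problematic("double", 1))  # False
```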

The current implementation of the ARB_vertex_attrib_64bit extension uses 256
bits to store size-4 and size-3 dvec attributes and 128 bits to store size-1
and size-2 dvec attributes during vertex emission; for that purpose, the size
set via glVertexAttribLFormat or glVertexAttribLPointer is used. However, when
assigning which registers of the payload contain the values of each attribute,
we use the information from the shader: we see a dvec4 variable, assume its
size is 4, and reserve more space for it than we should.

In the test, a dvec4 variable is declared in the shader and its size is later
set to 2 through the API, so the dvec4 variable occupies 128 bits in the
payload; but when assigning the registers we give it 256 bits, so it steals
space that belongs to the next attribute.
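To make the overlap concrete, here is a toy model (an assumed layout for
illustration, not the actual i965 code) of how the two schemes disagree:

```python
# Toy model of the mismatch: payload offsets advance by the emitted
# (API-driven) size, while register assignment advances by the
# shader-declared size. Attributes are (shader_components, api_size) pairs.
def layouts(attributes):
    emitted, assigned = [], []
    e_off = a_off = 0
    for shader_comps, api_size in attributes:
        emitted.append(e_off)
        assigned.append(a_off)
        e_off += 128 if api_size <= 2 else 256      # vertex emission
        a_off += 128 if shader_comps <= 2 else 256  # register assignment
    return emitted, assigned

# A dvec4 with API size 2, followed by a second attribute: its emitted
# data starts at bit 128, but register assignment expects it at bit 256,
# so the first dvec4 "steals" the second attribute's space.
emitted, assigned = layouts([(4, 2), (4, 4)])
print(emitted)   # [0, 128]
print(assigned)  # [0, 256]
```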

Possible fixes that came to my mind:

1. Take into account the size set by the glVertexAttribLFormat or
glVertexAttribLPointer API calls when doing the register assignment. Problem:
currently we only need the shader's information to know in which registers the
values of each attribute live. The user could call glVertexAttribLFormat
between different executions, so it does not look wise to follow this
approach.
2. Use 256 bits when emitting doubles and dvec2s. I have several patches
implementing the 256-bit option, available at
https://github.com/Igalia/mesa/commits/antia/cts-44-vertex-attrib-256bits;
they fix the test, but they cause regressions in Piglit. The reason for some
of the failures is that the urb_read_length becomes too big (bigger than the
limit) now that we upload doubles and dvec2s as 256 bits when emitting the
vertices. When assigning the VS URB setup, we hit the assertion
(urb_read_length <= 15). We need to think about how to solve this:
   a) One way to keep the URB read length from becoming too big is to limit
the number of allowed attributes. The OpenGL specification defines a maximum
number of attributes that a vertex shader may declare, and allows counting
dvec3 and dvec4 as two attributes for that purpose. We could consider doing
the same for doubles and dvec2s now that they occupy 256 bits; however, the
spec does not allow it, and we would also get linking failures for otherwise
perfectly correct shaders.
   b) Use 256 bits only when we know that the size-1 or size-2 variable was
declared as a dvec3 or dvec4 in the shader. If I am not wrong, the shader
information is not available when emitting the vertices.
   c) Other ideas?
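The urb_read_length pressure behind option 2 can be illustrated with a toy
model (assuming, for illustration only, that the read length is the payload
size in 256-bit units and that the limit is the asserted 15):

```python
# Toy model, NOT the real i965 accounting: urb_read_length approximated as
# total payload bits rounded up to 256-bit units, with the asserted
# limit of 15 from the VS URB setup.
URB_READ_LIMIT = 15

def urb_read_length(attr_sizes_bits):
    total = sum(attr_sizes_bits)
    return (total + 255) // 256  # round up to whole 256-bit units

# 16 double attributes: 128 bits each today, 256 bits each under option 2.
today   = urb_read_length([128] * 16)  # fits in 8 units
option2 = urb_read_length([256] * 16)  # needs 16 units, over the limit
print(today, option2, option2 > URB_READ_LIMIT)
```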


More information about the intel-3d-bugs mailing list