[Mesa-dev] gallium scaled types

Ian Romanick idr at freedesktop.org
Tue Sep 13 09:18:20 PDT 2011


On 09/12/2011 11:41 AM, Jose Fonseca wrote:
> 
> 
> ----- Original Message -----
>> On Mon, Sep 12, 2011 at 5:48 PM, Roland Scheidegger 
>> <sroland at vmware.com> wrote:
>>>> On 11.09.2011 19:17, Dave Airlie wrote:
>>>> On Sun, Sep 11, 2011 at 10:11 AM, Dave Airlie
>>>> <airlied at gmail.com> wrote:
>>>>> Hi guys,
>>>>> 
>>>>> not really finding a great explanation in my 2-minute
>>>>> search of what the USCALED and SSCALED types are
>>>>> supposed to represent.
>>>>> 
>>>>> On r600 hw at least we have a SCALED type, which seems to
>>>>> be an integer presented in floating-point format, as well
>>>>> as an INT type which holds natural integers.
>>>> 
>>>> Talked on IRC with calim and mareko; it makes sense now. I
>>>> need to add UINT/SINT types, and I'll document things a bit
>>>> more on my way past.
>>>> 
>>>> I will also rename the stencil types.
>>> 
>>> 
>>> Hmm, what's wrong with them? USCALED is an unsigned int type
>>> which, in contrast to UNORM, isn't normalized but "scaled" to
>>> the actual value (so the same as UINT, really). The same goes
>>> for SSCALED, which is just signed instead of unsigned. And the
>>> stencil types seem to fit already.
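
As an aside, the distinction is easiest to see as the component
conversion each family implies.  A minimal sketch in C of what a
fetch path conceptually does for an 8-bit component -- illustrative
only, the helper names are made up, this is not actual Gallium code:

  #include <stdint.h>

  /* UNORM: normalize into [0.0, 1.0] */
  static float fetch_unorm8(uint8_t v)   { return v / 255.0f; } /* 255 -> 1.0   */

  /* USCALED: convert to float, keeping the magnitude */
  static float fetch_uscaled8(uint8_t v) { return (float)v; }   /* 255 -> 255.0 */

  /* UINT: no conversion at all; the value stays an integer */
  static uint32_t fetch_uint8(uint8_t v) { return v; }          /* 255 -> 255   */

Note that the USCALED result is a float; that float-vs-integer
destination is exactly the difference from UINT discussed below.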
>> 
>> No, they are not.
>> 
>> SCALED is an int that is automatically converted to float when 
>> fetched by a shader.
>> 
>> The SCALED types are OpenGL's non-normalized *float* vertex
>> formats that are stored in memory as ints, e.g.
>> glVertexAttribPointer(... GL_INT ...). There are no SCALED
>> textures or renderbuffers supported by any hardware or exposed by
>> any API known to me. Radeons seem to be able to do SCALED types
>> according to the ISA docs, but in practice they only work with
>> vertex formats, and only as SCALED8 and SCALED16 (AFAIK).
>> 
>> Then there should be the standard INT types that are not
>> converted to float upon shader reads. Those can be specified as
>> vertices by glVertexAttribIPointer(... GL_INT ...) (note the
>> *I*), or as integer textures. This is really missing in Gallium.
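
To make the API-side difference concrete, here is a minimal sketch
of the two paths Marek describes.  This is a fragment: it assumes a
GL 3.0 context, the usual GL headers, and a bound VBO holding GLint
data; the attribute indices are arbitrary.

  /* Shader input declared as vec4: the ints are converted to
   * floats on fetch (the SCALED case).  normalized == GL_FALSE is
   * what distinguishes SCALED from NORM conversion. */
  glVertexAttribPointer(0, 4, GL_INT, GL_FALSE, 0, (const void *)0);

  /* Shader input declared as ivec4: note the I in the entry point.
   * The values are fetched as integers, unconverted -- the case
   * Gallium currently has no format for. */
  glVertexAttribIPointer(1, 4, GL_INT, 0, (const void *)0);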
> 
> Pipe formats describe how the data should be interpreted.
> 
> IMO, the type of register the data will be stored in after
> interpretation is beyond the scope of pipe_format.  I think that
> is purely in the realm of shaders.
> 
> For example, when doing texture sampling, whether
> PIPE_R32G32B32A32_SSCALED should be read into integer registers or
> float registers should be decided by the texture sample opcode,
> not by the pipe_format.
> 
> And in the case of vertex shader inputs, the desired register type
> (float, int, double) should not be in pipe_vertex_element at all,
> but probably in the shader input declaration, given that it ties
> more closely to the shader itself: an integer vertex input will
> usually be used with integer opcodes, and vice versa, independent
> of whether the vertices are actually stored in the vertex buffer
> as integers.
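
A hypothetical C sketch of the separation Jose describes -- all of
the names below are made up for illustration.  The format decides
only how memory decodes; a separate choice (the opcode or the input
declaration) decides which register file the result lands in:

  #include <stdint.h>

  /* Step 1: decode per the pipe_format.  For R32G32B32A32_SSCALED
   * the components are plain 32-bit ints in memory. */
  static void decode_sscaled32x4(const int32_t src[4], int32_t val[4])
  {
      for (int i = 0; i < 4; i++)
          val[i] = src[i];
  }

  /* Step 2a: one sample/fetch opcode would convert the decoded
   * values on their way into float registers... */
  static void result_to_float_regs(const int32_t val[4], float dst[4])
  {
      for (int i = 0; i < 4; i++)
          dst[i] = (float)val[i];
  }

  /* Step 2b: ...while another would keep the bits untouched in
   * integer registers. */
  static void result_to_int_regs(const int32_t val[4], int32_t dst[4])
  {
      for (int i = 0; i < 4; i++)
          dst[i] = val[i];
  }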

The problem is vertex shader attributes.  The vertex shader may
declare the attribute as a vec4, but the application may pass the data
using glVertexAttribPointer(... GL_INT ...) (without the I).  At least
some hardware needs to be configured differently for this case than
the ivec4 / glVertexAttribIPointer case.
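
A sketch of why that is, as hypothetical driver-side state (the
names are invented): the same vertex format has to be fetched two
different ways, so the format alone underdetermines the hardware
setup.

  /* Hypothetical illustration -- not real Gallium/driver code. */
  enum fetch_mode {
      FETCH_INT_TO_FLOAT, /* vec4 input + glVertexAttribPointer(GL_INT)   */
      FETCH_INT_AS_INT    /* ivec4 input + glVertexAttribIPointer(GL_INT) */
  };

  struct vertex_fetch_key {
      int buffer_format;    /* stand-in for the pipe_format; identical
                             * in both cases above */
      enum fetch_mode mode; /* the extra bit the format can't carry */
  };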