[Mesa-dev] gallium scaled types

Jose Fonseca jfonseca at vmware.com
Mon Sep 12 11:20:38 PDT 2011



----- Original Message -----
> On 12.09.2011 18:41, Jose Fonseca wrote:
> >
> > ----- Original Message -----
> >> On Mon, Sep 12, 2011 at 5:48 PM, Roland Scheidegger
> >> <sroland at vmware.com> wrote:
> >>> Am 11.09.2011 19:17, schrieb Dave Airlie:
> >>>> On Sun, Sep 11, 2011 at 10:11 AM, Dave Airlie
> >>>> <airlied at gmail.com>
> >>>> wrote:
> >>>>> Hi guys,
> >>>>>
> >>>>> not really finding a great explanation in my 2-minute search of
> >>>>> what the USCALED and SSCALED types actually represent.
> >>>>>
> >>>>> On r600 hw at least we have a SCALED type, which seems to be an
> >>>>> integer in floating-point format, as well as an INT type which holds
> >>>>> natural integers.
> >>>> Talked on irc with calim and mareko, makes sense now, need to
> >>>> add
> >>>> UINT/SINT types
> >>>> will document things maybe a bit more on my way past.
> >>>>
> >>>> will also rename the stencil types.
> >>>
> >>> Hmm what's wrong with them?
> >>> USCALED is an unsigned int type which, in contrast to UNORM, isn't
> >>> normalized but "scaled" to the actual value (so the same as UINT
> >>> really).
> >>> Same for SSCALED, which is just signed instead of unsigned.
> >>> And the stencil types seem to fit already.
> >> No, they are not.
> >>
> >> SCALED is an int that is automatically converted to float when
> >> fetched
> >> by a shader.
> >>
> >> The SCALED types are OpenGL's non-normalized *float* vertex
> >> formats
> >> that are stored in memory as ints, e.g. glVertexAttribPointer(...
> >> GL_INT ...). There are no SCALED textures or renderbuffers
> >> supported
> >> by any hardware or exposed by any API known to me. Radeons seem to
> >> be
> >> able to do SCALED types according to the ISA docs, but in practice
> >> it
> >> only works with vertex formats and only with SCALED8 and SCALED16
> >> (AFAIK).
> >>
> >> Then there should be the standard INT types that are not converted
> >> to
> >> float upon shader reads. Those can be specified as vertices by
> >> glVertexAttribIPointer(... GL_INT ...) (note the *I*), or as
> >> integer
> >> textures. This is really missing in Gallium.
> > Pipe formats describe how the data should be interpreted.
> >
> > IMO, the type of register the data will be stored in after
> > interpretation is beyond the scope of pipe_format.  I think that is
> > purely in
> > the realm of shaders.
> >
> > For example, when doing texture sampling, whether
> > PIPE_R32G32B32A32_SSCALED should be read into integer registers
> > or float registers should be decided by the texture sample opcode,
> > not by the pipe_format.
> >
> > And in the case of vertex shader inputs, the desired register type
> > (float, int, double) should not be in pipe_vertex_element at all,
> > but probably in the shader input declaration, given that it ties
> > more closely to the shader itself: an integer vertex input will
> > usually be used with integer opcodes, and vice versa, independent
> > of whether the vertices are actually stored in the vertex buffer as
> > integers or not.
> 
> No. If you declare a shader input as float and you use
> VertexAttribIPointer, you do NOT get a float, even if the shader
> expects
> it.

The GL spec is vague, but NV_vertex_program4 indeed says 

   The commands

      void VertexAttribI[1234]{i,ui}EXT(uint index, T values);
      void VertexAttribI[1234]{i,ui}vEXT(uint index, T values);
      void VertexAttribI4{b,s,ub,us}vEXT(uint index, T values);

    specify fixed-point coordinates that are not converted to floating-point
    values, but instead are represented as signed or unsigned integer values.
    Vertex programs that use integer instructions may read these attributes
    using integer data types.  A vertex program that attempts to read a vertex
    attribute as a float will get undefined results if the attribute was
    specified as an integer, and vice versa.

(Note that there isn't any guarantee of what you get or don't get -- it is undefined.)

> The vertex format describes a property of the vertex fetch stage
> (input
> assembler) and determines how data is brought from a vertex buffer
> into
> vertex attribute memory / cache; what the shader does with the data
> is
> completely unrelated.


So basically you're arguing that it should be part of pipe_vertex_element?

I don't feel strongly either way.


What I'd really like to avoid is another pipe_format suffix with obscure semantics (e.g., PIPE_R32G32B32A32_SSCALED_INT or some weird thing like that).  I'd prefer the register type (DOUBLE, FLOAT, INT) to be set separately.


Jose

