[Mesa-dev] gallium scaled types
Jose Fonseca
jfonseca at vmware.com
Mon Sep 12 11:51:13 PDT 2011
----- Original Message -----
> On Mon, Sep 12, 2011 at 6:41 PM, Jose Fonseca
> <jfonseca at vmware.com> wrote:
> >
> >
> > ----- Original Message -----
> >> On Mon, Sep 12, 2011 at 5:48 PM, Roland Scheidegger
> >> <sroland at vmware.com> wrote:
> >> > On 11.09.2011 19:17, Dave Airlie wrote:
> >> >> On Sun, Sep 11, 2011 at 10:11 AM, Dave Airlie
> >> >> <airlied at gmail.com> wrote:
> >> >>> Hi guys,
> >> >>>
> >> >>> not really finding a great explanation in my 2-minute search
> >> >>> of what the USCALED and SSCALED types actually represent.
> >> >>>
> >> >>> On r600 hw at least we have a SCALED type, which seems to be
> >> >>> an integer stored in floating-point format, as well as an INT
> >> >>> type which holds natural integers.
> >> >>
> >> >> Talked on irc with calim and mareko, makes sense now; need to
> >> >> add UINT/SINT types. Will maybe document things a bit more on
> >> >> my way past.
> >> >>
> >> >> will also rename the stencil types.
> >> >
> >> >
> >> > Hmm, what's wrong with them?
> >> > USCALED is an unsigned int type which, in contrast to UNORM,
> >> > isn't normalized but "scaled" to the actual value (so the same
> >> > as UINT really). Same for SSCALED, which is just signed instead
> >> > of unsigned.
> >> > And the stencil types seem to fit already.
> >>
> >> No, they are not.
> >>
> >> SCALED is an int that is automatically converted to float when
> >> fetched by a shader.
> >>
> >> The SCALED types are OpenGL's non-normalized *float* vertex formats
> >> that are stored in memory as ints, e.g. glVertexAttribPointer(...
> >> GL_INT ...). There are no SCALED textures or renderbuffers supported
> >> by any hardware or exposed by any API known to me. Radeons seem to
> >> be able to do SCALED types according to the ISA docs, but in
> >> practice it only works with vertex formats and only with SCALED8
> >> and SCALED16 (AFAIK).
> >>
> >> Then there should be the standard INT types that are not converted
> >> to float upon shader reads. Those can be specified as vertices by
> >> glVertexAttribIPointer(... GL_INT ...) (note the *I*), or as
> >> integer textures. This is really missing in Gallium.
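To put the cases discussed above (UNORM, USCALED, UINT) side by side:
for an unsigned byte holding 200, each interpretation would hand a
shader roughly the following (an illustrative C sketch, not driver
code):

   #include <stdint.h>

   static void interpretations(void)
   {
      uint8_t raw = 200;              /* value in the vertex buffer    */
      float unorm = raw / 255.0f;     /* UNORM:   normalized, ~0.784   */
      float scaled = (float) raw;     /* USCALED: int-to-float, 200.0f */
      uint32_t pure = raw;            /* UINT:    left as an integer   */
      (void) unorm; (void) scaled; (void) pure;
   }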
> >
> > Pipe formats describe how the data should be interpreted.
> >
> > IMO, the type of register the data will be stored in after
> > interpretation is beyond the scope of pipe_format. I think that is
> > purely in the realm of shaders.
> >
> > For example, when doing texture sampling, whether
> > PIPE_FORMAT_R32G32B32A32_SSCALED should be read into integer
> > registers or float registers should be decided by the texture
> > sample opcode, not by the pipe_format.
> >
> > And in the case of vertex shader inputs, the desired register type
> > (float, int, double) should not be in pipe_vertex_element at all,
> > but probably in the shader input declaration, given that it ties
> > more closely to the shader itself: an integer vertex input will
> > usually be used with integer opcodes, and vice versa, independent
> > of whether the vertices are actually stored in the vertex buffer
> > as integers or not.
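As a sketch of the split meant there (pipe_vertex_element is roughly
as it is today; the DCL syntax below is hypothetical, not existing
TGSI):

   /* The vertex element describes only how the data sits in memory: */
   struct pipe_vertex_element {
      unsigned src_offset;
      unsigned instance_divisor;
      unsigned vertex_buffer_index;
      enum pipe_format src_format;   /* memory layout only */
   };

   /*
    * The desired register type would then be declared with the
    * shader input instead, e.g. something like:
    *
    *    DCL IN[0], TYPE_SINT
    */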
>
> That's not exactly what would be best for drivers. There actually are
> 3 basic types of vertex formats on r600: NORM, INT, and SCALED. See
> e.g. the SQ_MICRO:SQ_VTX_WORD1 opcode:
>
> NUM_FORMAT_ALL 29:28
> Format of returning data (N is the number of bits derived from
> DATA_FORMAT and gamma) (ignored if USE_CONST_FIELDS = 1).
>
> POSSIBLE VALUES:
> 00 - SQ_NUM_FORMAT_NORM: repeating fraction number (0.N) with
>      range [0, 1] if unsigned, or [-1, 1] if signed.
> 01 - SQ_NUM_FORMAT_INT: integer number (N.0) with range [0, 2^N]
>      if unsigned, or [-2^M, 2^M] if signed (M = N - 1).
> 02 - SQ_NUM_FORMAT_SCALED: integer number stored as an S23E8
>      floating-point representation (1 == 0x3f800000).
>
> So it would be useful to have *NORM, *INT, and *SCALED formats in
> Gallium to make the translation straightforward. It would also be
> best to have the same info in shaders. OpenGL has that info in the
> interface (the *IPointer functions) and in the shaders too (the
> ivecN types).
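For concreteness, the straightforward driver-side translation being
asked for would look roughly like this (a sketch, not actual r600g
code, and the pure_integer flag is the hypothetical missing piece):

   #include "util/u_format.h"

   /* Map a format description to the r600 NUM_FORMAT values quoted
    * above (assumes an integer format; float formats are a separate
    * case). */
   static unsigned
   num_format_from_desc(const struct util_format_description *desc)
   {
      if (desc->channel[0].normalized)
         return 0;   /* SQ_NUM_FORMAT_NORM   */
      if (desc->channel[0].pure_integer)
         return 1;   /* SQ_NUM_FORMAT_INT    */
      return 2;      /* SQ_NUM_FORMAT_SCALED */
   }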
This is what the OpenGL spec says about VertexAttribIPointer:

   Data for an array specified by VertexAttribPointer will be
   converted to floating-point by normalizing if normalized is TRUE,
   and converted directly to floating-point otherwise. Data for an
   array specified by VertexAttribIPointer will always be left as
   integer values; such data are referred to as pure integers.
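In code, the distinction is just which entry point the application
calls; all three calls below read GL_INT data from memory (attribute
indices are arbitrary):

   /* GL 3.0+, during vertex array setup: */
   glVertexAttribPointer(0, 4, GL_INT, GL_FALSE, 0, NULL);
      /* converted directly to float: the SCALED case   */
   glVertexAttribPointer(1, 4, GL_INT, GL_TRUE, 0, NULL);
      /* normalized to [-1, 1]: the NORM case           */
   glVertexAttribIPointer(2, 4, GL_INT, 0, NULL);
      /* left as pure integers: pairs with ivec4 inputs */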
Formats describe how to interpret the data in memory, and normalization is an important part of that interpretation. But this "integer" vs "pure integer" distinction merely describes the recipient of that interpretation, not the source interpretation itself. I think it exists merely to ensure there is no precision loss, although a recipient type that can hold the data without loss could be determined without any assistance.
AFAICS, saying that the integers should be kept as integers only makes sense in the context of shaders, and nothing else. A blit from an integer texture to a pure integer texture is nothing but a memcpy.
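That is, for two integer formats with the same layout the blit reduces
to a plain row-by-row copy, sketched here (names are just for
illustration):

   #include <stdint.h>
   #include <string.h>

   static void
   blit_rows(uint8_t *dst, unsigned dst_stride,
             const uint8_t *src, unsigned src_stride,
             unsigned row_bytes, unsigned height)
   {
      unsigned y;
      for (y = 0; y < height; y++)
         memcpy(dst + y * dst_stride, src + y * src_stride, row_bytes);
   }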
Gallium has formats named PIPE_xxx_xSCALED for historical reasons, but PIPE_xxx_xINT would be a better name: they describe integers in memory.
Note that D3D10 has no concept of PIPE_xxx_xSCALED vs PIPE_xxx_xINT formats either [1][2].
> Likewise for textures. We have something similar for rectangle
> textures. There is the texture type PIPE_TEXTURE_RECT in the
> interface and the TGSI_TEXTURE_RECT fetch instruction type. I think
> the integer textures should follow suit to make hardware driver
> implementations sane.
I really don't follow the analogy between texture rectangles and pixel formats...
Jose
[1] http://msdn.microsoft.com/en-us/library/bb173059.aspx
[2] http://msdn.microsoft.com/en-us/library/ff476180(v=VS.85).aspx