[Mesa-dev] gallium scaled types

Jose Fonseca jfonseca at vmware.com
Wed Sep 14 02:30:19 PDT 2011

----- Original Message -----
> On 14.09.2011 09:36, Jose Fonseca wrote:
> > ---- Original Message -----
> >>> On the contrary, I think that putting norm and scaled/int in the
> >>> same sack is comparing apples and oranges...  Normalization, like
> >>> fixed-point integers, affects the interpretation of the 32bit
> >>> integer in memory, namely the scale factor it should be
> >>> multiplied by. Whereas the only difference between _SSCALED and
> >>> _SINT is the final data type (int vs float) -- the value is
> >>> exactly the same (modulo rounding errors).
> >>>
> >>> The pure vs non-pure integer distinction is really a "policy"
> >>> flag, which means: please do not implicitly convert this to a
> >>> float at any point of the vertex pipeline. And I believe that
> >>> policy flags should be outside enum pipe_format.
> >> While I'm tending to agree with you Jose, the other thing that we
> >> haven't discussed yet is that we need to add new pack/unpack
> >> interfaces to u_format for all these types anyway, to get an
> >> integer-clean path (no float converts in the pipeline), increasing
> >> the size of the u_format_table anyway. With separate types we could
> >> most likely overload the float pack/unpack ones.
> > I'm not sure where u_format's code is used in hardware pipe drivers
> > nowadays, but for software rendering in general, and state tracker
> > software fallbacks of blits in particular, we could never just
> > overload the float unpack/pack entry-points, as they would corrupt
> > 32bit integers outside the +/- 1 << 24 range.  I think we'd need
> > pack/unpack functions that accept/return colors as 32bit integers
> > instead of floats, or another type big enough.
> The pack/unpack/fetch functions of (pure) integer formats wouldn't
> deal with floats at all; all they'd do is sign/zero extension (so the
> float array argument would become void and the function would do the
> "right thing" automatically).

But I don't follow this bit. 

Are you saying the integer versions should be polymorphic with respect to the signedness of the integers?

Take the current entry-point:

   (*unpack_rgba_float)(float *dst, unsigned dst_stride,
                        const uint8_t *src, unsigned src_stride,
                        unsigned width, unsigned height);

My idea was to add two new integer versions:

   (*unpack_rgba_32sint)(int32_t *dst, unsigned dst_stride,
                         const uint8_t *src, unsigned src_stride,
                         unsigned width, unsigned height);

   (*unpack_rgba_32uint)(uint32_t *dst, unsigned dst_stride,
                         const uint8_t *src, unsigned src_stride,
                         unsigned width, unsigned height);

Another possibility (which IIUC is what you're suggesting) is:

   (*unpack_rgba_32int)(void *dst, unsigned dst_stride,
                        const uint8_t *src, unsigned src_stride,
                        unsigned width, unsigned height);

where dst points to either uint32_t or int32_t data, depending on the format.

It is indeed an appealing proposition.
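For illustration, such a pure-integer unpack for a format like R8G8B8A8_SINT would boil down to plain sign extension into 32-bit intermediates, never touching floats. A minimal sketch (function name is illustrative, not an actual u_format entry-point; strides omitted for brevity):

```c
#include <stdint.h>

/* Hypothetical pure-integer unpack for R8G8B8A8_SINT: each 8-bit
 * component is sign-extended to a 32-bit integer intermediate. */
static void
unpack_r8g8b8a8_sint(int32_t *dst, const uint8_t *src, unsigned width)
{
   for (unsigned x = 0; x < width; x++) {
      /* Casting through int8_t sign-extends each component. */
      dst[x * 4 + 0] = (int8_t)src[x * 4 + 0];
      dst[x * 4 + 1] = (int8_t)src[x * 4 + 1];
      dst[x * 4 + 2] = (int8_t)src[x * 4 + 2];
      dst[x * 4 + 3] = (int8_t)src[x * 4 + 3];
   }
}
```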

But are we sure that the uint32 <-> sint32 conversion never needs to be handled?

For example, when converting from sint32 to uint32, somebody needs to clamp the negative integers to zero (per http://msdn.microsoft.com/en-us/library/dd607323.aspx#integer_conversion ). So if the unpack function only does sign extension and no clamping, the caller will need to check the signedness of the formats, go over all pixels, and clamp them.  IMO, it would be cleaner and more efficient to have uint32/sint32 entry-points; then all the caller needs to do is choose the appropriate entry-point for what it's doing.  But if uint32 <-> sint32 conversion never happens in practice, then a single entry-point would indeed do just fine.
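The per-component clamping in question is trivial but has to live somewhere. A minimal sketch of the D3D10-style rules (helper names are mine, not u_format's):

```c
#include <stdint.h>

/* Hypothetical helpers showing the clamping that D3D10-style integer
 * format conversion requires. */

/* sint32 -> uint32: negative values clamp to zero. */
static inline uint32_t
clamp_sint32_to_uint32(int32_t v)
{
   return v < 0 ? 0u : (uint32_t)v;
}

/* uint32 -> sint32: values above INT32_MAX clamp to INT32_MAX. */
static inline int32_t
clamp_uint32_to_sint32(uint32_t v)
{
   return v > (uint32_t)INT32_MAX ? INT32_MAX : (int32_t)v;
}
```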

> OpenGL functions like ReadPixels don't allow RGBA_INTEGER format
> together with float types, or blits between SINT-FLOAT, SINT-UINT,
> etc.
> anyway ...
> > So either we add (un)pack_rgba_uint32 and (un)pack_rgba_sint32
> > entry-points for those integer formats (and it only needs to be
> > added for the int formats), or we simply add a new xxx_double
> > entry-point if we think that they will never be used in a critical
> > path.
> Converting everything to doubles would be kind of overkill (and
> potentially slow), and the 6 uint/sint function pointers would
> probably remain NULL (or useless) for more than half of the formats.

Agreed.  I mentioned doubles only for the case where they are not used in a critical path, i.e., slow is OK, as there is not much merit in optimizing corner cases.  On the other hand, doubles are not a panacea either: they wouldn't handle 64bit integers.
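Concretely, a double's 53-bit mantissa is the problem: 64-bit integers beyond 2^53 do not survive a round-trip through double intermediates. A quick illustration (hypothetical helper, not proposed API):

```c
#include <stdint.h>

/* Returns non-zero if the 64-bit integer value survives conversion
 * to double and back unchanged.  Fails for magnitudes above 2^53,
 * where doubles can no longer represent every integer exactly. */
static int
survives_double_roundtrip(int64_t v)
{
   return (int64_t)(double)v == v;
}
```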

> > It's similar to what's currently done for (un)pack_z_32unorm and
> > (un)pack_s_8uscaled -- 32unorm can lose precision when converted
> > to float (no biggie for colors but inadmissible for depth values),
> > and 8uscaled is much smaller/faster to use than float, so u_format
> > has these special entry-points, but they are defined only for
> > depthstencil formats.
> >
> >
> > I think that software renderers' texture sampling code, too, will
> > need to be rewritten substantially, to allow sampling of integers
> > using integer intermediates.  The draw module's interpolation code,
> > too, will need to use double intermediates to interpolate integer
> > attributes when clipping, etc.  All this
> Integer attributes are not interpolated. They must be declared as
> flat.

Great. Draw module should just work then.

What about texture sampling: are filter modes other than NEAREST allowed? That would simplify things a lot too.

