[Mesa-dev] [PATCH 4/8] mesa: fill in signed cases and RGBA16 in _mesa_format_matches_format_and_type
Michel Dänzer
michel at daenzer.net
Wed Jan 30 08:54:31 PST 2013
On Wed, 2013-01-30 at 16:01 +0100, Marek Olšák wrote:
> On Wed, Jan 30, 2013 at 11:58 AM, Michel Dänzer <michel at daenzer.net> wrote:
> > On Tue, 2013-01-29 at 21:22 +0100, Marek Olšák wrote:
> >> On Tue, Jan 29, 2013 at 4:31 PM, Michel Dänzer <michel at daenzer.net> wrote:
> >> > On Tue, 2013-01-29 at 14:43 +0100, Marek Olšák wrote:
> >> >> ---
> >> >> src/mesa/main/formats.c | 30 ++++++++++++++++++++++++++----
> >> >> 1 file changed, 26 insertions(+), 4 deletions(-)
> >> >>
> >> >> diff --git a/src/mesa/main/formats.c b/src/mesa/main/formats.c
> >> >> index 0273425..b86fb9e 100644
> >> >> --- a/src/mesa/main/formats.c
> >> >> +++ b/src/mesa/main/formats.c
> >> [...]
> >> >> @@ -3264,12 +3270,17 @@ _mesa_format_matches_format_and_type(gl_format gl_format,
> >> >>          return GL_FALSE;
> >> >>
> >> >>    case MESA_FORMAT_SIGNED_R16:
> >> >> +      return format == GL_RED && type == GL_SHORT && littleEndian &&
> >> >> +         !swapBytes;
> >> >>    case MESA_FORMAT_SIGNED_GR1616:
> >> >> +      return format == GL_RG && type == GL_SHORT && littleEndian && !swapBytes;
> >> >
> >> > GL_SHORT is in host byte order, so checking for littleEndian here
> >> > incorrectly excludes big endian hosts.
> >>
> >> Does that apply only to X16, or even to X16Y16, or even to X16Y16Z16W16?
> >
> > Hmm. AFAICT MESA_FORMAT_*1616 are currently defined as 32 bit packed
> > values, so the line you added for MESA_FORMAT_SIGNED_GR1616 is actually
> > correct.
> >
> > OTOH e.g. MESA_FORMAT_RGBA_16 appears to be defined as an array of 16
> > bit values, so that could be treated the same way as
> > MESA_FORMAT_SIGNED_R16.
> >
> > I wonder if it wouldn't make sense to replace MESA_FORMAT_*1616 with
> > array based formats as well. AFAICT there's nothing like GL_INT_16_16 in
> > OpenGL.
>
> I don't really get this distinction between array and non-array based
> formats. Most Mesa formats map to Gallium formats and most of Gallium
> ones are array-based regardless of how Mesa formats are defined,
> right?
If only it were that simple. :\ Keep in mind that many Mesa format
definitions predate Gallium.
> As a matter of fact, I added lots of Mesa formats over the
> years, e.g. MESA_FORMAT_SIGNED_GR1616, which may look packed to
> you, but all I ever cared about was that it was equivalent to
> PIPE_FORMAT_R16G16_SNORM and I gave it a name which was consistent
> with the naming of other Mesa formats. The way I understand it, the
> component order in names of Mesa formats is written the other way
> around compared to gallium formats. E.g. MESA_FORMAT_RGBA8888 =
> PIPE_FORMAT_A8B8G8R8_UNORM. That's the case with most formats, except
> deviations in naming like MESA_FORMAT_RGBA_16, where the component
> order matches gallium.
I don't really care about the names (as the saying goes, a rose by any
other name...). If you look at the entry in formats.h:
MESA_FORMAT_SIGNED_GR1616, /* GGGG GGGG GGGG GGGG RRRR RRRR RRRR RRRR */
and at pack_float_SIGNED_GR1616():
   GLuint *d = (GLuint *) dst;
   GLshort r = FLOAT_TO_SHORT(CLAMP(src[RCOMP], -1.0f, 1.0f));
   GLshort g = FLOAT_TO_SHORT(CLAMP(src[GCOMP], -1.0f, 1.0f));
   *d = (g << 16) | (r & 0xffff);
This defines the format as a 32 bit packed value, with the R component
stored in the least significant bits and the G component stored in the
most significant bits. This definition differs between little and big
endian hosts, even ignoring the encoding of the components themselves.
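To make the difference concrete, here's a minimal standalone sketch (not
Mesa code, example values made up) that stores R/G the way
pack_float_SIGNED_GR1616() does and dumps the resulting bytes:

   #include <stdint.h>
   #include <stdio.h>

   int main(void)
   {
      /* Arbitrary example components. */
      int16_t r = 0x1122, g = 0x3344;

      /* Same packed store as pack_float_SIGNED_GR1616(): G in the most
       * significant 16 bits, R in the least significant 16 bits. */
      uint32_t d = ((uint32_t)(uint16_t)g << 16) | (uint16_t)r;
      const uint8_t *b = (const uint8_t *)&d;

      /* Little endian host: 22 11 44 33 (R in the first two bytes)
       * Big endian host:    33 44 11 22 (G in the first two bytes) */
      printf("%02x %02x %02x %02x\n", b[0], b[1], b[2], b[3]);
      return 0;
   }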
However, as you pointed out, PIPE_FORMAT_R16G16_SNORM, which is what the
state tracker currently uses for MESA_FORMAT_SIGNED_GR1616, is indeed an
array format. And as I pointed out, AFAICT there is no way in OpenGL to
specify a packing that exactly matches the Mesa definition, as there is
no GL_INT_16_16(_REV). In that regard, at least Gallium would be
consistent with OpenGL, if it weren't for the components themselves being
encoded as little endian in Gallium but in host byte order in OpenGL...
So in this example, all three definitions are slightly different on big
endian hosts, but happen to be the same on little endian hosts.
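Concretely, taking R = 0x1122 and G = 0x3344 as an example, the bytes
in memory would be (if I haven't messed anything up):

                                 little endian host   big endian host
   Mesa packed 32 bit value:     22 11 44 33          33 44 11 22
   PIPE_FORMAT_R16G16_SNORM:     22 11 44 33          22 11 44 33
   OpenGL GL_RG + GL_SHORT:      22 11 44 33          11 22 33 44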
Does this make it a little clearer what I was talking about?
It's easy to be unaware of these issues in a little-endian-only
environment, which tends to result in a mixture of code that
effectively assumes either little endian or CPU-native byte order.
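FWIW, for formats which really are arrays of 16 bit values in host byte
order, I'd expect the matching check to simply drop the littleEndian
test, something like (untested):

   case MESA_FORMAT_SIGNED_R16:
      /* Array of 16 bit values in host byte order; GL_SHORT is in host
       * byte order as well, so the match can't depend on endianness. */
      return format == GL_RED && type == GL_SHORT && !swapBytes;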
--
Earthling Michel Dänzer | http://www.amd.com
Libre software enthusiast | Debian, X and DRI developer