[Mesa-dev] Gallium pixel formats on big-endian
Michel Dänzer
michel at daenzer.net
Thu Jan 31 00:30:26 PST 2013
On Mit, 2013-01-30 at 08:35 -0800, Jose Fonseca wrote:
>
> ----- Original Message -----
> > On Mit, 2013-01-30 at 06:12 -0800, Jose Fonseca wrote:
> > >
> > > ----- Original Message -----
> > > > On Mon, 2013-01-28 at 06:56 -0500, Adam Jackson wrote:
> > > > > I've been looking at untangling the pixel format code for
> > > > > big-endian. My current theory is that blindly byte-swapping
> > > > > values is just wrong.
> > > >
> > > > Certainly. :) I think you're discovering that this hasn't really
> > > > been thought through beyond what's necessary for things to work
> > > > with little endian CPU and GPU. Any code there is for dealing
> > > > with big endian CPUs has been bolted on as an afterthought.
> > >
> > > My memory is a bit fuzzy, but I thought we decided that gallium
> > > formats were always defined in terms of little-endian, which is
> > > why they all need to be byte-swapped. The state tracker was the
> > > one responsible for translating endian-neutral API formats into
> > > the non-neutral gallium ones.
> >
> > I know that was the suggested solution when this was discussed
> > previously, but I'm still not really convinced that cuts it. Just for
> > one example, last time in
> > 864e97f3-352a-4fdb-9bb7-6d41a1969ccd at zimbra-prod-mbox-2.vmware.com
> > you seemed to agree it doesn't make sense for vertex elements.
>
> I couldn't find it by id, but I think you mean:
>
> http://lists.freedesktop.org/archives/mesa-dev/2011-April/007109.html
>
> Yes, that's right. (I did say my memory was fuzzy :)
Yeah, that's what I was referring to.
> > For another example (which I suspect is more relevant for this
> > thread), wouldn't it be nice if the software rendering drivers could
> > directly represent the window system renderbuffer format as a
> > Gallium format in all cases?
>
> I'm missing your point, could you give an example of where that's
> currently not possible?
E.g. an XImage of depth 16, where the pixels are generally packed in
big-endian byte order if the X server runs on a big-endian machine. It's
impossible to represent that with PIPE_FORMAT_*5*6*5_UNORM packed in
little-endian order.
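To make the layout difference concrete, here is a minimal sketch (not Mesa
code; the pixel value and variable names are purely illustrative) of how
the same logical 5-6-5 pixel ends up with different bytes in memory
depending on whether it is packed little- or big-endian:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Logical 16-bit R5G6B5 pixel: R = 0x1f (max), G = 0, B = 0 -> 0xf800 */
    uint16_t pixel = 0xf800;

    /* Little-endian packing, which is what a little-endian-defined
     * PIPE_FORMAT_*5*6*5_UNORM would imply: low byte first in memory. */
    uint8_t le[2] = { (uint8_t)(pixel & 0xff), (uint8_t)(pixel >> 8) };

    /* Big-endian packing, as a depth-16 XImage from a big-endian
     * X server generally uses: high byte first in memory. */
    uint8_t be[2] = { (uint8_t)(pixel >> 8), (uint8_t)(pixel & 0xff) };

    printf("LE bytes: %02x %02x\n", le[0], le[1]);   /* 00 f8 */
    printf("BE bytes: %02x %02x\n", be[0], be[1]);   /* f8 00 */
    return 0;
}

A format definition that is fixed to the first layout simply has no way to
describe a renderbuffer stored in the second one.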
> > I can't help feeling it would be better to treat endianness
> > explicitly rather than implicitly in the format description, so
> > drivers and state trackers could choose to use
> > little/big/native/foreign endian formats as appropriate for the
> > hardware and APIs they're dealing with.
>
> What you mean by explicitly vs implicitly? Do you mean r5g6b5_be,
> r5g6b5_le, r32g32b32a32_unorm_le, r32g32b32a32_unorm_be, etc?
Yeah, something like that, with the byte order only applying within each
component for array formats.
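A hypothetical sketch of what such explicit-endianness format names could
look like (these identifiers are illustrative only and do not exist in
Gallium): packed formats get a byte order for the whole packed value,
while for array formats the byte order only applies within each component:

/* Illustrative only -- not part of the Gallium API. */
enum hypothetical_pipe_format {
    /* Packed formats: the byte order applies to the whole packed
     * 16-bit value, so a little- and a big-endian variant are both
     * needed. */
    HYP_FORMAT_B5G6R5_UNORM_LE,   /* low byte of the 16-bit value first  */
    HYP_FORMAT_B5G6R5_UNORM_BE,   /* high byte of the 16-bit value first */

    /* Array formats: the components always appear in array order in
     * memory, so the byte order only matters within each component. */
    HYP_FORMAT_R32G32B32A32_FLOAT_LE,  /* each 32-bit float stored LE */
    HYP_FORMAT_R32G32B32A32_FLOAT_BE   /* each 32-bit float stored BE */
};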
--
Earthling Michel Dänzer | http://www.amd.com
Libre software enthusiast | Debian, X and DRI developer