[Mesa-dev] Gallium pixel formats on big-endian

Jose Fonseca jfonseca at vmware.com
Thu Jan 31 06:42:44 PST 2013



----- Original Message -----
> On Don, 2013-01-31 at 02:14 -0800, Jose Fonseca wrote:
> > ----- Original Message -----
> > > On Mit, 2013-01-30 at 08:35 -0800, Jose Fonseca wrote:
> > > > 
> > > > ----- Original Message -----
> > > > > For another example (which I suspect is more relevant for this
> > > > > thread), wouldn't it be nice if the software rendering drivers
> > > > > could directly represent the window system renderbuffer format
> > > > > as a Gallium format in all cases?
> > > > 
> > > > I'm missing your point, could you give an example of where that's
> > > > currently not possible?
> > > 
> > > E.g. an XImage of depth 16, where the pixels are generally packed
> > > in big endian if the X server runs on a big endian machine. It's
> > > impossible to represent that with PIPE_FORMAT_*5*6*5_UNORM packed
> > > in little endian.
> > 
> > I see.
> > 
> > Is this something that could be worked around?
> 
> Basically anything can be worked around somehow, right? :)

I meant in a satisfactory manner.
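
To illustrate the problem described above -- this is just a standalone
sketch, not Mesa code -- the same 16-bit 5-6-5 pixel value occupies
memory in opposite byte order depending on the host, so a format
defined purely by its little-endian byte layout cannot describe the
pixels a big-endian X server produces:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
   /* Standalone illustration, not Mesa code.
    * Pure red in 5-6-5: r = 0x1f, g = 0, b = 0 -> 16-bit value 0xf800. */
   uint16_t pixel = (0x1fu << 11) | (0x00u << 5) | 0x00u;
   uint8_t bytes[2];

   memcpy(bytes, &pixel, sizeof pixel);

   /* Little-endian host prints "00 f8"; big-endian host prints "f8 00". */
   printf("%02x %02x\n", bytes[0], bytes[1]);
   return 0;
}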

> But in this example, it seems like it would require some kind of
> sideband information to specify that PIPE_FORMAT_*5*6*5_UNORM actually
> has the reversed byte order now, and some layer of the stack to use
> that and swap the bytes accordingly. So, extra copies and an extra
> information channel (and possibly a layering violation).

I was thinking more along the lines of blacklisting non-RGBX8 visuals
on the X server on BE platforms when GLX is enabled for this hardware.
I mean, is it even worth supporting?

> > > > > I can't help feeling it would be better to treat endianness
> > > > > explicitly rather than implicitly in the format description, so
> > > > > drivers and state trackers could choose to use
> > > > > little/big/native/foreign endian formats as appropriate for the
> > > > > hardware and APIs they're dealing with.
> > > > 
> > > > What do you mean by explicitly vs. implicitly? Do you mean
> > > > r5g6b5_be, r5g6b5_le, r32g32b32a32_unorm_le,
> > > > r32g32b32a32_unorm_be, etc.?
> > > 
> > > Yeah, something like that, with the byte order only applying
> > > within each component for array formats.
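
As a purely hypothetical sketch of what that could look like (none of
these tokens exist in Gallium; the names just mirror the examples
above), packed formats would carry the byte order of the whole pixel,
while for array formats the suffix would only describe the byte order
within each component:

/* Hypothetical names, for illustration only. */
enum explicit_endian_format {
   R5G6B5_UNORM_LE,         /* packed 16-bit pixel, stored little-endian */
   R5G6B5_UNORM_BE,         /* packed 16-bit pixel, stored big-endian */
   R32G32B32A32_UNORM_LE,   /* array format: each 32-bit component LE */
   R32G32B32A32_UNORM_BE,   /* array format: each 32-bit component BE */
};
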
> > 
> > I don't oppose that. But it does seem a lot of work.
> 
> I'm afraid so.
> 
> > How would hardware drivers handle this? Especially those that have a
> > single LE/BE bit to choose?
> 
> I guess drivers would advertise the formats they can and want to
> support given the hardware capabilities and target platforms. For
> drivers which only have to worry about little endian environments,
> basically nothing should change except for the format names and maybe
> other similar details.
> 
> 
> > (BTW, I do believe we should unify Mesa format handling and Gallium's
> > u_format module into a shared external helper library for formats
> > before we venture into that, though, as the effort of doing that
> > would otherwise pretty much double.)
> 
> That might be a good idea. The Mesa format code seems to have grown
> some warts of its own anyway.
> 
> 
> > I think it is also worth considering the other extreme: all formats
> > are expected to be LE on LE platforms, BE on BE platforms.
> 
> Right. I think that might be preferable over LE always, if we decide
> not to support both LE/BE explicitly.
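
Under that convention a format keeps a single name and its byte layout
simply follows the host. A minimal sketch of how code could detect
which layout applies (plain C, not an existing Mesa/Gallium helper):

#include <stdbool.h>
#include <stdint.h>

/* Returns true on a big-endian host, in which case the bytes of a
 * "native endian" packed format would be in big-endian order. */
static bool
host_is_big_endian(void)
{
   const union { uint32_t word; uint8_t bytes[4]; } probe = { 0x01020304u };
   return probe.bytes[0] == 0x01;
}
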
> 
> > Is this feasible, or are there APIs that need (i.e., require) to
> > handle both LE/BE formats?
> 
> Not sure, but my impression has been that APIs tend to prefer the CPU
> native byte order. Anything else makes little sense from an
> application POV. Still, I wouldn't be surprised if there were
> exceptions, e.g. with image/video APIs related to fixed file formats.
> 
> > (Or hardware only capable of LE formats?)
> 
> Unfortunately, our Southern Islands GPUs no longer have facilities for
> byte-swapping vertex / texture data on the fly.
> 
> 
> > If not, would it be feasible to byte-swap at state tracker level?
> 
> That should certainly be feasible for texture data, as that generally
> involves at least one copy anyway. However, it might hurt for streaming
> vertex data. Also, we might have to be careful not to require double
> byte-swapping in cases where simple copies would work or no copies
> would be necessary in the first place.
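
For the texture case, the swap could be folded into the copy the state
tracker already performs. A minimal sketch of such a helper
(hypothetical, not an existing Gallium/u_format function):

#include <stddef.h>
#include <stdint.h>

/* Copy 16-bit pixels (e.g. 5-6-5) while swapping each pixel's bytes,
 * for hardware that only accepts the other byte order. */
static void
copy_swap_u16(uint16_t *dst, const uint16_t *src, size_t count)
{
   for (size_t i = 0; i < count; i++) {
      uint16_t v = src[i];
      dst[i] = (uint16_t)((v << 8) | (v >> 8));
   }
}
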
> 
> 
> > In short, in order to support BE platforms properly, there will be
> > some pain regardless of the approach we take. I really don't feel
> > strongly about any approach -- I just want a level of pain we (i.e.
> > the whole community) can sustain. Because if the "right thing" is
> > onerous and few care, I suspect this will quickly rot or never get
> > completed.
> 
> I agree. I'm hoping that dealing with byte order more explicitly will
> make it less likely to be ignored. But I'm not pretending to know what
> the best solution is. I'm mainly trying to raise awareness of the
> issues.
> 
> 
> --
> Earthling Michel Dänzer           |                  http://www.amd.com
> Libre software enthusiast         |          Debian, X and DRI developer
> 

