[Mesa-dev] Gallium pixel formats on big-endian

Alex Deucher alexdeucher at gmail.com
Thu Jan 31 07:22:48 PST 2013


On Thu, Jan 31, 2013 at 9:34 AM, Michel Dänzer <michel at daenzer.net> wrote:
> On Thu, 2013-01-31 at 02:14 -0800, Jose Fonseca wrote:
>> ----- Original Message -----
>> > On Wed, 2013-01-30 at 08:35 -0800, Jose Fonseca wrote:
>> > >
>> > > ----- Original Message -----
>> > > > For another example (which I suspect is more relevant for this
>> > > > thread), wouldn't it be nice if the software rendering drivers
>> > > > could directly represent the window system renderbuffer format
>> > > > as a Gallium format in all cases?
>> > >
>> > > I'm missing your point; could you give an example of where
>> > > that's currently not possible?
>> >
>> > E.g. an XImage of depth 16, where the pixels are generally packed
>> > in big endian if the X server runs on a big endian machine. It's
>> > impossible to represent that with PIPE_FORMAT_*5*6*5_UNORM packed
>> > in little endian.
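
For illustration, here is a minimal C sketch (plain standard C, nothing
Gallium-specific) of how the same 5-6-5 pixel value lands in memory on
each kind of machine:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* R=31, G=0, B=0: pure red in a 5-6-5 packing -> 0xF800 */
        uint16_t pixel = (31u << 11) | (0u << 5) | 0u;
        const uint8_t *bytes = (const uint8_t *)&pixel;

        /* A little-endian CPU stores 0x00 0xF8; a big-endian CPU
         * stores 0xF8 0x00. An XImage from a big-endian X server uses
         * the latter layout, which a little-endian-packed
         * PIPE_FORMAT_*5*6*5_UNORM cannot describe. */
        printf("byte 0: 0x%02x, byte 1: 0x%02x\n", bytes[0], bytes[1]);
        return 0;
    }
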
>>
>> I see.
>>
>> Is this something that could be worked around?
>
> Basically anything can be worked around somehow, right? :)
>
> But in this example, it seems like it would require some kind of
> sideband information to specify that PIPE_FORMAT_*5*6*5_UNORM actually
> has the reversed byte order now, and some layer of the stack to use that
> and swap the bytes accordingly. So, extra copies and an extra
> information channel (and possibly a layering violation).
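
The kind of swapping copy that would mean might look like the following
sketch; the byte_order_reversed flag stands in for the sideband
information and is purely hypothetical:

    #include <stddef.h>
    #include <stdint.h>

    /* Blit 16bpp pixels, swapping each pixel's two bytes when the
     * source byte order is the reverse of what the consumer expects. */
    static void
    copy_r5g6b5(uint16_t *dst, const uint16_t *src, size_t count,
                int byte_order_reversed)
    {
        for (size_t i = 0; i < count; i++) {
            uint16_t p = src[i];
            if (byte_order_reversed)
                p = (uint16_t)((p << 8) | (p >> 8));
            dst[i] = p;
        }
    }
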
>
>
>> > > > I can't help feeling it would be better to treat endianness
>> > > > explicitly rather than implicitly in the format description,
>> > > > so drivers and state trackers could choose to use
>> > > > little/big/native/foreign endian formats as appropriate for
>> > > > the hardware and APIs they're dealing with.
>> > >
>> > > What do you mean by explicitly vs. implicitly? Do you mean
>> > > r5g6b5_be, r5g6b5_le, r32g32b32a32_unorm_le,
>> > > r32g32b32a32_unorm_be, etc.?
>> >
>> > Yeah, something like that, with the byte order only applying
>> > within each component for array formats.
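
To make that concrete, here is a sketch of what such names could look
like; all of these are hypothetical and none of them exist in Gallium
today:

    /* Hypothetical explicit-endianness formats; not real Gallium enums. */
    enum pipe_format_sketch {
        /* Packed formats: the suffix describes the whole 16-bit word. */
        PIPE_FORMAT_B5G6R5_UNORM_LE,
        PIPE_FORMAT_B5G6R5_UNORM_BE,

        /* Array formats: the suffix only applies within each 32-bit
         * component; the components stay in R,G,B,A memory order. */
        PIPE_FORMAT_R32G32B32A32_FLOAT_LE,
        PIPE_FORMAT_R32G32B32A32_FLOAT_BE,
    };
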
>>
>> I don't oppose that. But it does seem like a lot of work.
>
> I'm afraid so.
>
>> How would hardware drivers handle this? Especially those that have
>> a single LE/BE bit to choose?
>
> I guess drivers would advertise the formats they can and want to support
> given the hardware capabilities and target platforms. For drivers which
> only have to worry about little endian environments, basically nothing
> should change except for the format names and maybe other similar
> details.
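
For an LE-only driver, that could be as simple as never reporting the
big-endian variants, e.g. (reusing the hypothetical names from the
sketch above):

    /* Sketch: a driver targeting only little-endian environments never
     * advertises the _BE variants. Uses the hypothetical enum above. */
    static int
    my_screen_supports_format(enum pipe_format_sketch format)
    {
        switch (format) {
        case PIPE_FORMAT_B5G6R5_UNORM_LE:
        case PIPE_FORMAT_R32G32B32A32_FLOAT_LE:
            return 1;  /* layout matches what the hardware reads */
        default:
            return 0;  /* no BE support: simply don't report it */
        }
    }
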
>
>
>> (BTW, I do believe we should unify Mesa format handling and
>> Gallium's u_format module into a shared external helper library for
>> formats before we venture into that, though, as otherwise the effort
>> would pretty much double.)
>
> That might be a good idea. The Mesa format code seems to have grown some
> warts of its own anyway.
>
>
>> I think it is also worth considering the other extreme: all formats
>> are expected to be LE on LE platforms, BE on BE platforms.
>
> Right. I think that might be preferable over LE always, if we decide not
> to support both LE/BE explicitly.
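
In that model a single native token could simply alias the matching
explicit format at build time; another sketch with the same
hypothetical names:

    /* Sketch of "native endian everywhere": one token aliasing the
     * CPU's byte order at build time. The format names are
     * hypothetical. */
    #include <endian.h>  /* glibc; defines __BYTE_ORDER */

    #if __BYTE_ORDER == __BIG_ENDIAN
    #define PIPE_FORMAT_B5G6R5_UNORM_NATIVE PIPE_FORMAT_B5G6R5_UNORM_BE
    #else
    #define PIPE_FORMAT_B5G6R5_UNORM_NATIVE PIPE_FORMAT_B5G6R5_UNORM_LE
    #endif
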
>
>> Is this feasible, or are there APIs that need (i.e., require) to
>> handle both LE/BE formats?
>
> Not sure, but my impression has been that APIs tend to prefer the CPU
> native byte order. Anything else makes little sense from an application
> POV. Still, I wouldn't be surprised if there were exceptions, e.g. with
> image/video APIs related to fixed file formats.
>
>> (Or hardware only capable of LE formats?)
>
> Unfortunately, our Southern Islands GPUs no longer have facilities for
> byte-swapping vertex / texture data on the fly.

The DMA engine still supports endian swaps, so if we used that for
uploads like r600g now does, we could use the facilities there.

Alex

