[Mesa-dev] Gallium pixel formats on big-endian
e0425955 at student.tuwien.ac.at
Thu Jan 31 04:02:08 PST 2013
On 31.01.2013 09:30, Michel Dänzer wrote:
> On Mit, 2013-01-30 at 08:35 -0800, Jose Fonseca wrote:
>> ----- Original Message -----
>>> On Mit, 2013-01-30 at 06:12 -0800, Jose Fonseca wrote:
>>>> ----- Original Message -----
>>>>> On Mon, 2013-01-28 at 06:56 -0500, Adam Jackson wrote:
>>>>>> I've been looking at untangling the pixel format code for
>>>>>> big-endian. My current theory is that blindly byte-swapping values
>>>>>> is just wrong.
>>>>> Certainly. :) I think you're discovering that this hasn't really been
>>>>> thought through beyond what's necessary for things to work with a
>>>>> little endian CPU and GPU. Any code there is for dealing with big
>>>>> endian has been bolted on as an afterthought.
>>>> My memory is a bit fuzzy, but I thought that we decided that Gallium
>>>> formats were always defined in terms of little-endian, which is why
>>>> they all need to be byte-swapped. The state tracker was the one
>>>> responsible for translating endian-neutral API formats into the
>>>> non-neutral Gallium formats.
>>> I know that was the suggested solution when this was discussed
>>> previously, but I'm still not really convinced that it cuts it. Just
>>> for one example, last time, in
>>> 864e97f3-352a-4fdb-9bb7-6d41a1969ccd at zimbra-prod-mbox-2.vmware.com,
>>> you seemed to agree it doesn't make sense for vertex elements.
>> I couldn't find it by id, but I think you mean:
>> Yes, that's right. (I did say my memory was fuzzy :)
> Yeah, that's what I was referring to.
>>> For another example (which I suspect is more relevant for this
>>> discussion), wouldn't it be nice if the software rendering drivers
>>> could directly represent the window system renderbuffer format as a
>>> Gallium format in all cases?
>> I'm missing your point, could you give an example of where that's
>> currently not possible?
> E.g. an XImage of depth 16, where the pixels are generally packed in big
> endian if the X server runs on a big endian machine. It's impossible to
> represent that with PIPE_FORMAT_*5*6*5_UNORM packed in little endian.
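The depth-16 XImage case above can be sketched concretely. This is an illustration only (hypothetical pixel values, Python's struct module standing in for the two hosts' byte orders): a big-endian X server stores the 16-bit pixel MSB-first, so a format defined as little-endian-packed decodes the wrong channels.

```python
import struct

def pack_565(r, g, b):
    """Pack 5:6:5 components into a 16-bit word (r in bits 15..11)."""
    return (r << 11) | (g << 5) | b

def unpack_565(word):
    """Unpack a 16-bit 5:6:5 word back into (r, g, b)."""
    return (word >> 11) & 0x1F, (word >> 5) & 0x3F, word & 0x1F

pixel = pack_565(0x1F, 0x00, 0x00)      # pure red

# A big-endian X server stores the pixel most-significant byte first...
be_bytes = struct.pack(">H", pixel)

# ...but a format defined as "packed little-endian" reads it LSB-first:
(misread,) = struct.unpack("<H", be_bytes)
print(unpack_565(misread))              # -> (0, 7, 24), not red any more
```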
>>> I can't help feeling it would be better to treat endianness explicitly
>>> rather than implicitly in the format description, so drivers and state
>>> trackers could choose to use little/big/native/foreign endian formats
>>> as appropriate for the hardware and APIs they're dealing with.
>> What do you mean by explicitly vs implicitly? Do you mean r5g6b5_be,
>> r5g6b5_le, r32g32b32a32_unorm_le, r32g32b32a32_unorm_be, etc.?
> Yeah, something like that, with the byte order only applying within each
> component for array formats.
It's a bit tricky: formats with 16 and 32 bits per component seem to
already have their components ordered in the host byte order (if the GPU
endianness switch is set to that).
At least I think that has to be the case, because we certainly don't
byte-swap all the vertex data and things work, and that suggests it
affects colours too, or using vertex buffers as textures wouldn't work.
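To see why un-swapped vertex data implies host byte order, consider how the same 32-bit float lands in memory on the two kinds of host (an illustration, not Gallium code; Python's struct module emulates the two layouts):

```python
import struct

x = 1.0  # IEEE-754 single: 0x3F800000

le = struct.pack("<f", x)   # bytes as a little-endian host writes them
be = struct.pack(">f", x)   # bytes as a big-endian host writes them

# The two hosts produce different byte sequences for the same value, so a
# GPU that fetches un-swapped vertex buffers and still renders correctly
# must be interpreting the components in host byte order.
print(le.hex())             # -> 0000803f
print(be.hex())             # -> 3f800000
```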
Formats like B8G8R8A8_UNORM or R5G6B5_UNORM might be treated as a word
by the GPU (not sure, I don't have a BE machine), and thus byte order
could differ between BE and LE.
So RGBA32_UNORM_LE/BE doesn't seem useful (or at least not usable, since
the endian switch usually isn't per RT/texture/vertex buffer/...), and
RGBA8_LE/BE would be confusing, because we have the rule (*) that the
component written to the left resides at a lower address. What would
RGBA8_BE be? R at lowest address on BE and A at lowest address on LE?
Just use ABGR8 on LE then.
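The rule (*) and the "just use ABGR8" point can be shown with a small sketch (hypothetical component values; Python bytes stand in for GPU memory): an array format like RGBA8 fixes the byte layout regardless of host endianness, and reinterpreting those bytes as a word only changes what the packed name would be.

```python
import struct

r, g, b, a = 0x11, 0x22, 0x33, 0x44

# Rule (*): in an array format such as RGBA8, the component written to
# the left (R) resides at the lowest address -- on any host.
rgba8 = bytes([r, g, b, a])

# Reading those same bytes as one 32-bit word depends on the reader's
# endianness; on LE the word has A in the high bits (an "ABGR"-shaped
# packed word), on BE it has R in the high bits.
(word_le,) = struct.unpack("<I", rgba8)
(word_be,) = struct.unpack(">I", rgba8)
print(hex(word_le))   # -> 0x44332211
print(hex(word_be))   # -> 0x11223344
```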
Otoh, adding G3B5R5G3 (the byte-swapped version of R5G6B5) would look a
bit awkward, but would be correct gallium terminology on BE if the format
is treated as a 16-bit word. That's probably how the inconsistent naming
we had in gallium in the past got started in the first place.
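Why the byte-swapped R5G6B5 word comes out as G3B5R5G3 can be checked with a few lines of bit arithmetic (an illustration with made-up component values): swapping the two bytes of the 16-bit word moves the low 3 bits of G and the 5 bits of B into the high byte, giving the G3/B5/R5/G3 field order.

```python
def pack_r5g6b5(r, g, b):
    """Pack 5:6:5 into a 16-bit word, R in bits 15..11."""
    return (r << 11) | (g << 5) | b

def bswap16(w):
    """Swap the two bytes of a 16-bit word."""
    return ((w >> 8) | (w << 8)) & 0xFFFF

def unpack_g3b5r5g3(w):
    """Decode bits 15..13 = G low 3, 12..8 = B, 7..3 = R, 2..0 = G high 3."""
    g_lo = (w >> 13) & 0x7
    b    = (w >> 8) & 0x1F
    r    = (w >> 3) & 0x1F
    g_hi = w & 0x7
    return r, (g_hi << 3) | g_lo, b

w = pack_r5g6b5(0x1F, 0x2A, 0x15)
# Reading the swapped word with a G3B5R5G3 layout recovers the components:
print(unpack_g3b5r5g3(bswap16(w)))   # -> (31, 42, 21)
```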
Maybe we should just define some formats as affected by endianness,
because G3B5R5G3 certainly won't be supported if the GPU is set to LE.
But then the rule (*) has to be stated with respect to a specific
endianness.
Let me think about that some more ...