[PATCH] drm: fourcc byteorder: brings header file comments in line with reality.

Michel Dänzer michel at daenzer.net
Mon Apr 24 06:57:02 UTC 2017


On 22/04/17 07:05 PM, Ville Syrjälä wrote:
> On Fri, Apr 21, 2017 at 06:14:31PM +0200, Gerd Hoffmann wrote:
>>   Hi,
>>
>>>> My personal opinion is that formats in drm_fourcc.h should be 
>>>> independent of the CPU byte order and the function 
>>>> drm_mode_legacy_fb_format() and drivers depending on that incorrect 
>>>> assumption be fixed instead.
>>>
>>> The problem is this isn't a kernel-internal thing any more.  With the
>>> addition of the ADDFB2 ioctl the fourcc codes became part of the
>>> kernel/userspace abi ...
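
For context, drm_mode_legacy_fb_format() currently maps the legacy
bpp/depth pair straight to little-endian fourccs, roughly like this
(trimmed from drm_fourcc.c; the depth == 30 case and error handling
are omitted):

uint32_t drm_mode_legacy_fb_format(uint32_t bpp, uint32_t depth)
{
        switch (bpp) {
        case 8:
                return DRM_FORMAT_C8;
        case 16:
                return depth == 15 ? DRM_FORMAT_XRGB1555 :
                                     DRM_FORMAT_RGB565;
        case 24:
                return DRM_FORMAT_RGB888;
        case 32:
                return depth == 24 ? DRM_FORMAT_XRGB8888 :
                                     DRM_FORMAT_ARGB8888;
        default:
                /* the real code warns and falls back */
                return DRM_FORMAT_XRGB8888;
        }
}

Note there is no endianness awareness anywhere in that mapping; all of
the returned fourccs are defined as little endian in drm_fourcc.h.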
>>
>> OK, I added some printks to the ADDFB and ADDFB2 code paths and
>> tested a bit.  Apparently pretty much all userspace still uses the
>> ADDFB ioctl: Xorg (modesetting driver) does, and gnome-shell in
>> Wayland mode does.  It seems the big transition to ADDFB2 hasn't
>> happened yet.
>>
>> I guess that makes changing drm_mode_legacy_fb_format + drivers a
>> reasonable option ...
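
For reference, the instrumentation Gerd describes is a one-liner at
the top of each ioctl handler in drm_framebuffer.c, e.g. in
drm_mode_addfb(), where data points to a struct drm_mode_fb_cmd
(sketch; the message text is made up):

        struct drm_mode_fb_cmd *cmd = data;

        printk(KERN_DEBUG "[drm] legacy ADDFB: %ux%u bpp=%u depth=%u\n",
               cmd->width, cmd->height, cmd->bpp, cmd->depth);

plus the same in drm_mode_addfb2(), printing cmd->pixel_format there.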
> 
> Yeah, I came to the same conclusion after chatting with some
> folks on IRC.
> 
> So my current idea is that we change any driver that wants to follow the
> CPU endianness

This isn't really optional for various reasons, some of which have been
covered in this discussion.


> to declare support for big endian formats if the CPU is
> big endian. Presumably these are mostly the virtual GPU drivers.
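
As a sketch of what that might look like in such a driver (the array
name is made up; DRM_FORMAT_BIG_ENDIAN is the existing flag bit from
drm_fourcc.h):

static const uint32_t virtio_gpu_formats[] = {
#ifdef __BIG_ENDIAN
        DRM_FORMAT_XRGB8888 | DRM_FORMAT_BIG_ENDIAN,
#else
        DRM_FORMAT_XRGB8888,
#endif
};

That way a big-endian host advertises the same fourcc with the
big-endian bit set, and addfb2 userspace sees exactly what the virtual
GPU scans out.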
> 
> Additionally we'll make the mapping performed by drm_mode_legacy_fb_format()
> driver controlled. That way drivers that got changed to follow CPU
> endianness can return a framebuffer that matches CPU endianness. And
> drivers that expect the GPU endianness to not depend on the CPU
> endianness will keep working as they do now. The downside is that users
> of the legacy addfb ioctl will need to magically know which endianness
> they will get, but that is apparently already the case. And users of
> addfb2 will keep on specifying the endianness explicitly with
> DRM_FORMAT_BIG_ENDIAN vs. 0.
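
IIUC that would amount to a small per-device wrapper along these lines
(sketch; the prefer_host_byte_order flag is hypothetical):

uint32_t drm_driver_legacy_fb_format(struct drm_device *dev,
                                     uint32_t bpp, uint32_t depth)
{
        uint32_t fmt = drm_mode_legacy_fb_format(bpp, depth);

#ifdef __BIG_ENDIAN
        /* Hypothetical opt-in for drivers that follow CPU endianness. */
        if (dev->mode_config.prefer_host_byte_order &&
            fmt == DRM_FORMAT_XRGB8888)
                fmt |= DRM_FORMAT_BIG_ENDIAN;
#endif

        return fmt;
}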

I'm afraid it's not that simple.

The display hardware of older (pre-R600 generation) Radeon GPUs does not
support the "big endian" formats directly. In order to allow userspace
to access pixel data in native endianness with the CPU, we instead use
byte-swapping functionality that affects only CPU access. This means
that the GPU and the CPU effectively see different representations of
the same video memory contents.
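
On those chips the swapping is enabled via the surface registers,
roughly like this on a big-endian host (simplified from the radeon
driver; only the 32 bpp case is shown):

#ifdef __BIG_ENDIAN
        /* Swap bytes on CPU aperture accesses only; GPU access to the
         * same memory is unaffected. */
        WREG32(RADEON_SURFACE_CNTL,
               RADEON_NONSURF_AP0_SWP_32BPP | RADEON_NONSURF_AP1_SWP_32BPP);
#endif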

Userspace code dealing with GPU access to pixel data needs to know the
format as seen by the GPU, whereas code dealing with CPU access needs to
know the format as seen by the CPU. I don't see any way to express this
with a single format definition.
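
To make that concrete, take DRM_FORMAT_XRGB8888 on a big-endian host
with the swap enabled:

  byte offset:                     0  1  2  3
  GPU / scanout sees:              B  G  R  X  (little-endian XRGB8888)
  CPU via swapped aperture sees:   X  R  G  B  (big-endian XRGB8888)

The same buffer is DRM_FORMAT_XRGB8888 to the GPU and
DRM_FORMAT_XRGB8888 | DRM_FORMAT_BIG_ENDIAN to the CPU; no single
fourcc describes both views.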


-- 
Earthling Michel Dänzer               |               http://www.amd.com
Libre software enthusiast             |             Mesa and X developer
