Big-Endian problem with Fujitsu CoralPA
clemens.koller at anagramm.de
Fri Nov 9 02:30:37 PST 2007
Ian Romanick wrote:
> Clemens Koller wrote:
>> Eduard Fuchs wrote:
>> > A few words about my system: This is a custom-designed board, based on a
>> > PowerPC CPU (MPC7448) and a Marvell MV64560 chipset. Connected to the
>> > PCI bus are a Fujitsu CoralPA (MB86296) graphics chip and a SATA
>> > controller. The OS is Linux with kernel version 2.6.12.
>> > Now the problem – it is an endianness problem. The graphics chip supports
>> > only up to 16-bit color mode and a little-endian byte arrangement. This
>> > causes wrong colors in the picture on the screen. There seems to be no
>> > way (at least, I found none) to correct the byte sequence directly in the
>> > graphics controller. I can activate byte swapping on the PCI controller
>> > (the Marvell chipset supports 32- and 64-bit swapping). Then the colors
>> > are OK, but now I have errors in the picture representation.
>> Oh, sh***... you seem to have the same problem as we had with
>> the Silicon Motion SM501 graphics chip on PowerPC's PCI (which is
>> quite solved now...)
> And the same problem I'm having on XGI XP10.
>> > Is there a possibility to adapt the X server or Xlib so that the
>> > pixel data is corrected by the CPU? As a graphics driver I use the
>> > framebuffer device driver from the Linux kernel.
>> I don't know the CoralPA driver infrastructure of the kernel.
>> The xorg driver, however, could be fixed in the driver source.
>> There is no need to modify the Xlib etc, AFAICT.
>> The SM501 xorg driver was modified for the PPC architecture and is
>> working now (for me). Just for your reference, see the last commits
>> of the xf86-video-smi501 git tree at:
> The xf86SetWeight fix is technically correct. However, it will cause
> *every* application that uses Render to SEGFAULT. I tried that with the
> XP10 driver, and it prevented gnome from being happy. I don't fully
> understand the details of the problem, but keithp and benh do. *shrug*
*shrug* Can somebody please shed some light on how it's
supposed to work?
Usually, the PCI graphics devices are detected properly.
What I remember ;-):
The PCI bus is by definition little-endian, even on big-endian machines
(true on RS6000 / PowerPC / Power Architecture machines).
So, all PCI-mapped registers should be written in little-endian byte
order - i.e. swapped when the PowerPC runs in big-endian mode. There
are instructions in the PowerPC ISA which can do this on the fly - so,
no performance penalty here, if implemented right.
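As a rough C sketch of what such register accesses amount to (on PowerPC the stwbrx/lwbrx byte-reversed store/load instructions, or the kernel's out_le32()-style accessors, do the swap in hardware; the portable swap below just models the effect):

```c
#include <stdint.h>

/* Reverse the byte order of a 32-bit value. */
static uint32_t swap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000ff00U) |
           ((v << 8) & 0x00ff0000U) | (v << 24);
}

/* Write a 32-bit value to a little-endian PCI register from a
 * big-endian CPU: the bytes must be reversed before the store,
 * so that the device sees the value in its native order. */
static void reg_write_le32(volatile uint32_t *reg, uint32_t value)
{
    *reg = swap32(value);
}
```

Done this way (or with the byte-reversed store instruction), the swap costs nothing extra on the register-access path.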
The memory-mapped video RAM for the screen image, plus the hardware-
accelerated stuff (sprites, hardware cursor, ...), should also
get its endianness swapped then, right?
As far as I know, some PowerPCs can also do transparent memory-map
inversions in the MMU (no performance penalty here, either).
But how is that currently realized?
The PCI register accesses are swapped in the driver, but the
video memory I/O needs to be swapped in Xorg, with all the
problems that buys us. From my point of view:
- Font anti-aliasing is broken (BGRA mode vs. ARGB).
- Hardware-accelerated functions are pushed from the driver
to the video RAM in the correct (swapped) order. But non-
accelerated functions are pushed from X to the mmapped
video RAM directly (non-swapped), breaking some things
(fonts, colors, transparency, ...).
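A small illustration (plain C, hypothetical values) of why the colors come out wrong when 16-bit RGB565 pixels reach the framebuffer with their two bytes in the wrong order:

```c
#include <stdint.h>

/* Decode an RGB565 pixel into its 5/6/5-bit color components. */
static void rgb565_decode(uint16_t p, unsigned *r, unsigned *g, unsigned *b)
{
    *r = (p >> 11) & 0x1f;
    *g = (p >> 5)  & 0x3f;
    *b =  p        & 0x1f;
}

/* Swap the two bytes of a 16-bit pixel, as a wrongly-configured
 * bus or framebuffer path effectively does. */
static uint16_t swap16(uint16_t v)
{
    return (uint16_t)((v >> 8) | (v << 8));
}
```

For example, pure red (0xf800) stored byte-swapped becomes 0x00f8, which decodes to r=0, g=7, b=24 - a dark bluish tone instead of red. Every pixel gets mangled the same way, which matches the "wrong color picture" symptom above.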
I would be glad to do some tests and debugging here, but
I am not really happy with the lacking documentation / my limited
knowledge about the details and how it's supposed to be
implemented (in an elegant way = without performance
penalty / limited functionality).
Hey, so many wishes...
R&D Imaging Devices