[PATCH xserver 1/5] modesetting: Don't call xf86HandleColormaps() at screen depth 30.

Mario Kleiner mario.kleiner.de at gmail.com
Fri Feb 9 13:28:06 UTC 2018


On 02/09/2018 10:45 AM, Michel Dänzer wrote:
> On 2018-02-09 07:24 AM, Mario Kleiner wrote:
>> On 02/08/2018 03:55 PM, Michel Dänzer wrote:
>>> On 2018-02-08 12:14 PM, Mario Kleiner wrote:
>>>> As it turns out, doing so will make any gamma table updates
>>>> silently fail, because xf86HandleColormaps() hooks the
>>>> .LoadPalette function to xf86RandR12LoadPalette() if the
>>>> .gamma_set function is supported by the ddx, as it is in our
>>>> case.
>>>>
>>>> Once xf86RandR12LoadPalette() has been called during server
>>>> startup, all palette and/or gamma table updates go through
>>>> xf86RandR12CrtcComputeGamma() to combine color palette
>>>> updates with gamma table updates into a proper hardware lut
>>>> for upload into the hw via the .gamma_set function/ioctl,
>>>> passing in a palette with palette_red/green/blue_size == the
>>>> size given by the visual's red/green/blueMask, which will
>>>> be == 1024 for a depth 30 screen.
>>>>
>>>> That in turn is a problem, because the size of the hw lut
>>>> crtc->gamma_size is fixed to 256 slots on all kms-drivers
>>>> when using the legacy gamma_set ioctl, but
>>>> xf86RandR12CrtcComputeGamma() can only handle palettes of a
>>>> size <= the hw lut size. As the palette size of 1024 is greater
>>>> than the hw lut size of 256, the code silently fails
>>>> (gamma_slots == 0 in xf86RandR12CrtcComputeGamma()).
>>>>
>>>> Skipping xf86HandleColormaps() on a depth > 24 screen disables
>>>> color palette handling, but keeps at least gamma table updates
>>>> via the xf86vidmode extension and RandR working.
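To illustrate the failure mode described in the quoted commit message,
here is a simplified sketch of the per channel combine step in
xf86RandR12CrtcComputeGamma() (paraphrased from memory, not the literal
server code; types as declared in xf86Crtc.h):

  /* Simplified sketch: combine one color channel of the palette with
   * the gamma ramp into the hw lut of a crtc. Not the exact code. */
  static void
  combine_channel_sketch(xf86CrtcPtr crtc, LOCO *palette,
                         int palette_red_size, CARD16 *gamma_red)
  {
      /* Integer division: 256 hw slots / 1024 palette entries == 0. */
      int gamma_slots = crtc->gamma_size / palette_red_size;
      int i, j;

      for (i = 0; i < palette_red_size; i++) {
          CARD16 value = gamma_red[palette[i].red];

          /* With gamma_slots == 0 this inner loop never executes, so
           * the hw lut is never written and the update is dropped. */
          for (j = 0; j < gamma_slots; j++)
              crtc->gamma_red[i * gamma_slots + j] = value;
      }
  }

Any palette bigger than the hw lut therefore currently ends up as a
silent no-op.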
>>>
>>> Sort of... It means xf86VidMode and RandR call directly into the driver
>>> to set their LUTs, with no coordination between them, so whichever calls
>>> into the driver last wins and clobbers the HW LUT.
>>
>> Wouldn't that be desirable behavior in this case, assuming the client
>> that calls last is the one that wants something more specific?
> 
> If it was that simple... :)
>
> Imagine you use something like Night Light / redshift which uses
> xf86VidMode, and some kind of per-monitor gamma calibration tool which
> uses RandR. Surely you want to see the combination of the two
> transformations, not only one or the other, depending on which one has
> ended up calling into the driver last (which could result in
> flip-flopping between the two transformations, depending on how often
> each tool (re-)sets its LUT).
> 

Ok, thanks for the example. That makes sense. And it creates a new 
potential horror scenario for me, given that my scientific users need 
very well-defined gamma calibrations in many situations. I never thought 
about redshift or similar tools messing with gamma behind the back of my 
RandR-based code via combined gamma LUTs. The pitfalls never end...
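
If I understand the combination right, with xf86HandleColormaps() in
place the vidmode ramp and the RandR / colormap lut get composed into
the single hw lut the kernel sees, conceptually something like this
(hypothetical helper, purely to illustrate what "combination of the two
transformations" means, not actual server code):

  /* Hypothetical illustration: feed one transform through the other,
   * per color channel, to build the final hw lut. */
  static void
  compose_luts(const CARD16 *calib, const CARD16 *ramp,
               CARD16 *hw, int size)
  {
      int i;

      for (i = 0; i < size; i++) {
          /* Map the calibration output to an index into the ramp. */
          int idx = calib[i] * (size - 1) / 65535;
          hw[i] = ramp[idx];
      }
  }

Which means a calibration tool can no longer assume its lut lands in
the hardware unmodified.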

> 
>>> It would be better to fix xf86RandR12CrtcComputeGamma instead.
>>
>> But how? The current code can upsample an input palette < hw lut by
>> replicating input values onto multiple hw slots, which makes perfect
>> sense. But downsampling a bigger input palette, e.g., by just picking
>> every n'th slot, seems problematic to me. You lose information from the
>> palette in a way that might not be at all what the application wanted if
>> it went to the trouble of creating a custom palette, assuming any apps
>> would do anything meaningful with palettes on a depth 30 screen in the
>> first place.
> 
> You're right that the downsampling could theoretically be a problem. But
> in practice, the LUTs should usually correspond to some kind of more or
> less smooth curve, and I'm told display hardware tends to interpolate
> between the LUT entries.
> 

Depending on hw lut mode.

> 
>> Unless the application just used the default palette or a
>> truecolor visual, like probably most do nowadays, which I assume would
>> be an identity mapping, so it doesn't matter if a palette is applied at
>> all?
> 
> No colormaps means e.g. the gamma sliders in some games don't work.
> 
> 

Ok. I'll see if I can come up with some "take every n'th slot" 
subsampling in xf86RandR12CrtcComputeGamma, and some switching in the 
modesetting ddx's call to xf86HandleColormaps, a la

xf86HandleColormaps(pScreen, 1 << sigRgbBits, 10, ...)

to switch maxColors between 256 and 1024 for depth 24 vs. 30.
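
Roughly what I have in mind for the xf86RandR12CrtcComputeGamma() side,
as an untested sketch that would sit inside the existing function body
(existing replication path plus a new "every n'th slot" path; i and j
are the existing loop counters):

  if (crtc->gamma_size >= palette_red_size) {
      /* Existing path: replicate each palette entry into one or more
       * hw lut slots. */
      int gamma_slots = crtc->gamma_size / palette_red_size;

      for (i = 0; i < palette_red_size; i++) {
          CARD16 value = gamma_red[palette[i].red];

          for (j = 0; j < gamma_slots; j++)
              crtc->gamma_red[i * gamma_slots + j] = value;
      }
  } else {
      /* New path: subsample a palette bigger than the hw lut by taking
       * every n'th slot, e.g. n == 4 for a 1024 entry palette and a
       * 256 slot hw lut. */
      int stride = palette_red_size / crtc->gamma_size;

      for (i = 0; i < crtc->gamma_size; i++)
          crtc->gamma_red[i] = gamma_red[palette[i * stride].red];
  }

And on the modesetting side something along these lines (the NULL
callbacks and the flags are placeholders from memory, not necessarily
what the driver passes today):

  int bpc = (pScrn->depth == 30) ? 10 : 8;  /* bits per color channel */

  if (!xf86HandleColormaps(pScreen, 1 << bpc, bpc, NULL, NULL,
                           CMAP_PALETTED_TRUECOLOR |
                           CMAP_RELOAD_ON_MODE_SWITCH))
      return FALSE;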

That creates a bit of a problem for the intel and nouveau ddx though, as 
they need the same treatment, but also need to work on both older servers 
and the upcoming server 1.20. Or they'd need special-case handling 
depending on the server version.
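
The compile time special casing could presumably look something like
this (sketch, assuming the version macros from xorgVersion.h; the exact
xf86HandleColormaps() arguments and the flags variable are placeholders):

  /* Needs xorgVersion.h for XORG_VERSION_CURRENT / XORG_VERSION_NUMERIC. */
  #if XORG_VERSION_CURRENT >= XORG_VERSION_NUMERIC(1, 20, 0, 0, 0)
      /* New enough server: it can handle a palette bigger than the hw
       * lut, so pass the real, depth dependent palette size. */
      int bpc = (pScrn->depth == 30) ? 10 : 8;

      if (!xf86HandleColormaps(pScreen, 1 << bpc, bpc, NULL, NULL, flags))
          return FALSE;
  #else
      /* Older servers: skip xf86HandleColormaps() at depth 30 entirely,
       * so gamma table updates don't silently fail. */
      if (pScrn->depth <= 24 &&
          !xf86HandleColormaps(pScreen, 256, 8, NULL, NULL, flags))
          return FALSE;
  #endif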

-mario

