[PATCH xserver 5/7] xfree86/modes: Adapt xf86Randr12CrtcComputeGamma() for depth 30.

Ville Syrjälä ville.syrjala at linux.intel.com
Tue Feb 20 16:14:33 UTC 2018


On Tue, Feb 20, 2018 at 05:06:32AM +0100, Mario Kleiner wrote:
> At screen depths > 24 bit, the color palettes passed into
> xf86Randr12CrtcComputeGamma() can have a larger number of slots
> than the crtc's hardware lut. E.g., at depth 30, 1024 palette
> slots vs. 256 hw lut slots. This palette size > crtc gamma size
> case is not handled yet and leads to silent failure, so gamma
> table updates do not happen.
> 
> Add a new subsampling path for this case, which only takes
> every n'th slot from the input palette, e.g., at depth 30,
> every 4th slot for a 1024 slot palette vs. 256 slot hw lut.
> 
> This makes lut updates work again, as tested with the xgamma
> utility (which uses the XF86VidMode extension) and some RandR-based
> gamma ramp animation.
> 
> Signed-off-by: Mario Kleiner <mario.kleiner.de at gmail.com>
> ---
>  hw/xfree86/modes/xf86RandR12.c | 99 ++++++++++++++++++++++++++++++------------
>  1 file changed, 72 insertions(+), 27 deletions(-)
> 
> diff --git a/hw/xfree86/modes/xf86RandR12.c b/hw/xfree86/modes/xf86RandR12.c
> index fe8770d..8947622 100644
> --- a/hw/xfree86/modes/xf86RandR12.c
> +++ b/hw/xfree86/modes/xf86RandR12.c
> @@ -1258,40 +1258,85 @@ xf86RandR12CrtcComputeGamma(xf86CrtcPtr crtc, LOCO *palette,
>  
>      for (shift = 0; (gamma_size << shift) < (1 << 16); shift++);
>  
> -    gamma_slots = crtc->gamma_size / palette_red_size;
> -    for (i = 0; i < palette_red_size; i++) {
> -        value = palette[i].red;
> -        if (gamma_red)
> -            value = gamma_red[value];
> -        else
> -            value <<= shift;
> +    if (crtc->gamma_size >= palette_red_size) {
> +        /* Upsampling of smaller palette to larger hw lut size */
> +        gamma_slots = crtc->gamma_size / palette_red_size;
> +        for (i = 0; i < palette_red_size; i++) {
> +            value = palette[i].red;
> +            if (gamma_red)
> +                value = gamma_red[value];
> +            else
> +                value <<= shift;
> +
> +            for (j = 0; j < gamma_slots; j++)
> +                crtc->gamma_red[i * gamma_slots + j] = value;
> +        }
> +    } else {
> +        /* Downsampling of larger palette to smaller hw lut size */
> +        gamma_slots = palette_red_size / crtc->gamma_size;
> +        for (i = 0; i < crtc->gamma_size; i++) {
> +            value = palette[i * gamma_slots].red;

That's not going to reach the max index of the palette, and it won't
work correctly when the sizes aren't a nice integer multiple of each
other. E.g. Intel hw has interpolated gamma modes which have 2^n+1
entries in the LUT.
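For example, with a 1024-entry palette and a 256-entry LUT the loop
above never reads past palette[255 * 4] = palette[1020], and with, say,
a 257-entry interpolated LUT it never reads past
palette[256 * 3] = palette[768], so the top of the palette never
contributes to the programmed ramp.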

So I'm thinking we want this instead:
value = palette[i * (palette_red_size - 1) / (crtc->gamma_size - 1)].red;
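
Untested sketch of how the downsampling branch could look with that
indexing (names as in the patch; assumes crtc->gamma_size > 1 so the
division is safe):

    } else {
        /* Downsample: spread the crtc->gamma_size output slots evenly
         * across the palette, so both the first and the last palette
         * entries get sampled even when the sizes aren't integer
         * multiples of each other. */
        for (i = 0; i < crtc->gamma_size; i++) {
            value = palette[i * (palette_red_size - 1) /
                            (crtc->gamma_size - 1)].red;
            if (gamma_red)
                value = gamma_red[value];
            else
                value <<= shift;

            crtc->gamma_red[i] = value;
        }
    }

That way i == crtc->gamma_size - 1 always maps to
palette[palette_red_size - 1], so the full palette range is covered
regardless of the ratio between the two sizes.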

-- 
Ville Syrjälä
Intel OTC

