[Intel-gfx] [PATCH 5/6] drm/i915/bdw+: Implement color management
Daniel Stone
daniel at fooishbar.org
Fri Dec 18 08:53:50 PST 2015
Hi,
On 17 December 2015 at 18:57, Lionel Landwerlin
<lionel.g.landwerlin at intel.com> wrote:
> @@ -289,22 +289,30 @@ static const struct intel_device_info intel_haswell_m_info = {
> static const struct intel_device_info intel_broadwell_d_info = {
> HSW_FEATURES,
> .gen = 8,
> + .num_samples_after_ctm = BDW_SPLITGAMMA_MAX_VALS,
> + .num_samples_before_ctm = BDW_DEGAMMA_MAX_VALS,
> };
Nitpick, I know, but please invert before and after for the sake of my sanity.
> +static void bdw_write_8bit_gamma_legacy(struct drm_device *dev,
> + struct drm_r32g32b32 *correction_values, i915_reg_t palette)
> +{
> + u16 blue_fract, green_fract, red_fract;
> + u32 blue, green, red;
> + u32 count = 0;
> + u32 word = 0;
> + struct drm_i915_private *dev_priv = dev->dev_private;
> +
> + while (count < BDW_8BIT_GAMMA_MAX_VALS) {
> + blue = correction_values[count].b32;
> + green = correction_values[count].g32;
> + red = correction_values[count].r32;
> +
> + /*
> + * Maximum possible gamma correction value supported
> + * for BDW is 0xFFFFFFFF, so clamp the values accordingly
> + */
Eh? 0xFFFFFFFF != (1 << 24) - 1.
> + if (blue >= BDW_MAX_GAMMA)
> + blue = BDW_MAX_GAMMA;
> + if (green >= BDW_MAX_GAMMA)
> + green = BDW_MAX_GAMMA;
> + if (red >= BDW_MAX_GAMMA)
> + red = BDW_MAX_GAMMA;
'>' rather than '>='.
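Or just use min_t(), assuming BDW_MAX_GAMMA stays the 24-bit maximum
(sketch, not tested):

    /* min_t() expresses the clamp without the redundant compare */
    blue = min_t(u32, blue, BDW_MAX_GAMMA);
    green = min_t(u32, green, BDW_MAX_GAMMA);
    red = min_t(u32, red, BDW_MAX_GAMMA);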
> + /*
> + * Gamma correction values are sent in 8.24 format
> + * with 8 int and 24 fraction bits. BDW 10 bit gamma
> + * unit expects correction registers to be programmed in
> + * 0.10 format, with 0 int and 16 fraction bits. So take
'0.10 format, with 0 int and 16 fraction bits'? 0.10 would be 10
fraction bits, not 16; the comment contradicts itself.
> + * MSB 10 bit values(bits 23-14) from the fraction part and
> + * prepare the correction registers.
> + */
The comment here is helpful, but colour me confused: we just discard
the 8 integer bits anyway? Why do they exist? Every time I think I've
worked out which of [0,1.0] and [0,8.0) is the correct range, I
realise I'm wrong.
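As far as I can tell, what you actually want here, assuming the 8.24
input really is clamped to [0,1.0], boils down to (sketch):

    /*
     * Drop the integer byte entirely and keep the top 10 bits of
     * the 24-bit fraction, i.e. bits 23:14 of the 8.24 input.
     */
    u16 fract_10bit = (value >> 14) & 0x3ff;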
> +static void bdw_write_12bit_gamma_precision(struct drm_device *dev,
> + struct drm_r32g32b32 *correction_values, i915_reg_t pal_prec_data,
> + enum pipe pipe)
Why does this take a pipe when none of the others do, and why does it
not take a num_coeff, again unlike the others?
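For consistency I'd expect something like (sketch, names as in the
other helpers):

    static void bdw_write_12bit_gamma_precision(struct drm_device *dev,
                    struct drm_r32g32b32 *correction_values,
                    i915_reg_t pal_prec_data, u32 num_coeff);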
> + /* Program first 512 values in precision palette */
> + while (count < BDW_12BIT_GAMMA_MAX_VALS - 1) {
Um, this is a for loop.
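i.e.:

    for (count = 0; count < BDW_12BIT_GAMMA_MAX_VALS - 1; count++) {
            /* program pal_prec_data as before */
    }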
> + /*
> + * Maximum possible gamma correction value supported
> + * for BDW is 0xFFFFFFFF, so clamp the values accordingly
> + */
Again, not 0xffffffff.
> + /*
> + * Framework's general gamma format is 8.24 (8 int 16 fraction)
'8 int 16'?! 8.24 would be 8 int and 24 fraction bits.
> + * BDW Platform's supported gamma format is 16 bit correction
> + * values in 0.16 format. So extract higher 16 fraction bits
> + * from 8.24 gamma correction values.
Isn't this the 12-bit mode? Colour me stupid, but what makes this
12-bit rather than 16-bit? Anyway.
> + */
> + red_fract = GET_BITS(red, 8, 16);
> + green_fract = GET_BITS(green, 8, 16);
> + blue_fract = GET_BITS(blue, 8, 16);
Okay, at least the range is consistent here - still [0,1.0], with the
integer component totally ignored.
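Assuming GET_BITS(v, 8, 16) means 'extract 16 bits starting at bit 8',
that boils down to (sketch):

    /* keep the top 16 of the 24 fraction bits, discard the rest */
    red_fract = (red >> 8) & 0xffff;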
> + gamma_data = (struct drm_palette *)blob->data;
> + pipe = to_intel_crtc(crtc)->pipe;
> + num_samples = blob->length / sizeof(struct drm_r32g32b32);
> +
> + pal_prec_index = _MMIO(_PREC_PAL_INDEX(pipe));
> + pal_prec_data = _MMIO(_PREC_PAL_DATA(pipe));
> + correction_values = (struct drm_r32g32b32 *)&gamma_data->lut;
> +
> + switch (num_samples) {
> + case GAMMA_DISABLE_VALS:
> + /* Disable Gamma functionality on Pipe */
> + DRM_DEBUG_DRIVER("Disabling gamma on Pipe %c\n",
> + pipe_name(pipe));
> + if ((mode & GAMMA_MODE_MODE_MASK) == GAMMA_MODE_MODE_12BIT)
> + bdw_reset_gamma(dev_priv, pipe);
> + state->palette_after_ctm_blob = NULL;
> + word = GAMMA_MODE_MODE_8BIT;
> + break;
Right, so we program the gamma unit to 8-bit mode (sensible!), and
write a linear palette (sensible!) ... but only if we were previously
in 12-bit gamma mode. What? So, if I start with 10-bit gamma and then
disable the gamma unit, my gamma won't be disabled, but will be an
8-bit interpretation of the previous 10-bit gamma ramp? Ouch. Also
broken when going from 8-bit -> disabled. Surely just make that check
unconditional.
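Something like this (sketch, assuming bdw_reset_gamma() loads a
linear ramp, as it appears to):

    case GAMMA_DISABLE_VALS:
            /* Load a linear palette and drop back to 8-bit mode,
             * whatever mode we were in before. */
            bdw_reset_gamma(dev_priv, pipe);
            state->palette_after_ctm_blob = NULL;
            word = GAMMA_MODE_MODE_8BIT;
            break;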
(Again, this should also be blob == NULL, not a blob with length 0
which is subsequently leaked. Wait, how do you even create a length-0
blob ... ?!)
> + case BDW_8BIT_GAMMA_MAX_VALS:
> + /* Legacy palette */
> + bdw_write_8bit_gamma_legacy(dev, correction_values,
> + LGC_PALETTE(pipe, 0));
> + word = GAMMA_MODE_MODE_8BIT;
> + break;
> +
> + case BDW_SPLITGAMMA_MAX_VALS:
> + index = num_samples;
> + index |= BDW_INDEX_AUTO_INCREMENT | BDW_INDEX_SPLIT_MODE;
> + I915_WRITE(pal_prec_index, index);
> + bdw_write_10bit_gamma_precision(dev, correction_values,
> + pal_prec_data, BDW_SPLITGAMMA_MAX_VALS);
> + word = GAMMA_MODE_MODE_SPLIT;
> + break;
> +
> + case BDW_12BIT_GAMMA_MAX_VALS:
> + index = BDW_INDEX_AUTO_INCREMENT;
> + I915_WRITE(pal_prec_index, index);
> + bdw_write_12bit_gamma_precision(dev, correction_values,
> + pal_prec_data, pipe);
> + word = GAMMA_MODE_MODE_12BIT;
> + break;
> +
> + case BDW_10BIT_GAMMA_MAX_VALS:
> + index = BDW_INDEX_AUTO_INCREMENT;
> + I915_WRITE(pal_prec_index, index);
> + bdw_write_10bit_gamma_precision(dev, correction_values,
> + pal_prec_data, BDW_10BIT_GAMMA_MAX_VALS);
> + word = GAMMA_MODE_MODE_10BIT;
> + break;
This is pretty upsetting; we appear to have:
- 8-bit gamma: length 256 on BDW and CHV (consistency! but CHV seems
to just use 10-bit anyway)
- 10-bit gamma: length 257 on CHV, 1024 on BDW (or 512 for split)
- 12-bit gamma: length 513 on BDW
How is generic userspace supposed to determine this? We will never
have these kinds of per-device lookup tables. Using semi-arbitrary
length values to determine this is absolutely the wrong approach. How
about exposing it through properties? Maybe declare
PALETTE_AFTER_CTM_8BIT (hardcoded to 256), PALETTE_AFTER_CTM_10BIT,
etc, with the lengths required? Or maybe an array of values, sorted in
ascending order of entry depth (i.e. [ 8BIT_LENGTH, 10BIT_LENGTH,
12BIT_LENGTH ]).
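To illustrate what I mean, a sketch with made-up property names, on
top of the existing immutable range-property API:

    /* Hypothetical: advertise the supported after-CTM palette
     * lengths so generic userspace doesn't need per-device tables. */
    struct drm_property *prop;

    prop = drm_property_create_range(dev, DRM_MODE_PROP_IMMUTABLE,
                                     "PALETTE_AFTER_CTM_10BIT_LENGTH",
                                     BDW_10BIT_GAMMA_MAX_VALS,
                                     BDW_10BIT_GAMMA_MAX_VALS);
    if (prop)
            drm_object_attach_property(&crtc->base, prop,
                                       BDW_10BIT_GAMMA_MAX_VALS);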
Is split-gamma mode actually bound to 10-bit (and that table length)
only, or is it independent of mode? I don't really know what to
suggest about that one.
> + switch (num_samples) {
> + case GAMMA_DISABLE_VALS:
> + /* Disable degamma on Pipe */
> + mode = I915_READ(GAMMA_MODE(pipe)) & ~GAMMA_MODE_MODE_MASK;
> + I915_WRITE(GAMMA_MODE(pipe), mode | GAMMA_MODE_MODE_8BIT);
Why aren't these writes coalesced as they are in bdw_set_gamma?
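i.e. set a mode word in each case and do a single read-modify-write
after the switch:

    mode = I915_READ(GAMMA_MODE(pipe)) & ~GAMMA_MODE_MODE_MASK;
    I915_WRITE(GAMMA_MODE(pipe), mode | word);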
> + state->palette_before_ctm_blob = NULL;
> + DRM_DEBUG_DRIVER("Disabling degamma on Pipe %c\n",
> + pipe_name(pipe));
> + break;
blob == NULL.
> + case BDW_SPLITGAMMA_MAX_VALS:
> + pal_prec_index = _MMIO(_PREC_PAL_INDEX(pipe));
> + pal_prec_data = _MMIO(_PREC_PAL_DATA(pipe));
> + correction_values = degamma_data->lut;
> +
> + index = BDW_INDEX_AUTO_INCREMENT | BDW_INDEX_SPLIT_MODE;
> + I915_WRITE(pal_prec_index, index);
> +
> + bdw_write_10bit_gamma_precision(dev,
> + correction_values,
> + pal_prec_data,
> + BDW_SPLITGAMMA_MAX_VALS);
> +
> + /* Enable degamma on Pipe */
> + mode = I915_READ(GAMMA_MODE(pipe));
> + mode &= ~GAMMA_MODE_MODE_MASK;
> + I915_WRITE(GAMMA_MODE(pipe), mode | GAMMA_MODE_MODE_SPLIT);
> + DRM_DEBUG_DRIVER("degamma correction enabled on Pipe %c\n",
> + pipe_name(pipe));
> + break;
Ouch, so it's 8-bit or split, nothing else. Again, that's going to be
pretty hard for generic userspace to figure out.
> +static uint32_t bdw_prepare_csc_coeff(int64_t coeff)
CHV is using s64 rather than int64_t here.
> +#define BDW_MAX_GAMMA ((1 << 24) - 1)
Well, at least this is consistent with CHV.
Cheers,
Daniel