[RFC PATCH v2 0/1] Color manager calibration protocol

Erwin Burema e.burema at gmail.com
Wed May 22 03:17:51 UTC 2019


On Tue, 21 May 2019 at 21:07, Sebastian Wick
<sebastian at sebastianwick.net> wrote:
>
> On 2019-05-21 19:32, Erwin Burema wrote:
> > Hi,
> >
> > This is the second version of my color manager calibration protocol.
> > The biggest change is that I now use a surface; this surface is set
> > up in a similar way to the touch screen calibration protocol. Another
> > big change is more (and hopefully better) documentation, and the last
> > big one is a new way to set the calibration curve (plus some info
> > needed to use it). There are some other smaller changes, many
> > regarding formatting.
> >
> > One thing I am not entirely satisfied with is the way I am setting
> > the calibration curve, but I decided not to wait for a brilliant idea
> > to strike me out of the blue and to put it out here so more eyeballs
> > can have a look, and more brains can think about it.
>
> Hi Erwin,
>
> the approach still doesn't make sense to me. Why expose this specific
> part of the color pipeline, the VCGT, to the client? What is the
> advantage over simply passing e.g. a double to the compositor, which
> then does its best to display that value at the highest precision it
> can?
>
> In previous discussions there were two arguments:
> 1. the VCGT might have higher precision than the frame buffer
> 2. you can measure the very thing you will later actually use to show
>    content
>
> Neither of them seems to be valid. The compositor should know how to
> get the best precision out of the whole pipeline. The latter argument
> ignores that, in normal operation, the compositor can use combinations
> of framebuffers, shaders, plane/crtc gamma, csc and degamma properties,
> all with different precision for different parts of the screen. And who
> knows, maybe we'll get more sophisticated scanout hardware.
>
> So what's the rationale behind this?
>

Hi,

Good question, and to be honest I was at the point of removing the
bit depth information due to the above arguments when Graeme Gill
replied with the following (direct quote):

"""

For accurate measurement any quantization of the color values
needs to be known, as well as the effective display depth.

The effective display depth may be different to the frame buffer
depth if the frame buffer output passes through per channel
lookup tables. Ideally the measurement would be at the
pixel depth resolution of the output of the per channel tables.
Naturally to make use of this, access to the per channel
tables contents is necessary, to setup the mapping
from the frame buffer value to the value sent to the
display.

"""

So since the quantization apparently needs to be known anyway, and we
need access to the gamma tables (regardless of whether those are
implemented as shaders or not), I thought this was the "best" solution.
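
To make this a bit more concrete, here is a small sketch in C (purely
illustrative; the names and depths below are made up and are not part of
the protocol) of why the effective display depth can differ from the
frame buffer depth once a per channel lookup table sits in between, and
why an instrument then needs both the table contents and the output
depth:

#include <stdint.h>

#define FB_DEPTH   8                /* frame buffer bits per channel */
#define OUT_DEPTH  10               /* bits actually sent to the display */
#define LUT_SIZE   (1 << FB_DEPTH)

/* Per channel calibration table, filled in during calibration. */
static uint16_t red_lut[LUT_SIZE];

/* The value the panel really receives for a given frame buffer value is
 * quantized to OUT_DEPTH bits, not FB_DEPTH bits; without knowing the
 * table contents and OUT_DEPTH a measurement tool cannot tell what it
 * is actually measuring. */
static uint16_t fb_to_display(uint8_t fb_value)
{
        return red_lut[fb_value] & ((1 << OUT_DEPTH) - 1);
}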

Hope this answers your question!

> > For those wondering why this is needed, see here:
> > https://www.argyllcms.com/doc/monitorcontrols.html; although with the
> > color manager protocol the first reason becomes moot (though it
> > wouldn't surprise me if a compositor only implemented this protocol
> > and not that one for reasons of simplicity or memory/disk space
> > savings), the other 2 are still valid.
> >
> > If this is seen as acceptable I will see if I can implement it in
> > Weston, although seeing as my day job is something completely
> > different (I am doing this as a hobbyist) and I am actually a
> > physicist, not a software engineer, I will probably need some help
> > with that.
> >
> >
> > Erwin Burema (1):
> >   Adding color calibration protocol
> >
> >  .../wp_wayland_calibration.xml                | 247 ++++++++++++++++++
> >  1 file changed, 247 insertions(+)
> >  create mode 100644 unstable/color_calibration/wp_wayland_calibration.xml
>

