Monitor profiling (Re: [RFC wayland-protocols] Color management protocol)

Chris Murphy lists at colorremedies.com
Fri Jan 13 19:20:33 UTC 2017


On Fri, Jan 13, 2017 at 7:17 AM, Pekka Paalanen <ppaalanen at gmail.com> wrote:

>
> If I understand right, the calibrating or monitor profiling process
> (are these the same thing?) needs to control the "raw" pixel values
> going through the encoder/connector (DRM terminology), hence you need
> access to the /last/ VideoLUT in the pipeline before the monitor. Right?
> Or not even a VideoLUT per se, you just want to control the values to
> the full precision the hardware has.

Calibration and characterization (profiling) are two distinct things
that, from a user perspective, are blurred into a single event usually
just called "display calibration" and done by a single program.

The software first linearizes the VideoLUT (it sounds like there may
be more than one in the hardware, but from my perspective we're only
ever talking about one such thing, and I lack the hardware knowledge
to differentiate), then displays RGB test values while measuring their
response with a photometer, colorimeter, or spectroradiometer. The
software then creates a correction curve and applies it to the
VideoLUT. This is calibration.
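
Roughly, the two VideoLUT loads in that sequence look like this at the
DRM level. This is only a sketch: it assumes the legacy per-CRTC gamma
LUT, and the crtc_id and the final correction exponent are made-up
placeholders standing in for real measured data.

/* Sketch: load a linear ramp, then a correction curve, into the
 * legacy DRM per-CRTC gamma LUT ("VideoLUT"). */
#include <fcntl.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

#define LUT_SIZE 256  /* real hardware may expose a different size */

static void set_lut(int fd, uint32_t crtc_id, double exponent)
{
    uint16_t r[LUT_SIZE], g[LUT_SIZE], b[LUT_SIZE];

    for (int i = 0; i < LUT_SIZE; i++) {
        double in = (double)i / (LUT_SIZE - 1);
        /* exponent == 1.0 is the linear ramp loaded before measuring;
         * anything else stands in for the fitted correction curve. */
        uint16_t out = (uint16_t)(pow(in, exponent) * 65535.0 + 0.5);
        r[i] = g[i] = b[i] = out;
    }

    if (drmModeCrtcSetGamma(fd, crtc_id, LUT_SIZE, r, g, b))
        perror("drmModeCrtcSetGamma");
}

int main(void)
{
    uint32_t crtc_id = 42;           /* placeholder CRTC id */
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    set_lut(fd, crtc_id, 1.0);       /* 1. linearize before measuring */
    /* ...display patches, measure, fit a correction curve... */
    set_lut(fd, crtc_id, 1.0 / 1.1); /* 2. apply the correction curve */

    close(fd);
    return 0;
}

The point is just that the same LUT gets written twice: once with an
identity ramp so the measurements see the panel's native response, and
once with the correction derived from those measurements.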

Next the software displays more RGB test values, subject to that
VideoLUT correction, while measuring their response with either a
colorimeter or a spectroradiometer. The software creates an ICC
profile from these measurements. This is characterization.
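
In lcms2 terms, the end product of the simplest case (a matrix + TRC
profile, discussed further below) is something like this sketch. The
white point, primaries and 2.2 gamma here are placeholder numbers, not
anything a real instrument reported:

/* Sketch: turn placeholder "measured" primaries, white point and tone
 * response into a simple matrix + TRC display profile with lcms2. */
#include <lcms2.h>

int main(void)
{
    cmsCIExyY white = { 0.3127, 0.3290, 1.0 };
    cmsCIExyYTRIPLE primaries = {
        { 0.640, 0.330, 1.0 },   /* red   */
        { 0.300, 0.600, 1.0 },   /* green */
        { 0.150, 0.060, 1.0 }    /* blue  */
    };
    cmsToneCurve *trc[3];
    trc[0] = trc[1] = trc[2] = cmsBuildGamma(NULL, 2.2);

    cmsHPROFILE profile = cmsCreateRGBProfile(&white, &primaries, trc);
    cmsSaveProfileToFile(profile, "display.icc");

    cmsCloseProfile(profile);
    cmsFreeToneCurve(trc[0]);
    return 0;
}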


> How does the profiling work? I mean, what kind of patterns do you show
> on the monitor? All pixels always a uniform value? Or just some varying
> areas? Individual pixels? Patterns that are not of uniform color?

The test pattern needs to be large enough for the user to place the
measuring device's aperture over it without any ambiguity. The test
pattern is made of identical RGB values; whether the panel itself uses
dithering isn't something the software can control, but the measuring
device integrates it the same way our visual system would.

The minimum test set is black, white, each primary, and some number of
intermediate values of each channel to determine the tone response
curve (often incorrectly called gamma, although the shape of the curve
could be defined by a gamma function, a parametric function, or a
table of points). Each profiling program can do different things,
though. There are simple matrix + TRC only display profiles, and there
are full 3D LUT display profiles; the latter need more measurements
than just the primaries, including measurements of secondary and
tertiary colors.
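
For illustration, a minimal patch generator might look like the sketch
below; the step count and the handful of secondaries are arbitrary,
and real profilers generate far larger and smarter sets:

/* Sketch: emit a minimal patch set -- black, white, ramps on each
 * channel for the TRC, and the secondaries a richer profile would
 * also measure. */
#include <stdio.h>

static void patch(int r, int g, int b)
{
    printf("%3d %3d %3d\n", r, g, b);
}

int main(void)
{
    const int steps = 5;                   /* arbitrary */

    patch(0, 0, 0);                        /* black */
    patch(255, 255, 255);                  /* white */

    for (int ch = 0; ch < 3; ch++)         /* per-channel ramps,     */
        for (int i = 1; i <= steps; i++) { /* ending at each primary */
            int v = i * 255 / steps;
            patch(ch == 0 ? v : 0, ch == 1 ? v : 0, ch == 2 ? v : 0);
        }

    patch(255, 255, 0);                    /* secondaries, for 3D    */
    patch(0, 255, 255);                    /* LUT style profiles     */
    patch(255, 0, 255);
    return 0;
}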

Near this test pattern there's often some sort of status indicator so
the user has some idea the process hasn't stalled, sometimes also
showing the RGB values being displayed and their respective measured
XYZ or Lab values (to some degree for entertainment value, I guess).

The ICC profile is used to do various transforms, e.g. CMYK to display
RGB, sRGB/Rec. 709 to display RGB, etc., which is what's meant by
"display compensation": the display produces colors as if it had a
behavior other than its natural behavior. Those transforms are done by
ICC-aware applications using a library such as lcms2. So whatever
pipeline is used for "calibration" needs to produce results identical
to the pipeline used by applications; otherwise all bets are off and
we'll get all kinds of crazy bugs with no good way of troubleshooting
them. In fact, I'd consider it typical for me to display sRGB 255,0,0
in, say, GIMP, measure it with a colorimeter, and make sure the XYZ
values I get match what the display ICC profile says they should be.
If not, I'm basically hosed. I've seen exactly this kind of bug on
every platform I've ever tested, and it's tedious to figure out.
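
The prediction half of that check can be sketched with lcms2 as below.
The profile path is a placeholder, and for simplicity it uses a
relative colorimetric intent; the printed XYZ is what the colorimeter
reading gets compared against:

/* Sketch: predict the XYZ the display profile says sRGB 255,0,0
 * should produce, for comparison with a real colorimeter reading. */
#include <lcms2.h>
#include <stdio.h>

int main(void)
{
    cmsHPROFILE srgb = cmsCreate_sRGBProfile();
    cmsHPROFILE display = cmsOpenProfileFromFile("display.icc", "r");
    cmsHPROFILE xyz = cmsCreateXYZProfile();
    if (!display)
        return 1;

    /* sRGB -> display RGB, as an ICC-aware application would do it. */
    cmsHTRANSFORM to_display =
        cmsCreateTransform(srgb, TYPE_RGB_8, display, TYPE_RGB_8,
                           INTENT_RELATIVE_COLORIMETRIC, 0);
    /* display RGB -> XYZ: what the profile claims the panel emits. */
    cmsHTRANSFORM to_xyz =
        cmsCreateTransform(display, TYPE_RGB_8, xyz, TYPE_XYZ_DBL,
                           INTENT_RELATIVE_COLORIMETRIC, 0);

    unsigned char red[3] = { 255, 0, 0 }, devrgb[3];
    cmsCIEXYZ expected;

    cmsDoTransform(to_display, red, devrgb, 1);
    cmsDoTransform(to_xyz, devrgb, &expected, 1);

    printf("expected XYZ for sRGB red: %.4f %.4f %.4f\n",
           expected.X, expected.Y, expected.Z);
    /* ...compare against the measured XYZ here... */
    return 0;
}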

Apple has a great little tool called Digital Color Meter that shows
the pixels on screen (digitally enlarged) so I can see what RGB values
are being sent to the display. These are post-ICC-transform values
that have not been additionally transformed by the video LUT, so in
reality these RGBs are not what arrives at the panel; but once
calibration is done, the video LUT calibration plus the display are
widely considered, from the user and even the expert perspective, to
be one thing.



>
> I would argue that it is much easier to make the above work reliably
> than craft a buffer of pixels filled with certain values, then tell the
> compositor to program the hardware to (not) mangle the values in a
> certain way, and assume the output is something you wanted. The
> application would not even know what manipulation stages the compositor
> and the hardware might have for the pixels, so you would still need a
> protocol to say "I want everything to be identity except for the last
> LUT in the pipeline". IMO that is a hell of a hard way of saying
> "output this value to the monitor".

OK. The only problem I see with two pipelines, where one is described
as "much easier to make reliable," is that it sounds like the other
pipeline may not be reliable; yet that is the pipeline 99.9% of the
colors we care about go through, namely the colors from all of our
applications.

So instead of testing one pipeline, we're going to have to test two
pipelines, with software and measuring devices, to make certain they
are in fact behaving the same. I'm not really sure what the advantage
of two pipelines is in this context.



-- 
Chris Murphy

