[RFC wayland-protocols] Color management protocol

Chris Murphy lists at colorremedies.com
Tue Dec 20 20:49:53 UTC 2016


On Tue, Dec 20, 2016 at 11:33 AM, Daniel Stone <daniel at fooishbar.org> wrote:
> Hi Chris,
>
> On 20 December 2016 at 18:11, Chris Murphy <lists at colorremedies.com> wrote:
>> On Tue, Dec 20, 2016 at 10:02 AM, Daniel Stone <daniel at fooishbar.org> wrote:
>>> On 17 December 2016 at 10:16, Graeme Gill <graeme2 at argyllcms.com> wrote:
>>>> As I've explained a few times, and extension is needed to provide
>>>> the Output region information for each surface, as well as each
>>>> outputs color profile, as well as be able to set each Outputs
>>>> per channel VideoLUT tables for calibration.
>>>
>>> That's one way of looking at it, yes. But no, the exact thing you're
>>> describing will never occur for any reason. If you'd like to take a
>>> step back and explain your reasoning, as well as the alternate
>>> solutions you've discarded, then that's fine, but otherwise, with a
>>> firm and resolute 'no, never' to this point, we're at a dead end.
>>
>> We can't have multiple white points on the display at the same time;
>> it causes incomplete user adaptation and breaks color matching
>> everywhere in the workflow. The traditional way to make sure there is
>> only one white point for all programs is manipulating the video card
>> LUTs. It's probably not the only way to do it. But if it's definitely
>> not possible (for reasons I'm not really following) for a privileged
>> display calibration program to inject LUT data, and restore it at boot
>> up time as well as wake from sleep, then another way to make certain
>> there's normalized color rendering on the display is necessary.
>
> Right: 'ensure whitepoint consistency' is an entirely worthwhile goal,
> and I think everyone agrees on the point. 'Give clients absolute
> control over display hardware LUT + CTM' is one way of achieving that
> goal, but not a first-order goal in itself.
>
> The reason applications can't drive the LUT is because, as you say,
> there's not always just one. If there were only one application, then
> why not just write directly to KMS? There's little call for a window
> system in that case.

At least on Windows and macOS there is a significant dependence on
"the honor system": applications don't go messing around with video
card LUTs unless they are specifically display calibration programs,
because changing the LUT is not in their interest. The way it works on
macOS and Windows, only a display calibration program will do this: it
measures the display with a linear video card LUT, computes a
correction curve to apply to the LUT, measures the display again with
that LUT in place, creates an ICC profile from the second set of
measurements, and registers the ICC profile with the OS so programs
can be made aware of it.
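
To make that calibration step concrete, here's a rough Python sketch
of what "compute a correction curve" could look like, given
measurements taken through a linear (identity) LUT. The function name,
the gamma 2.2 target, and the 256-entry size are illustrative
assumptions on my part, not how any particular calibration tool
actually works:

    # Build a per-channel 1D LUT that maps the display's measured native
    # response onto a target tone curve (assumed here to be gamma 2.2).
    def build_correction_lut(measured, target_gamma=2.2, size=256):
        """measured: list of (input, luminance) pairs from the instrument,
        input in 0..1, luminance normalized to 0..1, sorted by input."""
        lut = []
        for i in range(size):
            want = (i / (size - 1)) ** target_gamma  # desired output level
            value = 1.0
            for inp, lum in measured:
                if lum >= want:      # crude inverse of the native response
                    value = inp
                    break
            lut.append(round(value * 65535))  # 16-bit entries, like a KMS gamma ramp
        return lut

The calibration tool then uploads this LUT, measures the display again
through it, and builds the ICC profile from that second measurement
set.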

If a program were to change the LUT, the display profile would become
invalid. So it's not in the interest of any program that doesn't
calibrate displays to modify video card LUTs. OK, so maybe that means
such a program is either a display calibrator or it's malicious. I
haven't heard of such a malicious program, but that's also like blind
faith in aviation's "big sky theory". Still, totally proscribing
programs that depend on changing video card LUTs, just to close off a
next-to-impossible malware vector, seems heavy-handed, particularly
given that the alternative (full display compensation) doesn't yet
exist.



>
>> The holy grail is as Richard Hughes describes, late binding color
>> transforms. In effect every pixel that will go to a display is going
>> to be transformed. Every button, every bit of white text, for every
>> application. There is no such thing as opt in color management, the
>> dumbest program in the world will have its pixels intercepted and
>> transformed to make sure it does not really produce 255,255,255
>> (deviceRGB) white on the user display.
>
> I agree that there's 'no such thing as an opt in', and equally that
> there's no such thing as an opt out. Something is always going to do
> something to your content, and if it's not aware of what it's doing,
> that something is going to be destructive. For that reason, I'm deeply
> skeptical that the option is routing around the core infrastructure.

There's ample evidence that if it's easy to opt out, developers will
do so. It needs to be easier for them to get it for free, or nearly
so, with a layered approach where they gain more functionality by
doing more work in their program, or by using libraries that do that
work for them.

For example, a still very common form of partial data loss is the
dumb program that can open a JPEG but ignores the EXIF color space
metadata and any embedded ICC profile. What would be nice is if the
application didn't have to know how to handle this: reading that data
and then tagging the image object with the color space metadata.
Instead, the application should use an API that already knows how to
read files, knows what a JPEG is, and knows all about the various
metadata types and their ordering rules; that library is what makes
the display request and knows it needs to attach the color space
metadata. It's really the application that needs routing around.
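
As a sketch of what such a library could hide from the application,
here's roughly what reading those tags might look like with Pillow in
Python. The fallback-to-sRGB policy is my assumption, not something
this thread has settled on:

    from PIL import Image

    def open_tagged(path):
        img = Image.open(path)
        icc = img.info.get("icc_profile")    # embedded ICC profile, if any
        exif_cs = img.getexif().get(0xA001)  # EXIF ColorSpace tag, 1 == sRGB
        if icc is not None:
            colorspace = "embedded ICC profile"
        elif exif_cs == 1:
            colorspace = "sRGB"
        else:
            colorspace = "untagged (assume sRGB)"
        return img, icc, colorspace

The point is that the application just asks for a correctly tagged
image object; the library worries about JPEG, EXIF, and ICC details.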

Just to be clear, I should have said "there should be no such thing
as opt in", because the reality of course is that's exactly what we do
have on X, and also on macOS and Windows. It's a bit less so on macOS
these days, where Apple assumes sRGB almost everywhere as the source
space for untagged objects. Basically "deviceRGB" was mapped to sRGB;
there was no actual deviceRGB unless you used a special API intended
for things like games and display calibrators.



> As arguments to support his solution, Graeme presents a number of
> cases such as complete perfect colour accuracy whilst dragging
> surfaces between multiple displays, and others which are deeply
> understandable coming from X11. The two systems are so vastly
> different in their rendering and display pipelines that, whilst the
> problems he raises are valid and worth solving, I think he is missing
> an endless long tail of problems with his decreed solution caused by
> the difference in processing pipelines.
>
> Put briefly, I don't believe it's possible to design a system which
> makes these guarantees, without the compositor being intimately aware
> of the detail.

I'm not fussy about the architectural details. But there's a real
world need for multiple display support, for images to look correct
across multiple displays, and for images to look correct when split
across displays. I'm aware there are limits to how good this can be,
which is why the high-end use cases involve self-calibrating displays
with their own high bit depth internal LUTs, with the video card LUT
set to a linear tone response. In that case, the display color space
is identical across all display devices. Simple.

But it's also a dependency on proprietary hardware solutions.

What if all objects were normalized to some intermediate color space?
The gamut, bit depth, and tone response of this intermediate color
space are a separate question; to what degree any of those attributes
are user configurable is another. The bottom line is, every object is
either already in this intermediate space, has an assumed color space,
or carries metadata describing its color space, and all objects get
transformed into the intermediate space. Maybe this is something the
compositor does as a rather early step. The plus is that from that
point on, all objects are in this intermediate space and there's no
need to worry about additional transforms. It'd be an idealized
blending and compositing space.
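
As a toy illustration of the "normalize on the way in" idea, assuming
sRGB for untagged content and a linear-light intermediate space (both
assumptions on my part), the front-end step is just the standard sRGB
decoding:

    # Decode an sRGB-encoded pixel into a linear-light intermediate space.
    def srgb_to_linear(c):  # c in 0..1
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def to_intermediate(pixel, source="sRGB"):
        if source == "sRGB":
            return tuple(srgb_to_linear(ch) for ch in pixel)
        # tagged sources would be converted through their own profiles here
        raise NotImplementedError(source)

A real compositor would do this on the GPU and use the actual source
profile, but the shape of the step is the same.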

The "thing" that's ultimately responsible for pushing the actual
pixels to a display device would be the same thing that'd transform
pixels from this intermediate space to specific display RGB space for
a particular display. Everything in between really doesn't need to
know about much about color management, let alone a particular
implementation of it (i.e. ICC). That's stuff that can be at the front
end and back end, and leave the middle area free of the baggage. Yes,
this is two transforms, that's the negative side effect. And the
compositor is then pushing around a bunch of, likely wide gamut and
high bit depth pixels. But I'm not suggesting any of this is easy.
It's definitely a set of trade offs.
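
The back-end step could then look something like this sketch, where a
display is modeled (purely as an assumption) by a 3x3 matrix from the
intermediate primaries to the display primaries plus a simple
power-law tone response taken from its profile:

    def encode_for_display(linear_rgb, matrix, display_gamma):
        # Map intermediate primaries to this display's primaries.
        r, g, b = (sum(m * c for m, c in zip(row, linear_rgb)) for row in matrix)
        clip = lambda x: min(max(x, 0.0), 1.0)
        return tuple(clip(ch) ** (1.0 / display_gamma) for ch in (r, g, b))

Each display gets its own matrix and tone curve; nothing between the
front end and this step needs to know they exist.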


> This doesn't mean that every input pixel must be sRGB and the
> compositor / GPU / display hardware must transform every pixel, but
> that the compositor needs to be, at a minimum, aware. Again, I think
> everyone agrees that the perfect case for deeply colour-aware
> applications, is that they present content in such a way that, in the
> ideal steady state, it can be presented directly to the display with a
> NULL transform.

Sure, it's essentially the same model Windows and macOS have used
since the dawn of time. But they've had inordinate problems handling
color across multiple displays, and they both still depend on video
card LUTs to prevent non-aware apps from mucking up the workflow. It
also means each application ends up reinventing the wheel of color
management. The more there's a generic architecture they can all
leverage - either directly or via a library they can all share - the
less of this reinvention is necessary. And then applications formerly
considered "dumb" become much more easily able to be deeply color
aware, almost for free.


>> The consequences for a single dumb program, for even 2 seconds,
>> showing up on screen with 255,255,255 white on an uncalibrated display
>> is the user's white point adaptation is clobbered for at least 10
>> minutes, possibly 30 or more minutes. And it doesn't just affect the
>> other things they're viewing on the display, it will impact their
>> ability to reliably evaluate printed materials in the same
>> environment.
>>
>> So the traditional way of making absolutely certain no program can
>> hose the workflow is this crude lever in the video card. If you can
>> come up with an equivalently sure-fire, reliable way that doesn't demand
>> that the user draw up a list of "don't ever run these programs" while
>> doing color critical work, then great. Otherwise, there's going to
>> need to be a way to access the crude calibration lever in the video
>> card. Even though crude, this use case is exactly what it's designed
>> for.
>
> It isn't a panacea though. It is one way to attack things, but if your
> foundational principle is that the display hardware LUT/CTM never be
> out of sync with content presented to the display, then my view is
> that the best way to solve this is by having the LUT/CTM be driven by
> the thing which controls presentation to the display. Meaning, the
> application is written directly to KMS and is responsible for both, or
> the application provides enough information to the compositor to allow
> it to do the same thing in an ideal application-centric steady state,
> but also cope with other scenarios. Say, if your app crashes and you
> show the desktop, or you accidentally press Alt-Tab, or you get a
> desktop notification, or your screensaver kicks in, or, or, or ...


The video card LUT is a fast transform. It applies to video playback
the same as anything else, and has no performance penalty.

So then I wonder where the real performance penalty is these days.
The video card LUT is a simple per-channel 1D transform. Maybe the
"thing" that ultimately pushes pixels to each display could push those
pixels through a software LUT instead of the hardware one, and do it
at 10 bits per channel rather than on the full bit depth data.
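
A software version of that ramp is a trivial per-channel lookup; the
1024-entry size and the names below are just illustrative:

    LUT_SIZE = 1024  # 10 bits per channel

    def apply_soft_lut(pixel10, red_lut, green_lut, blue_lut):
        # pixel10: (r, g, b) as 10-bit integers.
        r, g, b = pixel10
        return (red_lut[r], green_lut[g], blue_lut[b])

    # Identity ramps; a calibration curve would replace these.
    identity = list(range(LUT_SIZE))
    assert apply_soft_lut((0, 512, 1023), identity, identity, identity) == (0, 512, 1023)

Whether that's cheap enough to do per frame in the compositor is an
open question, but it's the same math the hardware ramp applies.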



> It seems to me like Niels's strawman was a reasonable foundation on
> which you could build a protocol which allowed this. Namely, in the
> ideal steady state, the app would have _effective_ control of the
> LUTs/CTM. But being mediated by a colour-aware compositor, the
> compositor would also be able to deal with states other than the
> application's perfect state, without destroying the user's whitepoint
> adaptation. It's difficult, but I think also a worthwhile goal.
>
> tl;dr we agree on goal but not implementation detail





-- 
Chris Murphy

