[RFC wayland-protocols] Color management protocol
Niels Ole Salscheider
niels_ole at salscheider-online.de
Wed Dec 21 13:24:36 UTC 2016
On Wednesday, 21 December 2016, 12:05:12 CET, Daniel Stone wrote:
> Hi Chris,
>
> On 20 December 2016 at 20:49, Chris Murphy <lists at colorremedies.com> wrote:
> >> On Tue, Dec 20, 2016 at 11:33 AM, Daniel Stone <daniel at fooishbar.org> wrote:
> >>> On 20 December 2016 at 18:11, Chris Murphy <lists at colorremedies.com> wrote:
> >>> We can't have multiple white points on the display at the same time;
> >>> it causes incomplete user adaptation and breaks color matching
> >>> everywhere in the workflow. The traditional way to make sure there is
> >>> only one white point for all programs is manipulating the video card
> >>> LUTs. It's probably not the only way to do it. But if it's definitely
> >>> not possible (for reasons I'm not really following) for a privileged
> >>> display calibration program to inject LUT data, and to restore it at
> >>> boot time as well as on wake from sleep, then another way to make
> >>> certain there's normalized color rendering on the display is necessary.
> >>
> >> Right: 'ensure whitepoint consistency' is an entirely worthwhile goal,
> >> and I think everyone agrees on the point. 'Give clients absolute
> >> control over display hardware LUT + CTM' is one way of achieving that
> >> goal, but not a first-order goal in itself.
> >>
> >> The reason applications can't drive the LUT is because, as you say,
> >> there's not always just one. If there were only one application, then
> >> why not just write directly to KMS? There's little call for a window
> >> system in that case.
> >
> > At least on Windows and macOS there is a significant dependence on
> > "the honor system" that applications don't go messing around with
> > video card LUTs unless they happen to be specifically a display
> > calibration program. It's not in the interest of these applications to
> > change the video card LUT. The way it works on macOS and Windows, only
> > a display calibration program will do this: it measures the display
> > with a linear video card LUT, computes a correction curve to apply to
> > the LUT, then measures the display again with the LUT in place, creates
> > an ICC profile from that second set of measurements, and registers the
> > ICC profile with the OS so programs can be made aware of it.
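For what it's worth, on Linux the "video card LUT" in this workflow
corresponds to the KMS gamma ramp, which a privileged calibration tool
could program directly through libdrm. A minimal sketch of the
linear-ramp step before the first measurement pass (function name and
error handling are mine, just for illustration):

/* Load an identity ramp into the CRTC gamma LUT, as a calibration
 * tool would before its first measurement pass. Assumes the DRM
 * device is already open and the CRTC id is known; lut_size comes
 * from drmModeGetCrtc()->gamma_size. */
#include <stdint.h>
#include <stdlib.h>
#include <xf86drmMode.h>

int load_linear_gamma(int drm_fd, uint32_t crtc_id, uint32_t lut_size)
{
    uint16_t *red   = calloc(lut_size, sizeof(uint16_t));
    uint16_t *green = calloc(lut_size, sizeof(uint16_t));
    uint16_t *blue  = calloc(lut_size, sizeof(uint16_t));
    int ret = -1;

    if (red && green && blue && lut_size > 1) {
        for (uint32_t i = 0; i < lut_size; i++) {
            /* identity: map the index linearly onto 16-bit output */
            uint16_t v = (uint16_t)((uint64_t)i * 0xffff / (lut_size - 1));
            red[i] = green[i] = blue[i] = v;
        }
        ret = drmModeCrtcSetGamma(drm_fd, crtc_id, lut_size,
                                  red, green, blue);
    }

    free(red);
    free(green);
    free(blue);
    return ret;
}

The corrected curve from the second pass would be loaded the same way,
with the computed values in place of the identity ramp.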
> >
> > If a program were to change the LUT, the display profile would become
> > invalid. So it's not in the interest of any program that doesn't
> > calibrate displays to modify video card LUTs. OK, so maybe that means
> > such a program is either a display calibrator, or it's malicious. I
> > haven't heard of such a malicious program, but that's also like blind
> > faith in aviation's "big sky theory". But totally proscribing programs
> > that depend on changing video card LUTs, just to avoid a
> > next-to-impossible malware vector, seems a bit heavyweight, in
> > particular given that an alternative (full display compensation)
> > doesn't yet exist.
>
> Understood. I'd perhaps misread some earlier discussions in the
> megathread (which I don't really wish to revisit) then; apologies.
>
> For the purposes of this discussion, I'd like to park the topic of
> calibration for now. It goes without saying that providing facilities
> for non-calibration clients is useless without calibration existing,
> but the two are surprisingly different from a window-system-mechanism
> point of view; different enough that my current thinking tends towards
> a total compositor bypass for calibration, and just having it drive
> DRM/KMS directly. I'd like to attack and bottom out the
> non-calibration use case without muddying those waters, though ...
>
> >>> The holy grail is, as Richard Hughes describes, late-binding color
> >>> transforms. In effect every pixel that will go to a display is going
> >>> to be transformed. Every button, every bit of white text, for every
> >>> application. There is no such thing as opt in color management, the
> >>> dumbest program in the world will have its pixels intercepted and
> >>> transformed to make sure it does not really produce 255,255,255
> >>> (deviceRGB) white on the user display.
> >>
> >> I agree that there's 'no such thing as an opt in', and equally that
> >> there's no such thing as an opt out. Something is always going to do
> >> something to your content, and if it's not aware of what it's doing,
> >> that something is going to be destructive. For that reason, I'm deeply
> >> skeptical that the option is to route around the core infrastructure.
> >
> > There's ample evidence that if it's easy to opt out, developers will do so.
> > It needs to be easier for them to just get it for free or nearly so,
> > with a layered approach where they get more functionality by doing
> > more work in their program, or by using libraries that do that work
> > for them.
>
> I completely agree with you! Not only on the 'opt in' point quoted
> just here, but on this. When I said 'opt out' in my reply, I was
> talking about several proposals in the discussion that applications be
> able to 'opt out of colour management' because they already had the
> perfect pipeline created. That 'opt out' is not a thing that exists,
> because if the system is designed such that the compositor can only
> guess or infer colour properties, then it will get it wrong, and it
> will be destructive. That's not a system I'm interested in creating.
>
> > For example, a still very common form of partial data loss is the dumb
> > program that can open a JPEG but ignores EXIF color space metadata and
> > any embedded ICC profile. What'd be nice is if the application didn't
> > have to know how to handle this: reading that data, and then tagging
> > the image object with that color space metadata. Instead the
> > application should use some API that already knows how to read files,
> > knows what a JPEG is, knows all about the various metadata types and
> > their ordering rules, and that library is what does the display
> > request and knows it needs to attach the color space metadata. It's
> > really the application that needs routing around.
>
> I agree (the specific JPEG case is something that's been bugging me
> for some time), and it's something I want to build in to make it as
> easy as possible for people to gradually build their clients to get
> this right.
This is really something that should be done by the toolkits (Qt, GTK, ...).
I really hope that they start reading the profile from the EXIF data (or the
embedded ICC profile) when opening an image. They can then either attach it
to the subsurface that is used to display the image, or convert it to their
blending space (which could match the blending space of the compositor) if
blending is performed.
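To make concrete what a toolkit would have to do here (a rough sketch, not
actual Qt/GTK code): the embedded profile sits in one or more JPEG APP2
segments tagged "ICC_PROFILE". Something along these lines recovers it:

/* Rough sketch: pull an embedded ICC profile out of an in-memory JPEG.
 * ICC data lives in APP2 segments tagged "ICC_PROFILE\0"; this
 * simplified version assumes the chunks appear in order and just
 * concatenates them. Real code must sort by sequence number and
 * validate the chunk count. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

uint8_t *jpeg_extract_icc(const uint8_t *buf, size_t len, size_t *icc_len)
{
    static const char tag[] = "ICC_PROFILE";  /* 11 chars + NUL = 12 bytes */
    uint8_t *icc = NULL;
    size_t pos = 2;  /* skip the SOI marker (0xffd8) */

    *icc_len = 0;
    while (pos + 4 <= len && buf[pos] == 0xff) {
        uint8_t marker = buf[pos + 1];
        size_t seglen = ((size_t)buf[pos + 2] << 8) | buf[pos + 3];

        if (marker == 0xda || pos + 2 + seglen > len)
            break;  /* start of scan, or truncated file */

        /* APP2 payload: 12-byte tag, sequence byte, count byte, data */
        if (marker == 0xe2 && seglen > 16 &&
            !memcmp(&buf[pos + 4], tag, sizeof(tag))) {
            size_t chunk = seglen - 16;
            uint8_t *tmp = realloc(icc, *icc_len + chunk);
            if (!tmp) {
                free(icc);
                *icc_len = 0;
                return NULL;
            }
            icc = tmp;
            memcpy(icc + *icc_len, &buf[pos + 18], chunk);
            *icc_len += chunk;
        }
        pos += 2 + seglen;
    }
    return icc;  /* caller frees; NULL if no profile was found */
}

The resulting blob could then be handed to lcms2 (cmsOpenProfileFromMem)
or attached to the surface once a protocol for that exists.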
> >> As arguments to support his solution, Graeme presents a number of
> >> cases such as complete perfect colour accuracy whilst dragging
> >> surfaces between multiple displays, and others which are deeply
> >> understandable coming from X11. The two systems are so vastly
> >> different in their rendering and display pipelines that, whilst the
> >> problems he raises are valid and worth solving, I think he is missing
> >> an endless long tail of problems with his decreed solution caused by
> >> the difference in processing pipelines.
> >>
> >> Put briefly, I don't believe it's possible to design a system which
> >> makes these guarantees, without the compositor being intimately aware
> >> of the detail.
> >
> > I'm not fussy about the architectural details. But there's a
> > real-world need for multiple-display support, for images to look
> > correct across multiple displays, and for images to look correct when
> > split across displays. I'm aware there are limits to how good this can
> > be, which is why the high-end use cases involve self-calibrating
> > displays with their own high-bit-depth internal LUT, and the video
> > card LUT is set to a linear tone response. In this case, the display
> > color space is identical across all display devices. Simple.
> >
> > But it's also a dependency on proprietary hardware solutions.
>
> Similarly, I'd like to park the discussion about surfaces split across
> multiple displays; it's a red herring. Again, in X11, your pixel
> content exists in one single flat buffer which is shared between
> displays. This is not a restriction we have in Wayland, and a lot of
> the discussion here has rat-holed on the specifics of how to achieve
> this based on assumptions from X11. It's entirely possible that the
> best solution to this (a problem shared with heterogeneous-DPI
> systems) is to provide multiple buffers. Or maybe, as you suggest
> below, normalised to an intermediate colour system of perhaps wider
> gamut. Either way, there's a lot of ways to attack it, but how we
> solve that is almost incidental to core colour-management design.
Has there been any discussion about using a buffer per output to solve the
heterogeneous-DPI problem? If we end up doing that, we might as well use it
for color correction. But otherwise I would prefer the device link profile
solution.
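To make the device link option concrete: with Little CMS the compositor (or
the client) could apply a device link as one merged transform, roughly like
this (a sketch; the path and pixel layout are placeholders):

/* Sketch: apply a device link profile with Little CMS 2. A device
 * link encodes the whole source-to-destination conversion in a single
 * profile, so cmsCreateTransform() takes NULL as the output profile.
 * The path and buffer layout here are placeholders. */
#include <stdint.h>
#include <lcms2.h>

int apply_device_link(const char *path, uint8_t *pixels, unsigned npixels)
{
    cmsHPROFILE link = cmsOpenProfileFromFile(path, "r");
    if (!link)
        return -1;

    cmsHTRANSFORM xform = cmsCreateTransform(link, TYPE_RGBA_8,
                                             NULL, TYPE_RGBA_8,
                                             INTENT_PERCEPTUAL, 0);
    cmsCloseProfile(link);
    if (!xform)
        return -1;

    cmsDoTransform(xform, pixels, pixels, npixels);  /* in place */
    cmsDeleteTransform(xform);
    return 0;
}

Since a device link collapses source profile, rendering intent and
destination profile into one lookup, the compositor would not need to know
anything about the individual profiles.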
> I've snipped a few paragraphs below since I continue to completely
> agree with you.
>
> >> It isn't a panacea though. It is one way to attack things, but if your
> >> foundational principle is that the display hardware LUT/CTM never be
> >> out of sync with content presented to the display, then my view is
> >> that the best way to solve this is by having the LUT/CTM be driven by
> >> the thing which controls presentation to the display. Meaning, the
> >> application is written directly to KMS and is responsible for both, or
> >> the application provides enough information to the compositor to allow
> >> it to do the same thing in an ideal application-centric steady state,
> >> but also cope with other scenarios. Say, if your app crashes and you
> >> show the desktop, or you accidentally press Alt-Tab, or you get a
> >> desktop notification, or your screensaver kicks in, or, or, or ...
> >
> > The video card LUT is a fast transform. It applies to video playback
> > the same as anything else, and has no performance penalty.
> >
> > So then I wonder where the real performance penalty is these days?
> > The video card LUT is a simplistic per-channel curve. Maybe the "thing"
> > that ultimately pushes pixels to each display could push those pixels
> > through a software LUT instead of the hardware one, and do it at 10
> > bits per channel rather than on 8-bit data.
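In software that per-channel lookup is indeed trivial; a sketch of what the
10-bit variant amounts to (names and pixel layout are made up for
illustration):

/* Sketch: a per-channel LUT applied in software at 10-bit resolution,
 * standing in for the hardware gamma ramp. Pixels are assumed to be
 * packed 16-bit-per-channel RGB; the top 10 bits index the table. */
#include <stddef.h>
#include <stdint.h>

void apply_lut10(uint16_t *px, size_t npixels,
                 const uint16_t lut[3][1024])
{
    for (size_t i = 0; i < npixels; i++)
        for (int c = 0; c < 3; c++)
            px[i * 3 + c] = lut[c][px[i * 3 + c] >> 6];
}

A loop like this is memory-bound rather than compute-bound, which supports
the point that the transform itself is cheap; the precision of the buffer
it writes into is the real constraint.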
>
> Some of the LUTs/matrices in display controllers (see a partial
> enumeration in reply to Mattias) can already handle wide-gamut colour,
> with caveats. Sometimes they will be perfectly appropriate to use, and
> sometimes the lack of granularity will destroy much of their value. If
> the compositor is using the GPU for composition, then doing colour
> transformations is extremely cheap, because we're rarely bound on the
> GPU's ALU capacity.
Yes, but as Graeme has pointed out, doing it in a shader means lower precision
when using an 8-bit framebuffer. How feasible is it to use a higher-precision
framebuffer, and how big would the performance impact be?
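For what it's worth, allocating a deeper intermediate target is
straightforward where the hardware supports it; a sketch with a
10-bits-per-channel texture as the render target (assumes a GLES 3.x or
desktop GL context is current; GL_RGBA16F would be the half-float
alternative where it is color-renderable):

/* Sketch: render into a 10-bit-per-channel intermediate instead of an
 * 8-bit framebuffer, to limit banding from shader-side transforms. */
#include <GLES3/gl3.h>

GLuint make_deep_fbo(GLsizei width, GLsizei height)
{
    GLuint fbo, tex;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGB10_A2, width, height);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0;  /* real code would fall back to 8-bit here */
    return fbo;
}

The extra cost is mostly bandwidth for the larger buffer; the shader ALU
work is the same either way.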
> Mind you, I see an ideal steady state for non-alpha-blended
> colour-aware applications on a calibrated display as involving no
> intermediate transformations other than a sufficiently capable display
> controller LUT/matrix. Efficiency is important, after all. But I think
> designing a system to the particular details of a subset of hardware
> capability today is unnecessarily limiting, and we'd be kicking
> ourselves further down the road if we did so.
>
> Cheers,
> Daniel