[RFC wayland-protocols v2 1/1] Add the color-management protocol

Pekka Paalanen ppaalanen at gmail.com
Fri Mar 1 11:47:42 UTC 2019


On Thu, 28 Feb 2019 21:05:13 -0700
Chris Murphy <lists at colorremedies.com> wrote:

> On Thu, Feb 28, 2019 at 4:37 AM Pekka Paalanen <ppaalanen at gmail.com> wrote:
> >
> > another thought about a compositor implementation detail I would like
> > to ask you all is about the blending space.
> >
> > If the compositor blending space was CIE XYZ with direct (linear)
> > encoding to IEEE754 32-bit float values in pixels, with the units of Y
> > chosen to match an absolute physical luminance value (or something that
> > corresponds with the HDR specifications), would that be sufficient for
> > all imaginable and realistic color reproduction purposes, HDR included?  
> 
> CIE XYZ doesn't really have limits per se. It's always possible to
> just add more photons, even if things start catching fire.
> 
> You can pick sRGB/Rec.709 primaries and define points inside or
> outside those primaries, with 32-bit FP precision. This was the
> rationalization used in the scRGB color space.
> https://en.wikipedia.org/wiki/ScRGB
> 
> OpenEXR assumes Rec.709 primaries if not specified, but allows quite a
> bit more dynamic range than scRGB.
> http://www.openexr.com/documentation/TechnicalIntroduction.pdf
> http://www.openexr.com/documentation/OpenEXRColorManagement.pdf

Hi Chris,

I see; there are much more convenient color spaces to choose as the
single internal blending space than my overkill choice of CIE XYZ,
which is very good. Do I understand correctly that, with a suitable
value encoding, they can cover any HDR range and any gamut?
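
To make "a suitable value encoding" concrete, here is a minimal sketch
of what I imagine: an scRGB-style representation with linear-light
Rec.709 primaries stored as 32-bit floats, where components below 0.0
or above 1.0 express wide-gamut and brighter-than-diffuse-white colors.
The struct and helper names are made up purely for this illustration;
they are not from Sebastian's proposal or from Weston.

  /* Sketch only: scRGB-style encoding, linear Rec.709 floats.
   * Components < 0.0 or > 1.0 are valid and express out-of-gamut
   * or brighter-than-diffuse-white colors without clipping. */
  #include <math.h>
  #include <stdio.h>

  struct linear_rgb {
      float r, g, b;  /* linear light, Rec.709 primaries, D65 white */
  };

  /* Decode one sRGB-encoded component in [0, 1] to linear light. */
  static float srgb_to_linear(float c)
  {
      return c <= 0.04045f ? c / 12.92f
                           : powf((c + 0.055f) / 1.055f, 2.4f);
  }

  int main(void)
  {
      /* An in-gamut SDR color: all components land in [0, 1]. */
      struct linear_rgb gray = {
          srgb_to_linear(0.5f),
          srgb_to_linear(0.5f),
          srgb_to_linear(0.5f),
      };

      /* A saturated wide-gamut green needs a negative red component
       * when expressed in Rec.709 coordinates, and an HDR highlight
       * simply exceeds 1.0. Both are representable. */
      struct linear_rgb wide_green = { -0.2f, 1.1f, 0.05f };
      struct linear_rgb highlight  = {  4.0f, 4.0f, 4.0f };

      printf("gray:       %f %f %f\n", gray.r, gray.g, gray.b);
      printf("wide green: %f %f %f\n",
             wide_green.r, wide_green.g, wide_green.b);
      printf("highlight:  %f %f %f\n",
             highlight.r, highlight.g, highlight.b);
      return 0;
  }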

Thanks for the references!

> An advantage to starting out with constraint: you can much more easily
> implement lower precision levels, like 16bpc float or even integer.
> 
> > Or do I have false assumptions about HDR specifications and they do
> > not define brightness in physical absolute units but somehow in
> > relative units? I think I saw "nit" as the unit somewhere, which is an
> > absolute physical unit.  
> 
> It depends on which part of specifications you're looking at. The
> reference environment, and reference medium are definitely defined in
> absolute terms. The term "nit" is the same thing as the candela per
> square meter (cd/m^2), and that's the unit for luminance. Display
> black luminance and white luminance use this unit. The environment
> will use the SI unit lux. The nit is used for projected light, and lux
> is used for light incident on or emitted from a surface (ceiling,
> walls, floor, etc.).
> 
> In the SDR world including an ICCv4 world, the display class profile
> uses relative values: lightness. Not luminance. Even when encoding
> XYZ, the values are all relative to that display's white, where Y =
> 1.0. So yeah, for HDR that information is useless, and it is one of the
> gotchas with ICC display class profiles. There are optional tags,
> defined in the spec for many years now, to include measured display
> black and white luminance. For HDR applications it would seem it'd
> have to be required information. Another gotcha that has been mostly
> sorted out, I think, is whether the measurements are so-called
> "contact" or "no contact" measurements; i.e. a contact measurement
> won't account for veiling glare, which is the effect of ambient light
> reflecting off the surface of the display, thereby increasing the
> display's effective black luminance. A no contact measurement will
> account for it. You might think the no contact measurement is better.
> Well, yeah, maybe in a production environment where everything is
> measured and stabilized.
> 
> But in a home, you might actually want to estimate veiling glare and
> apply it to a no contact display black luminance measurement. Maybe
> you have a setting in a player with simple ambient descriptors such as
> "dark", "moderate", and "bright" ambient conditions. The choices
> made for handling HDR content in such a case are rather substantially
> different. And if this could be done by polling an inexpensive sensor
> in the environment, for example a camera on the display, so much the
> better. Maybe.
> 
> > It might be heavy to use, both storage wise and computationally, but I
> > think Weston should start with a gold standard approach that we can
> > verify to be correct, encode the behaviour into the test suite, and
> > then look at possible optimizations by looking at e.g. other blending
> > spaces or opportunistically skipping the blending space.
> >
> > Would that color space work universally from the colorimetry and
> > precision perspective, with any kind of gamut one might want/have, and
> > so on?  
> 
> The compositor is doing what kind of blending for what purpose? I'd
> expect any professional video rendering software will do this in its
> own defined color space, encoding, and precision - and it all happens
> internally. It might be a nice API so that applications don't have to
> keep reinventing that particular wheel and doing it internally.

The compositor needs to blend several windows into a single
framebuffer. Granted, we can often assume that 99% or more of the
pixels are opaque and thus not actually blended, but if we can define the
protocol and make the reference implementation such that even blending
according to the pixels' alpha values will be correct, I think that is
a worthwhile goal.
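
As an illustration of what I mean by blending being correct, the core
per-pixel step could look roughly like the sketch below, assuming a
32-bit float, linear-light blending buffer with premultiplied alpha.
This is not actual Weston code; the struct and function names are made
up for this example.

  /* Sketch only: "src over dst" compositing in a linear-light,
   * 32-bit float blending buffer with premultiplied alpha. */
  struct blend_pixel {
      float r, g, b, a;  /* premultiplied, linear light */
  };

  /* Composite src over dst in place. Because the components are
   * linear light, this weighted sum models how light actually adds
   * up; the same sum on gamma-encoded values would not. */
  static void
  blend_over(struct blend_pixel *dst, const struct blend_pixel *src)
  {
      float k = 1.0f - src->a;

      dst->r = src->r + k * dst->r;
      dst->g = src->g + k * dst->g;
      dst->b = src->b + k * dst->b;
      dst->a = src->a + k * dst->a;
  }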

Giving applications a blending API is a non-goal here; the windows to
be blended together can be assumed to originate from different
applications that do not know of each other. The blending done by the
compositor is for the user's personal viewing pleasure. Applications
are not able to retrieve the blending result even for their own windows.

Sebastian's protocol proposal includes render intent from applications.
Conversion of client content to the blending space should ideally be
lossless, so the render intent in that step should be irrelevant, if I
understand correctly. How to deal with render intent when converting from
blending space to output space is not clear to me, since different
windows may have different intents. Using the window's intent for the
window's pixels works only if the pixel in the framebuffer comes from
exactly one window and not more.

> In the near term do you really expect you need blending beyond
> Rec.2020/Rec.2100? Rec.2020/Rec.2100 is not so big that transforms to
> Rec.709 will require special gamut mapping consideration. But I'm open
> to other ideas.

That's my problem: I don't know what we need, hence I'm asking. My
first guess was to start with something that is able to express
everything in the world, more or less. If we know that the chosen color
space, encoding etc. have no limitations of their own, we can build a
reference implementation that should always produce the correct
results. Once we have a system that is correct, we can ensure with a
test suite that the results are unchanged when we start optimizing and
changing color spaces and encodings.

We need some hand-crafted, manually verified tests to check that the
reference implementation works correctly. Once it does, we can generate
many more tests to ensure the same inputs keep producing the
same outputs. Pixman uses this methodology in its test suite.
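
As a sketch of what such generated tests could look like: feed both the
reference pipeline and an optimized pipeline the same pseudo-random
inputs and require the outputs to match within a tolerance. The
function names below are hypothetical placeholders, not existing Weston
or Pixman API.

  /* Sketch only: a generated regression test comparing an optimized
   * color pipeline against the reference one. The pipelines are
   * passed in as function pointers; they are placeholders here. */
  #include <assert.h>
  #include <math.h>
  #include <stdlib.h>

  typedef float (*pipeline_fn)(float src);

  static void
  check_pipelines_agree(pipeline_fn reference, pipeline_fn optimized,
                        unsigned int seed, float tolerance)
  {
      /* Seeded so that any failure is reproducible. */
      srand(seed);

      for (int i = 0; i < 100000; i++) {
          /* Pseudo-random input in [0, 1); a real suite would also
           * cover negative, > 1.0 and special float values. */
          float src = (float)rand() / ((float)RAND_MAX + 1.0f);

          assert(fabsf(reference(src) - optimized(src)) <= tolerance);
      }
  }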

> Blender, DaVinci, Lightworks, GIMP or GEGL, and Darktable folks might
> have some input here.
> 

Thank you very much for the insights,
pq