HDR support in Wayland/Weston

Chris Murphy lists at colorremedies.com
Mon Mar 4 21:51:31 UTC 2019

On Mon, Mar 4, 2019 at 3:20 AM Pekka Paalanen <ppaalanen at gmail.com> wrote:
> X11 apps have (hopefully) done hardware LUT manipulation only through
> X11. There was no other way AFAIK, unless the app started essentially
> hacking the system via root privileges.
> The current DRM KMS kernel design has always had a "lock" (the DRM
> master concept) so that if one process (e.g. the X server) is in
> control of the display, then no other process can touch the display
> hardware at the same time.
> Before DRM KMS, the video mode programming was in the Xorg video
> drivers, so if an app wanted to bypass the X server, it would have had
> to poke the hardware "directly" bypassing any drivers, which is even
> more horrible than it might sound.

Sounds pretty horrible. Anyway, I'm definitely a fan of the answer to
the question found here: http://www.islinuxaboutchoice.com/

It sounds like legacy applications will use XWayland, and in the edge
case where such an app requests a literal video hardware LUT, that
could become some kind of surface effect scoped to that app's windows.
That seems sane to me. A way to make such computations almost free is
important to those apps; I think they only ever cared about doing it
with a hardware LUT because it required no CPU or GPU time. In really
ancient times, the performance of display compensation (i.e. doing a
transform from sRGB to mydisplayRGB to compensate for the fact that my
display is not really an sRGB display) was variable. A few companies
figured out a way to do this really cheaply; even Apple had a way to
apply a non-LUT, lower quality profile to do display compensation on
live QuickTime video, over 20 years ago. Meanwhile, one of the
arguments the Mozilla Firefox folks made for moving away from lcms2 in
favor of qcms was performance, but even that wasn't good enough,
performance-wise, for always-on display compensation. I still don't
know why, other than that I recognize imaging pipelines are
complicated and really hard work.

Also in that era, before OS X, there were configurable transform
quality settings: fast, good, best - or something like that. For all I
know, best is just as cheap these days as fast and you don't need to
distinguish such things. But if you did, I think the historic evidence
shows only fast and best matter. Fast might have meant taking a LUT
display profile and describing the TRC with a gamma function instead,
or using 4 bits per channel instead of 8, and 8 instead of 16. These
days I've heard there are hardware optimizations for floating point
that make it pointless to do integer math as a performance saving.
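
The "gamma function instead of a LUT" shortcut above can be sketched.
This is a hypothetical illustration, not any real CMM's code: it fits
a single gamma exponent to a measured TRC table by least squares in
log-log space, which is the kind of cheap approximation a "fast"
setting might have used.

```python
import math

def fit_gamma(lut):
    """Least-squares fit of y = x**g in log-log space for a 0..1 TRC LUT."""
    num = den = 0.0
    n = len(lut)
    for i in range(1, n):               # skip x = 0, where log is undefined
        x = i / (n - 1)
        y = max(lut[i], 1e-6)           # clamp so log() stays finite
        num += math.log(x) * math.log(y)
        den += math.log(x) ** 2
    return num / den

# Made-up "measured" curve that happens to be a pure 2.19 power law:
measured = [(i / 255) ** 2.19 for i in range(256)]
print(round(fit_gamma(measured), 2))    # → 2.19
```

A real measured TRC wouldn't be a pure power law, which is exactly the
quality trade-off: the gamma approximation is cheaper to evaluate but
less faithful.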

Back then we were really worried about getting the "correct 8 bits per
channel" to the display, since that was the whole pipeline we had; any
video hardware LUT used for calibration took bits away from that
pipeline.
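
That bit loss is easy to demonstrate with a toy sketch (illustrative
numbers only): baking even a mild gamma tweak into a 256-entry, 8-bit
video LUT maps several inputs to the same output, so fewer than 256
distinct levels survive.

```python
def calibration_lut(gamma=1.1):
    """A mild calibration curve baked into a 256-entry, 8-bit LUT."""
    return [round(255 * (i / 255) ** gamma) for i in range(256)]

lut = calibration_lut()
print(len(set(lut)))   # fewer than 256 distinct output levels remain
```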

And that's gotten quite a lot easier these days, because at the
not-even-super-high end there are commodity displays that are
calibrated internally and supply a minimum 8 bits per channel, often
now 10 bits per channel, through the pipeline. On those displays we
don't even worry about calibration on the desktop. And that means the
high end you get almost for free from an application standpoint. The
things to worry about are shitty laptop displays, which might be 8
bits per channel addressable but aren't in any real sense giving us
that much precision; it might be 6 or 7. And there's a bunch of
abstraction baked into the panel that you have no control over that
limits this further. So you kinda have to be careful about doing
something seemingly rudimentary like changing its white point from D75
to D55. Hilariously, it can be the crap displays that'll cause the
most grief, not the high end use case.
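
The precision loss on such a panel is simple arithmetic. A toy
illustration (assuming, as a simplification of what real panels do,
that the panel just drops the low bits):

```python
# 256 addressable input levels, but the panel only resolves 6 bits:
levels = {v & ~0b11 for v in range(256)}   # truncate the two low bits
print(len(levels))                          # → 64 distinct levels
```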

OK so how do you deal with that? Well, it might in fact be that you
don't want to force accuracy, but rather re-render with the purpose of
making it look as decent as possible on that display, even if the
transforms you're doing aren't achieving a colorimetrically accurate
result that you can measure, but do achieve a pleasing result. I know
it's funny - how the frak do we distinguish between these use cases?
And then what happens if you have what seems to be a mixed case of a
higher end self-calibrating display connected to a laptop with a
shitty display? Haha, yeah you can be fakaked almost no matter what
choices you make. That's my typical setup, by the way.

> > Even if it turns out the application tags its content with displayRGB,
> > thereby in effect getting a null transform, (or a null transform with
> > whatever quantization happens through 32bpc float intermediate color
> > image encoding), that's functionally a do not color manage deviceRGB
> > path.
> What is "displayRGB"? Does it essentially mean "write these pixel
> values to any monitor as is"? What if the channel value's data type
> does not match?

Good question. I use 'displayRGB' as a generic shorthand for the
display profile, which is different on every system. On my system
right now it's
/home/chris/.local/share/icc/edid-388f82e68786f1c5ac552f0b4d0c945f.icc
but it's something else on your system. I call them both displayRGB to
mean whatever profile happens to be set as the profile for the
display; because everyone has different displays, displayRGB is not
any one thing.

A simple transform equation is described as:

source color space -> destination color space

The source might be a camera or a display or a standard RGB space like
sRGB. The destination might be a display or printer. So let's say
displayRGB' means a specific color space in an example system:

source color space = displayRGB'
destination color space = displayRGB'

displayRGB' -> displayRGB'

The RGBs are already in the same space, so a transform isn't
necessary; the RGBs stay the same, and an optimization is to do a null
transform.

A single profile does not imply a transform. It does imply defining
device color values in terms of either CIE XYZ or CIE L*a*b*.

> I suppose if a compositor receives content with "displayRGB" profile,
> assuming my guess above is correct, it would have to apply the inverse
> of the blending space to output space transform first, so that the
> total result would be a null transform for pixels that were not blended
> with anything else.

If you don't have a way to check that source color space = destination
color space, and to optimize away the unnecessary transform by doing a
null transform instead, and you literally do the transforms
source -> intermediate -> destination, then you pick up quantization
error, but otherwise you should end up with essentially identical
results.
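
That quantization error can be shown with a toy sketch. The
"transform" here is just 8-bit gamma encode/decode for illustration
(real CMM transforms are 3D); the point is only that the pointless
round trip is not lossless:

```python
GAMMA = 2.2

def encode(x):
    """Linear 0..1 -> 8-bit gamma-encoded intermediate."""
    return round(255 * x ** (1 / GAMMA))

def decode(v):
    """8-bit intermediate -> linear 0..1."""
    return (v / 255) ** GAMMA

errors = [abs(round(255 * decode(encode(i / 255))) - i) for i in range(256)]
print(max(errors))   # → 1: off by one code value in places
```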

Quite a lot of color management transform engines look for such
source=destination cases and optimize them away; hence, null
transform. It's a really common case that comes up all the time. This
null transform can be done by policy (ignore all profiles!) or it can
be done programmatically, by recognizing the need for a null transform
prior to doing the actual transform work.
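
A programmatic version of that check might look like this sketch. The
helper names are hypothetical; as a real-world aside, ICC profiles do
carry a profile ID field, defined as an MD5 of the (partially zeroed)
profile data, for exactly this kind of cheap equality test.

```python
import hashlib

def profile_id(icc_bytes):
    """Cheap identity for a profile: hash of its bytes (hypothetical helper)."""
    return hashlib.md5(icc_bytes).hexdigest()

def apply_transform(pixels, src_profile, dst_profile, cmm_transform):
    # Null-transform optimization: identical profiles mean the pixels are
    # already in the destination space, so pass them through untouched.
    if profile_id(src_profile) == profile_id(dst_profile):
        return pixels
    return cmm_transform(pixels, src_profile, dst_profile)

# Same bytes on both ends -> no per-pixel work, values come back unchanged:
fake_icc = b"illustrative bytes, not a real ICC profile"
print(apply_transform([10, 20, 30], fake_icc, fake_icc, None))  # → [10, 20, 30]
```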

Chris Murphy
