HDR support in Wayland/Weston

Graeme Gill graeme2 at argyllcms.com
Fri Jan 18 08:02:01 UTC 2019

Pekka Paalanen wrote:


> If a wl_surface straddles multiple outputs simultaneously, then
> wl_surface.enter/leave events indicate the surface being on all those
> outputs at the same time. The client is expected to take all the
> entered outputs into consideration when it chooses how to render its
> image. For HiDPI, this usually means taking the maximum output scale
> from that set of outputs. The compositor will then automatically
> (down)scale for the other outputs accordingly.

Right, I was guessing this is how it works. So an obvious heuristic
for HiDPI is to render for the highest density output, on the basis
that the compositor's down-sampling is the least detrimental to quality.
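As a rough sketch of that heuristic, a client could track the scales
of the outputs its surface has entered and render at the maximum
(`pick_buffer_scale` is a hypothetical helper, not part of the Wayland
API):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical client-side helper: choose the scale to render at from
 * the scales of all outputs the surface has entered.  Taking the
 * maximum means the densest output gets full quality, and the
 * compositor down-scales for the others. */
static int pick_buffer_scale(const int *entered_scales, size_t n)
{
	int best = 1; /* wl_surface buffer scale defaults to 1 */

	for (size_t i = 0; i < n; i++)
		if (entered_scales[i] > best)
			best = entered_scales[i];
	return best;
}
```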

A wrinkle in using the same idea for Color Management is that such
a heuristic may not be as clear-cut. For the case of a window being moved
from one output to another, either a last-to-leave or a first-to-enter
rule might make sense. If (as I presume) surfaces are multiply mapped
to achieve mirroring or similar (picture in picture?), then I'm not so
sure how the most important display can be determined heuristically.
Even in the case of a projector, I could imagine that under some
circumstances the projected color accuracy is secondary, and under other
circumstances it is the most important.

> This scheme also means that the compositor does not necessarily need to
> wait for a client to render when the outputs suddenly change. It knows
> how to transform the existing image for new outputs already. The visual
> quality may jump afterwards when the client catches up, but there is no
> window blinking in and out of existence.

Right, I took this as a particular design aim in Wayland.

> Yes and no. Yes, we do and should let clients know what kind of outputs
> their contents will be shown on. However, we will in any case need the
> compositor to be able to do the full and correct conversion from what
> ever the client has submitted to what is correct for a specific output,
> because nothing guarantees that those two would always match.

I don't think that's technically possible, because it's analogous to
Wayland taking on the rendering. But I also don't think it's necessary,
because as long as the client can do its rendering for the
output that is most quality sensitive, it's enough that the compositor
can do a less perfect transform between the display or colorspace that was
rendered to and the display the surface actually appears on. This is a much
simpler proposition than full general color management, since you can assume
that all buffers are display-like and have only 3 channels, and that simple
to define color transform intents will be sufficient. Similarly to the HiDPI
case, the visual quality may jump slightly if the surface is re-rendered
for its actual output, but on most displays the difference shouldn't be
terribly obvious unless you were looking for it.

Some nice properties of this approach are that it provides the usual
mechanism for an application (such as a Color Management profiling app.)
to set device values using the null transform trick (setting the source
colorspace the same as the destination; if the two profiles are
identical, color conversion should be skipped by the CMM/compositor).
A second property is that it would be possible to set a default
source colorspace (i.e. sRGB) for clients that don't know about color
management. This allows a "color managed desktop" using existing window
managers and applications, solving the problems of using wide gamut or HDR
displays running in their full mode.
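The null transform short-cut could be sketched as below. Comparing
profiles by an ID string is an assumption for illustration; a real
CMM would compare ICC profile contents or checksums:

```c
#include <string.h>

/* When the client tags its surface with the same profile as the
 * target display, the CMM/compositor should pass pixel values through
 * untouched, letting a profiling app address the device directly.
 * Profile identity is sketched here as a string compare. */
static int profile_conversion_needed(const char *src_profile,
				     const char *dst_profile)
{
	return strcmp(src_profile, dst_profile) != 0;
}
```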

> This makes no use of the monitor HDR capability, but it would start
> with the most important feature: backward compatibility. Adding the
> Wayland protocol to allow clients to reasonably produce and submit HDR
> content could come after this as a next phase.

Yes, that would be a good thing to get out of it.

> I'm not sure I understand what you mean. Above you seemed to agree with
> this. Maybe I should emphasise the "if it has to"? That is, when for
> whatever reason, the client provided content is not directly suitable
> for the output it will be displayed on.

OK, a detail I seem to be having difficulty conveying is the
difference between doing a bulk conversion between two colorspaces,
and rendering in a color-aware fashion. Example: consider rendering
a document stored in a sophisticated page description language
such as PDF, PS or XPS. These are color-aware formats, and in general
pages may be composed of multiple different graphic elements whose
color can be described in a multitude of different ways. Some may
be in device dependent spaces such as RGB. Some may be in CMYK.
Some may be in device independent CIE-based spaces such as L*a*b*.
Some may be spot colors. Each may be specified to be rendered with a
specific color transform intent.

For instance, a screen capture image may be specified in RGB and
rendered with a Relative Colorimetric intent, where the white points
are matched, colors are otherwise directly mapped, and out of gamut
colors are clipped. A flow chart may be specified in RGB as well, but
rendered to the display space using a Saturation intent, where white
points are matched and the source gamut is expanded to fill the
display gamut, with some possible saturation enhancement. Other
elements could be packaging sample colors specified as Pantone
spot colors. These might be looked up in a named spot color library,
and converted to the display colorspace using Absolute Colorimetric
rendering, where white points are not matched, so that they bear the
closest direct match to the swatch colors. Yet another element may
be a photograph, for which a Perceptual style intent may be desired,
where white points are matched, source colors that are out of gamut
for the display are mapped to in-gamut colors, and other colors in
between are mapped proportionately, maintaining color graduation
rather than leaving clipped regions.

Apart from the wide variety of source colorspaces being blended
together on an element by element, pixel by pixel basis, and the
variety of color conversion styles that might be used, good quality
gamut mappings need both the source and destination colorspace gamuts
to be known at the time the conversion is defined. So one processing
element or the other needs to do all of this: either the compositor
or the client application. You can't split this up without
compromising color quality. Consequently I regard the following two
approaches as undesirable:

1) Client renders to some nominal colorspace and then
   the compositor color converts from that to the actual
   display space. Drawback: the client rendering can't honor
   the intents correctly, because it does not know the actual
   display colorspace. Mixing absolute and relative white point
   handling would be difficult. (I'm not sure it would even
   be possible if you don't make white point assumptions about
   the displays.) Note that existing Color Managed systems
   such as MSWindows, OS X, X11 and even (lately) Android don't
   do it this way.

2) Client hands the compositor all the color conversions. Even
   if the client does a lot of the setup work and hands over
   pre-cooked conversions (device links) for all possible display
   spaces (and this would be unnecessarily slow to pre-compute, and
   should really be computed on demand by a callback from compositor
   to client), the application still needs to somehow break down all
   its rendering into chunks that the compositor can execute. The
   compositor is expected to add a lot of complexity (colorspaces
   with up to 15 channels), and the application needs a lot of
   modifications to break its rendering down into those chunks.
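A toy single-channel illustration of why a gamut mapping needs both
gamuts at the time the conversion is defined (the gamut bounds and
intent names are hypothetical simplifications, not ArgyllCMS's actual
gamut mapping):

```c
/* Relative colorimetric clips out-of-gamut values and needs only the
 * destination bound; a perceptual-style mapping compresses the whole
 * source range proportionately to preserve graduation, which requires
 * knowing the source gamut as well. */
enum intent { INTENT_REL_COLORIMETRIC, INTENT_PERCEPTUAL };

static double map_channel(double v, double src_max, double dst_max,
			  enum intent in)
{
	switch (in) {
	case INTENT_REL_COLORIMETRIC:
		return v > dst_max ? dst_max : v; /* clip out-of-gamut */
	case INTENT_PERCEPTUAL:
		return v * dst_max / src_max;     /* compress whole range */
	}
	return v;
}
```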

A better approach, I think, is the hybrid one we were talking
about: give the client enough information to decide which display
it should optimize its color rendering for. When the compositor needs
to display the surface on some other display, it can use a simpler
bulk color conversion to do so. Optimal color rendering can at least
be achieved on one display (hopefully enough to satisfy the demanding
color user), while still allowing the compositor to handle
window transitions, mirroring etc. without requiring huge
re-writes of applications. This is analogous to the current HiDPI handling.

> Yes, a compositor must implement all that, but this is now slipping to
> the topic of calibration, which is very much off-scope for today. We
> only want to consider how applications produce and provide content with
> specific color properties for now.

It's a necessary part of the picture. There's not much point in
moving ahead with Color Management support if there is no
easy means of creating display profiles to populate it with. So in
terms of practical implementation I see them going hand in hand.

	Graeme Gill.
