HDR support in Wayland/Weston

Pekka Paalanen ppaalanen at gmail.com
Mon Jan 21 11:56:29 UTC 2019


On Fri, 18 Jan 2019 19:02:01 +1100
Graeme Gill <graeme2 at argyllcms.com> wrote:

> Pekka Paalanen wrote:
> 
> Hi,
> 
> > If a wl_surface straddles multiple outputs simultaneously, then
> > wl_surface.enter/leave events indicate the surface being on all those
> > outputs at the same time. The client is expected to take all the
> > entered outputs into consideration when it chooses how to render its
> > image. For HiDPI, this usually means taking the maximum output scale
> > from that set of outputs. The compositor will then automatically
> > (down)scale for the other outputs accordingly.  
> 
> right, I was guessing this is how it works. So an obvious heuristic
> for HiDPI is to render to the highest density output, on the basis
> that the compositor down-sampling is the least detrimental to quality.
> 
> A wrinkle in using the same idea for Color Management is that such
> a heuristic may not be as clear. For the case of a window being moved from
> one output to another, then either a last to leave or first to enter
> might make sense. If (as I presume) surfaces are multiply mapped
> to achieve mirroring or similar (picture in picture ?), then I'm not so
> sure how the most important display can be determined heuristically.
> Even in the case of a projector, I could imagine under some circumstances
> the projected color accuracy is secondary, and other circumstances it is
> most important.

Hi Graeme,

yes, this is a good concern. I think we might need an extension that
tells the client which output to prioritise. We already have an
analogue of that in the Presentation-time extension: a wl_surface's
timings can only be synchronised to one output at a time, so the
presentation feedback events tell the client which output it was
synchronised to on each frame.

I cannot imagine a case where the output for timings and the output for
color would be different, but I also probably would not re-use the
Presentation-time extension here. Instead, we would probably need to
handle it in the color related extensions explicitly.
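Purely as an illustrative sketch of the client-side bookkeeping this implies (the names are invented, not real protocol bindings), a client could track the outputs it has entered and pick a render target: a compositor-designated priority output if a hypothetical color extension named one, otherwise the usual HiDPI heuristic of the highest scale, since compositor downscaling degrades quality the least:

```python
class Output:
    def __init__(self, name, scale):
        self.name = name
        self.scale = scale  # integer output scale, as in wl_output.scale

class SurfaceState:
    """Tracks which outputs a surface currently overlaps,
    mirroring wl_surface.enter/leave semantics."""

    def __init__(self):
        self.entered = set()
        self.priority = None  # output a hypothetical extension asked us to prioritise

    def on_enter(self, output):
        self.entered.add(output)

    def on_leave(self, output):
        self.entered.discard(output)

    def render_target(self):
        # Prefer the compositor-designated output if it is still entered;
        # otherwise fall back to the HiDPI heuristic: highest scale wins.
        if self.priority in self.entered:
            return self.priority
        return max(self.entered, key=lambda o: o.scale, default=None)
```

The same structure would work whether the priority hint came from a color extension or were derived from presentation feedback.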

> > Yes and no. Yes, we do and should let clients know what kind of outputs
> > their contents will be shown on. However, we will in any case need the
> > compositor to be able to do the full and correct conversion from what
> > ever the client has submitted to what is correct for a specific output,
> > because nothing guarantees that those two would always match.  
> 
> I don't think that's technically possible, because it's analogous to
> Wayland taking on the rendering.

Your explanation further below is very enlightening on what "rendering"
is.

> But I also don't think it's necessary
> either, because as long as the client can do its rendering for the
> output that is most quality sensitive, then it's enough that the compositor
> can do a less perfect transform between the display or colorspace that was
> rendered to and the display it is actually appearing on. This is a much simpler
> proposition than full general color management since you can assume that
> all buffers are display-like and only have 3 channels, and that simple
> to define color transform intents will be sufficient. Similarly to the HiDPI
> case, the visual quality may jump slightly if the surface is re-rendered
> for its actual output, but on most displays the difference shouldn't be
> terribly obvious unless you were looking for it.

Yes, I think we found the same page here. :-)

> Some nice properties of this approach are that it provides the usual
> mechanism for an application (like a Color Management profiling app.)
> to set device values using the null transform trick (setting source
> colorspace the same as destination. If the two profiles are the same,
> color conversion should be skipped by the CMM/compositor.)
> A second property is that it would be possible to set a default
> source colorspace for clients that don't know about color management
> (i.e. sRGB). This allows a "color managed desktop" using existing window
> managers and applications, solving the problems of using wide gamut or HDR
> displays running in their full mode.

Yes, indeed.
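The null transform trick described above can be sketched as a compositor-side short-circuit (a toy model; a real compositor would compare ICC profile IDs rather than hash whole blobs):

```python
import hashlib

def profile_id(icc_bytes):
    # Identify a profile by a hash of its contents; a sketch stand-in for
    # comparing the ICC profile ID field of real profiles.
    return hashlib.sha256(icc_bytes).hexdigest()

def convert_for_output(pixels, src_profile, dst_profile, transform):
    """Apply a color transform unless source and destination profiles match."""
    if profile_id(src_profile) == profile_id(dst_profile):
        # Null transform: pass device values straight through, which is
        # exactly what a profiling application needs when measuring.
        return pixels
    return [transform(p) for p in pixels]
```

The same branch also covers the unaware-client case: a client that never sets a colorspace gets a default (e.g. sRGB) source profile, and conversion happens only when the output profile differs.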

> > I'm not sure I understand what you mean. Above you seemed to agree with
> > this. Maybe I should emphasise the "if it has to"? That is, when for
> > whatever reason, the client provided content is not directly suitable
> > for the output it will be displayed on.  
> 
> OK, a detail I seem to be having difficulty conveying is the
> difference between doing a bulk conversion between two colorspaces,
> and rendering in a color aware fashion. Example :- consider rendering
> a document stored in a sophisticated page description language
> such as PDF, PS or XPS. These are color aware formats, and in general
> pages may be composed of multiple different graphic elements that can
> have their color described in a multitude of different ways. Some may
> be in device dependent spaces such as RGB. Some may be in CMYK.
> Some may be in device independent CIE based spaces such as L*a*b*.
> Some may be spot colors. Each may be specified to be rendered with
> specific color transform intents. For instance, a screen capture image may
> be specified in RGB and rendered with a Relative Colorimetric Intent,
> where the white points are matched and colors otherwise directly mapped
> and out of gamut colors clipped. A flow chart may be specified in RGB
> as well, but rendered to the display space using Saturation Intent,
> where white points are matched and the source gamut expanded to fill
> the display gamut, with some possible saturation enhancement. Other
> elements could be packaging sample colors specified as Pantone
> spot colors. Maybe these would be looked up in a named spot color library,
> and converted to the display colorspace using Absolute Colorimetric
> rendering, where white points are not matched, so that they bear the
> closest direct match to the swatch colors. Yet another element may
> be a photograph, and for this maybe a Perceptual style intent is desired,
> where white points are matched and source colors that are out of gamut
> for the display are mapped to in-gamut colors, and other colors in between
> are mapped proportionately, maintaining color graduation rather than
> leaving clipped regions. Apart from the wide variety of source
> colorspaces being blended together on an element by element,
> pixel by pixel basis, and the variety of color conversion styles
> that might be used, good quality gamut mappings need both
> source and destination colorspace gamuts to be known at
> the time the conversion is defined.

The explanation of "intents" is an eye-opener to me. I guess the only
thing I had been imagining was the Absolute Colorimetric method - that
colors could be absolute. Thank you very much for explaining this.
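To make the difference in character between the intents concrete, here is a deliberately toy 1D model (real gamut mapping operates on 3D gamut volumes; the numbers are invented). A single "chroma" value is mapped from a source range into a smaller display range three different ways:

```python
SRC_MAX = 1.2   # most saturated value present in the source
DST_MAX = 1.0   # what the display can reproduce

def relative_colorimetric(v):
    # white points matched, in-gamut colors mapped directly,
    # out-of-gamut values clipped
    return min(v, DST_MAX)

def perceptual(v):
    # entire source range compressed proportionally into the display
    # range, so graduations survive and nothing collapses into a
    # clipped flat region
    return v * DST_MAX / SRC_MAX

def saturation(v, boost=1.05):
    # source range scaled to fill the display gamut, optionally with a
    # mild saturation boost, then clipped
    return min(v * DST_MAX / SRC_MAX * boost, DST_MAX)
```

Absolute colorimetric differs from relative by skipping the white-point matching step entirely, which in this toy model would mean not normalising source white to display white before clipping. The key point from the quoted text survives even in 1D: choosing between clipping and compression requires knowing both gamut boundaries when the conversion is defined.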

> So one processing element
> or the other needs to do this :- either the compositor or the client
> application. You can't split this up without compromising color
> quality. Consequently I regard the following two approaches as undesirable:
> 
> 1) Client renders to some nominal colorspace and then
>    the compositor color converts from that to the actual
>    display space. Drawback: the client rendering can't honor
>    the intents correctly, because it does not know the actual
>    display colorspace. Mixing absolute and relative white point
>    handling would be difficult. (I'm not sure it would even
>    be possible if you don't make white point assumptions about
>    the displays.) Note that existing Color Managed systems
>    such as MSWindows, OS X, X11 and even (lately) Android don't
>    do it this way.
> 
> 2) Client hands the compositor all the color conversions. Even
>    if the client does a lot of the setup work and hands it
>    pre-cooked conversions (device links) for all possible display spaces
>    (and this would be unnecessarily slow to pre-compute, and
>     should really be computed on demand by a callback from
>     compositor to client), the application still needs to
>    somehow break down all the rendering into chunks that
>    the compositor can execute. The compositor is expected
>    to add a lot of complexity (colorspaces with up to 15 channels),
>    and the application needs a lot of modifications to break
>    down all its rendering into those chunks.

Yes, I think I finally see where you are coming from. That is all so
true, I completely agree.

> A better approach I think is the hybrid one we were talking
> about: Give the client enough information to decide which display
> it should optimize color rendering for. When the compositor needs
> to display the surface on some other display, it can use a simpler
> bulk color conversion to do so. Optimal color rendering can at least
> be achieved on one display (hopefully enough to satisfy the demanding
> color user), while still allowing the compositor to handle
> window transitions, mirroring etc. without requiring huge
> re-writes of applications. This is the analogy to current HiDPI handling.

Agreed.
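The compositor's side of this hybrid could be as simple as the following sketch: a 3x3 matrix conversion in linear light, applied only when the client's buffer lands on an output other than the one it rendered for (matrix contents and function names are illustrative only):

```python
def matmul3(m, rgb):
    # Multiply a 3x3 matrix by an RGB triple.
    return tuple(sum(m[r][c] * rgb[c] for c in range(3)) for r in range(3))

def bulk_convert(rgb_linear, src_to_xyz, xyz_to_dst):
    """Simple bulk conversion: source RGB -> XYZ -> destination RGB.

    This is the 'less perfect transform' fallback: it cannot honor
    per-element rendering intents, so out-of-gamut values are clipped.
    """
    xyz = matmul3(src_to_xyz, rgb_linear)
    r, g, b = matmul3(xyz_to_dst, xyz)
    return tuple(min(max(ch, 0.0), 1.0) for ch in (r, g, b))
```

Because all client buffers are display-like three-channel images by this point, the compositor never needs the 15-channel machinery or intent plumbing from approach 2; that complexity stays in the client, where it belongs.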

> > Yes, a compositor must implement all that, but this is now slipping to
> > the topic of calibration, which is very much off-scope for today. We
> > only want to consider how applications produce and provide content with
> > specific color properties for now.  
> 
> It's a necessary part of the picture. There's not much point in
> moving ahead with Color Management support if there is no
> easy means of creating display profiles to populate it with. So in
> terms of practical implementation I see them going hand in hand.

They may go hand-in-hand in practice, but protocol-wise I still see
them as two completely orthogonal features, and we need to split the
work into manageable chunks anyway.

Surely the color characterisation of a monitor is not specific to a
window system? While we still don't have a good solution to
measurements in Wayland, people can measure their monitors on Xorg, or
even Windows, I hope. Or even better: using a measuring tool running
directly on DRM KMS, which ensures the tool gets absolute control over
the display hardware. DRM Leases could make such a tool usable under a
Wayland compositor, too, so you can have a control UI window as a
Wayland window on another output. People can take advantage of color
managed output already while the community figures out a good way to
handle measuring.

As an example of orthogonality, I would like to point to the new
touchscreen calibration tool for Weston. Commit adding the new tool
lists the benefits it has over the old calibration tool that was using
the usual input interfaces in Wayland:
https://gitlab.freedesktop.org/wayland/weston/commit/b79dead1dded6744d251da5357e14c83b91cef32

This commit added the protocol extension:
https://gitlab.freedesktop.org/wayland/weston/commit/999876a8f98791f92badd89b5b62481bf6dfcb4a

The reason the extension is specific to Weston is that there was no
interest in making it a standard, no-one else saw the need to calibrate
touchscreen input. It could still be standardised if the interest
arises.

Likewise, measuring the color characteristics of a monitor will likely
require conveying a notion of intent to a Wayland compositor. Not only
does one want to be absolutely sure the compositor is not mangling the
pixel data, but the calibration pattern must also be shown on the
correct monitor in the correct place, even if some monitors are
cloned. This is not usually
possible with the public desktop set of Wayland extensions. Touchscreen
calibration had the same problem with choosing the output and
positioning the pattern on screen.


Thanks,
pq