HDR support in Wayland/Weston

Graeme Gill graeme2 at argyllcms.com
Wed Feb 6 01:29:51 UTC 2019


Pekka Paalanen wrote:
> Graeme Gill <graeme2 at argyllcms.com> wrote:

Hi,

>> I don't have any basis to voice an opinion as to which particular protocol
>> these different aspects would best fall into (wayland core or one of the
>> supplementary xdg protocols ? - I'm not clear on what the purpose of these
>> divisions is), I just know that they are intimately related, and development
>> of one without the other will risk mis-steps. I don't really understand
>> why you think they are orthogonal.

> color management related extensions would be neither Wayland core (in
> wayland.xml file) nor under the XDG umbrella (XDG is specific to
> desktops while color is not specific to desktops).

I understand that color management would be a Wayland extension protocol.
I'm not clear on whether color management tool support is specific to
desktops or not though.

> The Wayland upstream cannot force anyone to implement any
> extension. The upstream can offer design help, reviews, discussion
> forum and a repository to host extensions, which should make extensions
> popular among client software and suitable for more than one
> compositor. The pressure for someone to implement an extension only
> ever comes from the community: people wanting new things to work.

Sure.

> Color management related extensions would be several different
> extensions, each catering to its own scope. The one we are talking
> about here is for clients to be able to reasonably provide color
> managed content and enable the correct displaying of it under automatic
> adaptation to outputs. Other color management related extensions could
> allow e.g. measuring the color profiles of monitors, to be discussed at
> another time.

I disagree. They are two sides of the same coin. They may well
be described by separate (but closely related) protocols, but
they should be designed and implemented together. This minimizes
the total amount of work involved, and ensures cohesion.

> Being able to provide color managed content implies knowing the color
> properties of the used hardware and images. It does not imply being
> able to measure the color properties:

Color properties of the display have to come from somewhere. Less work
overall, and a more secure result, if they are measured in the way they
are intended to be.

> you don't re-measure the color
> properties continuously while using an application. Or if you actually
> do, we can cater for that use case when designing a measurement
> extension.

Users may choose to perform display calibration, profiling or verification
at any time. It may be once a year, once a month or once a day. They
may need to do it right now, just before they start a critical project,
or perhaps because they have altered some display setting (brightness,
contrast, color temperature, colorspace emulation mode.)

> I don't understand. How is a color profile, or say, an .icc file or
> whatever you use to store color profiles, dependent on the window
> system? Do you mean you cannot use the same profile data files on
> Windows and Linux X11?

It's unsafe to assume so. They may be identical, or they may not.
The conservative assumption amongst color critical users is that they
are not. As a color technologist I suspect they are often very
similar; anecdotally, people have reported differences.
(In the days of CRTs, they almost certainly differed with
different screen resolutions. Probably less so with modern
display technologies.)

In any case, expecting a user to boot up an alternate
operating system to do a color profile is a pretty big ask,
not something that is currently expected of them, and few will
be open to such an expectation. They will instead switch to
a system that doesn't require this of them.

> Or do you mean that the window system implementation adds its own
> unknown effect to the pixel values?

It's possible. The conservative approach is to assume this might be
the case. ("Conservative" in the sense of not wanting the uncertainty,
or of having to waste time verifying that the two are in fact identical.)

> If that is so, then the measuring
> environment on those systems is fundamentally broken to begin with.

Not at all. Any such processing is part and parcel of the effective
display characteristic. It has to be profiled as well, to end up
with a profile that is valid for that system.

>> It's unsafe to switch pixel pipelines in the process of characterization,
>> since it risks measuring transformations that happen/don't happen in
>> one context, and not in the other. It's also not something that
>> end users will put up with - if a system claims to support color management,
>> then it includes the color management tools necessary to setup,
>> maintain and verify its operation on a day to day basis.
> 
> That seems like an unrealistic assumption, it kind of precludes the
> whole idea of interoperability. If your assumption is held, then you
> would need to make separate measurements for e.g. every single pixel
> format × color space an application is going to provide content to the
> compositor in, multiplied by all the different ways a compositor can
> make use of the graphics card hardware to produce the video signal on
> that single system, and you get to redo this every time you update any
> software component.

If the compositor materially altered the color rendering in these individual
cases, then yes, although in practice such a system would be unusable.

But no, that's not what I mean. I'm talking about more static differences
in how systems may be implemented or configured. Assuming
that systems with completely different implementations behave identically
in a color sense is a big leap of faith, one that is typically rewarded with
frustration and disappointment.

> I would rather aim towards a design where you measure your monitor (and
> not the window system), and the compositor will take care of keeping
> that measured profile valid regardless of how the compositor chooses to
> use the hardware.

I don't know how you would do that with any certainty. Profiling
has to occur at the point in the processing where the profile
would be applied. The simplest and only certain way to do this
is to profile the system as it is implemented, not make assumptions
about other elements "doing the right thing". Experience shows that
they often don't.
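
To make the point concrete, here is a minimal sketch (all names
hypothetical, not from any existing API) of what "profiling through
the implementation" means: each test patch is submitted at the point
where the profile will later be applied, and must travel the whole
remaining pipeline before being measured:

    /* The profile characterizes everything downstream of the point
     * where it is applied:
     *
     *   app RGB -> [profile applied here] -> compositor blending
     *           -> hardware LUT -> video signal -> panel -> photons
     *
     * So each test patch is submitted at "profile applied here" and
     * travels the whole remaining path before being measured. */
    struct xyz { double X, Y, Z; };

    /* Hypothetical helpers: show an unmanaged patch via the real
     * compositor path, and read the measurement instrument. */
    extern void show_patch_unmanaged(double r, double g, double b);
    extern struct xyz measure_patch(void);

    void characterize(const double (*test_rgb)[3], struct xyz *out, int n)
    {
        for (int i = 0; i < n; i++) {
            /* The pixel values must NOT be color managed on the way
             * out, but MUST pass through every other stage the
             * profile will later sit in front of. */
            show_patch_unmanaged(test_rgb[i][0], test_rgb[i][1],
                                 test_rgb[i][2]);
            out[i] = measure_patch();
        }
    }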

> I would put responsibility to the compositor and
> hardware drivers to function correctly, instead of requiring every
> application needing to verify the whole system every time it starts up.

System implementation is assumed to remain constant between restarts. It's
at the user's discretion as to when they will verify or profile.

> This will also leave compositor implementations more room to make the
> most out of the hardware at hand, rather than forcing them into a
> single specific pixel pipeline than can never change.

If the compositor has responsibility to "function correctly", then it has
little latitude for implementation variations. If instead profiling is
done as it should be, through the implementation, then any deliberate or
accidental variations in the implementation will be accounted for in the
profile, and the implementation has some scope for more flexibility.

> Things will come piece by piece. Things will get tested and verified
> piece by piece. Bugs will be introduced and fixed one by one. Trying to
> design everything at once and implement everything at once makes no
> difference: people still work piece by piece.

Sure, but you need both ends of each piece in order to test it.
Implementing the whole of one side of things before implementing
the other side needed to test it makes testing each piece
harder and less certain, and requires a lot of duplicated effort
(i.e. test code that would ultimately be thrown away.)

> I think it's better to acknowledge that people work piece by piece and
> things may have to change in the future once more pieces come together.

Maybe, but I think you are making far too big a deal of the
task. I don't see anything radical being contemplated here, just the
matter of how best to integrate the basic mechanisms into the way Wayland
works. It should be designed as a whole, in order to end up with
something that is cohesive (i.e. it needs system design for best results, rather
than being invented ad hoc.) There are lots of other color managed
environments to refer to in picking good and bad approaches (X11, MSWindows, OS X,
PostScript, PDF, etc.).

> Wayland-protocols repository rules also have this workflow defined:
> extensions start as unstable, can be revised and changed radically,
> until they are agreed to be stable. We can just agree to not call color
> management related extensions stable until we have also measuring
> covered.

Sure.

> 100% similarity is not realistic in any case.

Repeatability is an assumption of practical color management.

> I totally agree on the usability of a measured color profile, which is
> why I would like it to describe the output device (a monitor in this
> case) and nothing else.

It doesn't work like that. The profile is applied at a point in
the pixel processing. For the profile to be valid, it has to
be based on the response of the display system at that
point. So profiling the display independently of the
rendering system the profile will be used in is both unnecessary work
and a significant point of failure.

> You cannot assume the same software (you get updates, and other things
> change dynamically too) will always use the same setup.

But I can assume that the same behavior of the software is the aim of
a system that is intended to suit the user's purposes. An update
that contained changes that altered the color response of a system
would be disruptive, and would have to come with a bunch of
warnings, i.e. "You need to re-profile your display after applying
this update". Like any other disruptive system update, this is not a situation
that would be created lightly by a responsible software development team.

> Wayland has a different paradigm that I have been trying to explain.
> 
> You said the measuring application needs to be able to show content
> unhindered by the compositor. I agree, and for that we need a Wayland
> extension taking into account the special details of measuring, e.g.
> how to position the color patch you want to physically measure and many
> many more aspects that an application cannot control using public
> unprivileged interfaces alone.

There isn't that much to it. Being able to install profiles & calibration
curves (something the compositor needs at startup anyway). Being able to display
output at a particular location on a particular output (something that the
input calibration has had to tackle too.) Being able to label the color test values
in such a way that they don't have color management applied (something
that is part of the client color management protocol). That's about it.
Yes, some of these will require consideration of privilege levels.
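
As a sketch only (none of these names exist in any current protocol,
they are purely illustrative of the scope just listed), the client-side
face of such a measurement extension might look something like this:

    #include <stddef.h>
    #include <stdint.h>
    #include <wayland-client.h>

    /* Hypothetical measurement-extension handle. */
    struct measure_context;

    /* 1. Install a (possibly temporary) profile and calibration
     *    (vcgt-style) curves for one output. */
    int measure_set_profile(struct measure_context *ctx,
                            struct wl_output *output,
                            const void *icc_data, size_t icc_len);

    /* 2. Place a test surface at an absolute position on a specific
     *    output, above other windows, with screensaving inhibited. */
    int measure_place_patch(struct measure_context *ctx,
                            struct wl_output *output,
                            struct wl_surface *patch,
                            int32_t x, int32_t y);

    /* 3. Label the patch surface so the compositor passes its pixels
     *    through without applying color management. */
    int measure_set_passthrough(struct measure_context *ctx,
                                struct wl_surface *patch);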

> "Install a profile" into the compositor has these drawbacks the very
> least:
> 
> - "installing" is a global action, hence it means the application is
>   hijacking at least a whole output if not the whole compositor to
>   itself, this must be separate from the general application usage as it
>   needs to be privileged and exclusive

I'm not sure what you mean by "hijacking". Typically, installation
of display profiles is per-user, with setting system defaults being a privileged
operation. It certainly doesn't have much impact on other applications. A color
aware application might like to be notified that a color profile it has
downloaded from the compositor has changed, but non color managed
applications can remain blissfully ignorant. A compositor will probably want
to re-convert buffers that used the profile that was updated.

> - it assumes that the compositor works in a very specific way, that may
>   not match the hardware it has to work with, maybe avoiding the full
>   potential of the hardware

You'll have to give me an example, because I don't know what
you are thinking of.

> - the profile is for an output as a whole, which means that if you
>   installed a profile that includes effects from pixel formats and
>   source color spaces, you cannot show other applications on the same
>   output correctly

That's the nature of system provided color management. A client
that implicitly or explicitly uses the system facility gets
the plain vanilla fallback. Any application that wants to have
more control can do so, by doing its own conversions to the
specific display space. Profile installation is the user's
choice: they are in control of what profiles they make, and
which ones they install. Default profiles should be designed to be useful.

> Yes, it works through and in cooperation with the compositor. It needs
> the compositor cooperation to be able to turn off the input coordinate
> transformation that the compositor normally does, and it needs to
> hijack that one input device completely. There is no other interface
> that would allow it to do that than the (Weston-specific) Wayland
> extension specifically designed for it.

An alternative approach would have been to have the calibration application
install a unity mapping that applied just to it. It would
then get the raw coordinates, and could then install the calibrated
mapping. That is, I'm not convinced that a special mode is actually needed
in the protocol design, apart from the facility to display graphics
at specific points on the display.
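
As a rough illustration of the alternative: the 2x3 affine matrix
form below is the one libinput and Weston use for touchscreen
calibration (applied to normalized device coordinates), and a "unity"
mapping is just the identity matrix, so a tool given it would see raw
device coordinates:

    /* The identity calibration: output equals raw input. */
    static const float unity[6] = { 1, 0, 0,  0, 1, 0 };

    /* Apply a 2x3 calibration matrix to a normalized (x, y) touch
     * coordinate. With `unity` this returns the reading unchanged. */
    static void apply_calibration(const float m[6],
                                  float x, float y,
                                  float *cx, float *cy)
    {
        *cx = m[0] * x + m[1] * y + m[2];
        *cy = m[3] * x + m[4] * y + m[5];
    }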

> Another advantage of the explicit special interface is that if the
> measurement application happens to crash, the compositor will restore
> normality automatically. The same happens if the user decides to
> forcefully quit the measurement, e.g. the measurement application is
> misbehaving: the compositor can offer an easy exit.

Right, but I would assume this is the case with any normal client.
A color profiling application doesn't set the display to any significantly
different condition from its normal operation. I would assume that
temporarily installed profiles/calibration would be automatically uninstalled
when the client quits, along with any surfaces it is using, etc.

> (Ever had a game
> change the video mode on X11 and then all your desktop icons are piled
> into the top-left corner, or the game crashes and leaves your monitor
> in low-resolution mode where you can't even see a desktop settings
> dialog in full? Automatic recovery and explicit action intents helped
> also those use cases.)

Right, but I'm not clear on why you think this type of danger exists
here. The process of calibrating and profiling doesn't explicitly disrupt
other applications - they keep running as normal. The color
may be disrupted slightly in the process of performing calibration,
and screensavers need to be suppressed when doing any long
series of measurements (analogous to the user wanting to watch a movie),
and a test window needs to be positioned above other windows.

Installation of profiles hasn't historically been a particular danger
point. Out of X11, OS X and MSWindows, only MSWindows does any
sanity checking of profiles when they are installed (it checks
that the calibration curves are monotonic.) The worst that typically
occurs is that a user installs a bad profile and, on noticing
that it doesn't look right, reverts the change. More defensive
approaches are possible, perhaps along the lines of the confirm-or-revert
mechanism used when changing screen resolution.
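
For illustration, the monotonicity check mentioned above is tiny; a
minimal sketch in C, assuming 16-bit ramp entries (the usual
VideoLUT/vcgt form):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* A ramp is sane if it never decreases from one entry to the next. */
    static bool ramp_is_monotonic(const uint16_t *ramp, size_t len)
    {
        for (size_t i = 1; i < len; i++)
            if (ramp[i] < ramp[i - 1])
                return false;
        return true;
    }

    /* Check all three per-channel calibration curves before loading. */
    static bool calibration_is_sane(const uint16_t *r, const uint16_t *g,
                                    const uint16_t *b, size_t len)
    {
        return ramp_is_monotonic(r, len) &&
               ramp_is_monotonic(g, len) &&
               ramp_is_monotonic(b, len);
    }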

> Likewise, color measuring needs to make "the curves" linear like you
> say, and it needs to hijack one output completely. There is no
> interface that would allow the measurement application to do that, so
> one needs to be designed according to the Wayland paradigm.

Again, I don't know what you mean by "hijack". The effect of changing
calibration curves will depend on the compositor implementation.
If it uses the hardware LUTs, then changes will affect the whole
display. If it is implemented some other way, it may only affect the
test window. Either way, this is something that is being done at the
express wish of the user. They will be participating in the whole
process of placing the measurement instrument on the display etc., etc.
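
For reference, "making the curves linear" through a hardware LUT just
means loading an identity ramp; a minimal sketch, assuming the 16-bit
entry format that KMS gamma LUTs use:

    #include <stddef.h>
    #include <stdint.h>

    /* Fill a per-channel LUT with the identity mapping:
     * entry 0 -> 0, entry len-1 -> 0xffff, linear in between.
     * Requires len >= 2. */
    static void fill_identity_ramp(uint16_t *ramp, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            ramp[i] = (uint16_t)((i * 0xffff) / (len - 1));
    }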

>> Quite the contrary - see above. If the compositor is mangling all the
>> pixel data, then it needs to mangled in exactly the same way while
>> profiling so that the response is the same and so that the profile
>> is valid for the compositor mangling pixel data that way. To have
>> it un-mangled while profiling, yet mangled for all the applications
>> means that the profile is invalid.
> 
> I believe we can do better than that.

I don't know what you mean. There is no "better" than correct.
A profile is not correct (i.e. valid) unless it characterizes
the behavior of the display from the point at which it is
being applied through to the point where the photons hit the user's eyes.

Cheers,

Graeme Gill.

