[RFC wayland-protocols v2 1/1] Add the color-management protocol
ppaalanen at gmail.com
Thu Feb 28 11:37:27 UTC 2019
On Thu, 28 Feb 2019 09:12:57 +0100
Kai-Uwe <ku.b-list at gmx.de> wrote:
> Am 27.02.19 um 14:17 schrieb Pekka Paalanen:
> > On Tue, 26 Feb 2019 18:56:06 +0100
> > Kai-Uwe <ku.b-list at gmx.de> wrote:
> >> Am 26.02.19 um 16:48 schrieb Pekka Paalanen:
> >>> On Sun, 22 Jan 2017 13:31:35 +0100
> >>> Niels Ole Salscheider <niels_ole at salscheider-online.de> wrote:
> >>>> Signed-off-by: Niels Ole Salscheider <niels_ole at salscheider-online.de>
> >>>> + <request name="set_device_link_profile">
> >>>> + <description summary="set a device link profile for a wl_surface and wl_output">
> >>>> + With this request, a device link profile can be attached to a
> >>>> + wl_surface. For each output on which the surface is visible, the
> >>>> + compositor will check if there is a device link profile. If there is one
> >>>> + it will be used to directly convert the surface to the output color
> >>>> + space. Blending of this surface (if necessary) will then be performed in
> >>>> + the output color space and after the normal blending operations.
> >>> Are those blending rules actually implementable?
> >>> It is not generally possible to blend some surfaces into a temporary
> >>> buffer, convert that to the next color space, and then blend some more,
> >>> because the necessary order of blending operations depends on the
> >>> z-order of the surfaces.
> >>> What implications does this have on the CRTC color processing pipeline?
> >>> If a CRTC color processing pipeline, that is, the transformation from
> >>> framebuffer values to on-the-wire values for a monitor, is already set
> >>> up by the compositor's preference, what would a device link profile
> >>> look like? Does it produce on-the-wire or blending space?
> >>> If the transformation defined by the device link profile produced
> >>> values for the monitor wire, then the compositor will have to undo the
> >>> CRTC pipeline transformation during composition for this surface, or it
> >>> needs to reset CRTC pipeline setup to identity and apply it manually
> >>> for all other surfaces.
> >>> What is the use for a device link profile?
> >> A device link profile is useful to describe a transform from a buffer
> >> to match one specific output. Device links can give very fine-grained
> >> control to applications to decide what they want done with their
> >> colors. This is useful in case an application wants to circumvent the
> >> default gamut mapping optimised for each output connected to a
> >> computer, or to add color effects like proofing. The intelligence is
> >> inside the device link profile and the compositor applies it as a dumb rule.
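The "dumb rule" described above could be sketched roughly as follows. All names here (Surface, Output, the toy transforms) are hypothetical illustrations, not a real compositor API: a device link, when present for the target output, simply replaces the compositor's default source-to-blending-to-output chain.

```python
# Hypothetical sketch: per-output transform selection in a compositor.
# Pixels are modeled as plain numbers to keep the example minimal.

class Surface:
    def __init__(self, to_blending, device_links=None):
        self.to_blending = to_blending          # source profile: buffer -> blending space
        self.device_links = device_links or {}  # output name -> direct buffer -> output transform

class Output:
    def __init__(self, name, from_blending):
        self.name = name
        self.from_blending = from_blending      # output profile: blending space -> output

def output_transform_for(surface, output):
    """Pick the transform used when showing `surface` on `output`."""
    link = surface.device_links.get(output.name)
    if link is not None:
        # Device link present: applied verbatim, no compositor gamut mapping.
        return link
    # Default chain chosen by the compositor.
    return lambda v: output.from_blending(surface.to_blending(v))
```

The point of contention in the thread is precisely that the device-link branch bypasses the blending space the compositor would otherwise use.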
> > Hi Kai-Uwe,
> > right, thank you. I did get the feeling right on what it is supposed to
> > do, but I have a hard time imagining how to implement that in a compositor
> > that also needs to cater for other windows on the same output and blend
> > them all together correctly.
> > Even without blending, it means that the CRTC color manipulation
> > features cannot really be used at all, because there are two
> > conflicting transformations to apply: from compositor internal
> > (blending) space to the output space, and from the application content
> > space through the device link profile to the output space. The only
> > way that could be realized without any additional reverse
> > transformations is that the CRTC is set as an identity pass-through,
> > and both kinds of transformations are done in the composite rendering
> > with OpenGL or Vulkan.
> What are CRTC color manipulation features in wayland? blending?
Wayland exposes nothing of CRTC capabilities. I think that is for the best.
Blending windows together is implicit from allowing pixel formats with
alpha. Even then, from the client perspective such blending is limited
to sub-surfaces, since those are all a client is aware of.
> > If we want device link profiles in the protocol, then I think that is
> > the cost we have to pay. But that is just about performance, while to
> > me it seems like correct blending would be impossible to achieve if
> > there was another translucent window on top of the window using a
> > device link profile. Or even worse, a stack like this:
> > window B (color profile)
> > window A (device link profile)
> > wallpaper (color profile)
> Thanks for the simplification.
> > If both windows have translucency somewhere, they must be blended in
> > that order. The blending of window A cannot be postponed after the
> > others.
> Reminds me of the discussions we had with the Cairo people years ago.
Was the conclusion the same, or have I mistaken something?
> > I guess that implies that if even one surface on an output uses a
> > device link profile, then all blending must be done in the output color
> > space instead of an intermediate blending space. Is that an acceptable
> > trade-off?
> It will make "high quality" apps look like blending fun stoppers. Not so
> nice. In contrast, converting back from output space to blending
> space, then blending, and then converting to output again will maintain
> the blending space experience at some performance cost, but break the
> original device link intent. Would that fit as a trade-off for you? (So app
Yes, that is exactly the conflict I meant.
> client developers should never use translucent portions. However, the
> toolkit or compositor might enforce translucency anyway, e.g. for window
> decorations, and that outside translucency would break the app's intent.)
I really cannot say, I have no opinion on the matter so far. A
compositor could be implemented either way.
It is not just about window A with the device link profile, it is
window B on top of that whose translucency would be a problem, since
window A "forces" the color space to become the output color space
before window B can be blended in. Or indeed, having to convert from
output space to blending space for blending window B and then back to
output space again.
> > Does that even make any difference if the output space was linear at
> > blending step, and gamma was applied after that?
> Interesting. The argument came up that the complete graphics pipeline
> should be used when measuring the device for profile generation. So, if
> linear (blending) space is always shown to applications and measurement
> tools, then that is what should be profiled, and fine. Anyway, a comment
> from Graeme Gill would be welcome on how to profile in linear space. As a
> side effect, wayland/OpenGL can do the PQ/HLG decoding afterwards,
> without the ICC profiling tool having to worry about it. I guess all the
> dynamic HDR features will more or less destroy the static connection of
> input values to display values. The problem for traditional color
> management is that linearisation, or as others spell it, the gamma
> transfer, will be a very late step inside the monitor to do correctly. So
> one question that arises is, how will the 1D graphics card LUT (VCGT) be
> implemented? With VCGT properly supported, the output profile could
> still work on sufficiently well prepared device behaviour.
> Here a possible process chain:
> * app buffer (device link for proofing or source profile) ->[ICC
> conversion in compositor]->
> * blending space (linear) ->[gamma 2.2]->
> * calibration protocol ->[1D calibration (VCGT)]->
> * [encoding for on-the-wire values PQ/...]->
> * transfer to device
> The calibration protocol is there for completeness.
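The quoted chain could be sketched as a composition of per-pixel stages. Every stage body and constant below is a toy stand-in (the real transforms would come from ICC profiles, the calibration protocol, and the wire encoding), so only the ordering is meant to be meaningful:

```python
# Toy sketch of the quoted process chain, one scalar pixel at a time.

def icc_to_blending(v):
    # app buffer -> linear blending space (ICC conversion in compositor)
    return v ** 2.2            # pretend source decode to linear

def gamma_2_2(v):
    # blending space (linear) -> gamma 2.2 encoding
    return v ** (1 / 2.2)

def vcgt_1d_lut(v):
    # 1D calibration curve (VCGT); a toy 98% scaling stands in for a LUT
    return min(1.0, v * 0.98)

def wire_encode(v):
    # encoding for on-the-wire values (PQ/... in reality; 8-bit here)
    return round(v * 255)

def pipeline(v):
    return wire_encode(vcgt_1d_lut(gamma_2_2(icc_to_blending(v))))
```

A device link for proofing would replace the first stage for the surfaces that carry one.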
I'm happy to have managed to say something interesting! :-)
This is where we start going beyond my knowledge. However, the chain
does look reasonable to me. I don't see any problems Wayland protocol
wise, or in exposing awkward details, except perhaps the assumption of a
single LUT in the pipeline (VCGT); but I suppose that is an ICC concept,
and the only thing that matters is that it gets applied correctly (e.g.
in Vulkan), not that it is actually programmed into specific registers
in a video card.
Another thought about a compositor implementation detail I would like
to ask you all about is the blending space.
If the compositor blending space was CIE XYZ with direct (linear)
encoding to IEEE754 32-bit float values in pixels, with the units of Y
chosen to match an absolute physical luminance value (or something that
corresponds with the HDR specifications), would that be sufficient for
all imaginable and realistic color reproduction purposes, HDR included?
Or do I have false assumptions about the HDR specifications, and they do
not define brightness in physical absolute units but somehow in
relative units? I think I saw "nit" as the unit somewhere, which is an
absolute physical unit.
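To make the proposal concrete, here is a sketch of the suggested blending space: linear-light CIE XYZ in 32-bit floats with Y scaled to absolute luminance. The standard linear-sRGB-to-XYZ (D65) matrix is real; the 80 cd/m² (nits) reference white is the nominal sRGB value and is only an assumption for this illustration:

```python
# Linear sRGB -> CIE XYZ (D65), standard 3x3 matrix.
SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

SRGB_WHITE_NITS = 80.0  # assumed reference white luminance (nominal sRGB)

def srgb_linear_to_abs_xyz(r, g, b):
    """Convert linear sRGB to XYZ with Y in absolute cd/m^2 (nits)."""
    rgb = (r, g, b)
    xyz = [sum(row[i] * c for i, c in enumerate(rgb)) for row in SRGB_TO_XYZ]
    return [v * SRGB_WHITE_NITS for v in xyz]
```

With this encoding, HDR content would simply carry larger Y values than the SDR white level, which is what makes an absolute-luminance blending space attractive.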
It might be heavy to use, both storage-wise and computationally, but I
think Weston should start with a gold standard approach that we can
verify to be correct, encode the behaviour into the test suite, and
then look at possible optimizations, e.g. other blending spaces or
opportunistically skipping the blending space.
Would that color space work universally from the colorimetry and
precision perspective, with any kind of gamut one might want/have?
Meaning, that all client content gets converted according to the
client-provided ICC profiles to CIE XYZ, composited/blended, and then
converted to output space according to the output ICC profile. In my
mind, the conversion of client content to CIE XYZ would happen as part
of sampling the client texture, so that client data remains in the
format the client provided it in, and only the shadow framebuffer for
blending would need to be a 32-bit per channel format. At least for a
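The composition step described above could be sketched like this (hypothetical helper names, scalar pixels): client pixels stay in their own format and are converted to XYZ at sampling time; only the float "shadow framebuffer" holds blending-space values, and the output conversion happens once at the end.

```python
# Sketch: back-to-front compositing in a float XYZ shadow framebuffer.
# `to_xyz` plays the role of the per-client ICC conversion applied while
# sampling the client texture; `from_xyz` is the output profile.

def composite(stack, to_xyz, from_xyz):
    """Blend a back-to-front stack of (pixel, alpha) pairs in XYZ floats."""
    acc = 0.0                                # one texel of the shadow framebuffer
    for pixel, alpha in stack:
        xyz = to_xyz(pixel)                  # convert at sampling time
        acc = alpha * xyz + (1 - alpha) * acc  # "over" in blending space
    return from_xyz(acc)                     # encode once, to output space
```

A device-link surface is exactly the case this loop cannot express: its transform targets the output space directly, not the shared XYZ accumulator.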