[RFC wayland-protocols v2 1/1] Add the color-management protocol

Kai-Uwe ku.b-list at gmx.de
Thu Feb 28 19:58:04 UTC 2019


On 28.02.19 at 12:37, Pekka Paalanen wrote:
> On Thu, 28 Feb 2019 09:12:57 +0100
> Kai-Uwe <ku.b-list at gmx.de> wrote:
>
>> On 27.02.19 at 14:17, Pekka Paalanen wrote:
>>> On Tue, 26 Feb 2019 18:56:06 +0100
>>> Kai-Uwe <ku.b-list at gmx.de> wrote:
>>>  
>>>> On 26.02.19 at 16:48, Pekka Paalanen wrote:
>>>>> On Sun, 22 Jan 2017 13:31:35 +0100
>>>>> Niels Ole Salscheider <niels_ole at salscheider-online.de> wrote:
>>>>>    
>>>>>> Signed-off-by: Niels Ole Salscheider <niels_ole at salscheider-online.de>  
>>>>>>
>>>>>> +    <request name="set_device_link_profile">
>>>>>> +      <description summary="set a device link profile for a wl_surface and wl_output">
>>>>>> +        With this request, a device link profile can be attached to a
>>>>>> +        wl_surface. For each output on which the surface is visible, the
>>>>>> +        compositor will check if there is a device link profile. If there is one
>>>>>> +        it will be used to directly convert the surface to the output color
>>>>>> +        space. Blending of this surface (if necessary) will then be performed in
>>>>>> +        the output color space and after the normal blending operations.    
>>>>> Are those blending rules actually implementable?
>>>>>
>>>>> It is not generally possible to blend some surfaces into a temporary
>>>>> buffer, convert that to the next color space, and then blend some more,
>>>>> because the necessary order of blending operations depends on the
>>>>> z-order of the surfaces.
>>>>>
>>>>> What implications does this have on the CRTC color processing pipeline?
>>>>>
>>>>> If a CRTC color processing pipeline, that is, the transformation from
>>>>> framebuffer values to on-the-wire values for a monitor, is already set
>>>>> up by the compositor's preference, what would a device link profile
>>>>> look like? Does it produce on-the-wire or blending space?
>>>>>
>>>>> If the transformation defined by the device link profile produced
>>>>> values for the monitor wire, then the compositor will have to undo the
>>>>> CRTC pipeline transformation during composition for this surface, or it
>>>>> needs to reset CRTC pipeline setup to identity and apply it manually
>>>>> for all other surfaces.
>>>>>
>>>>> What is the use for a device link profile?    
>>>> A device link profile is useful to describe a transform from a buffer
>>>> to match one specific output. Device links can give applications very
>>>> fine-grained control over what happens to their colors. This is useful
>>>> in case an application wants to circumvent the default gamut mapping
>>>> optimised for each output connected to a computer, or wants to add
>>>> color effects like proofing. The intelligence is inside the device
>>>> link profile and the compositor applies it as a dumb rule.
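(For illustration only, here is a minimal sketch of how a compositor
might apply such a device link with Little CMS; the RGBA float layout,
the file-based profile and the omitted error handling are assumptions
made for the sketch, not part of the protocol proposal.)

#include <lcms2.h>

/* Apply a device link profile to npixels RGBA float pixels. */
void apply_device_link(const char *devlink_path,
                       const float *src, float *dst, unsigned npixels)
{
    cmsHPROFILE link = cmsOpenProfileFromFile(devlink_path, "r");

    /* A device link maps source device values directly to destination
     * device values, hence no separate output profile is given. */
    cmsHTRANSFORM xform = cmsCreateTransform(link, TYPE_RGBA_FLT,
                                             NULL, TYPE_RGBA_FLT,
                                             INTENT_PERCEPTUAL, 0);

    cmsDoTransform(xform, src, dst, npixels);

    cmsDeleteTransform(xform);
    cmsCloseProfile(link);
}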
>>> Hi Kai-Uwe,
>>>
>>> right, thank you. I did get the feeling right about what it is supposed
>>> to do, but I have a hard time imagining how to implement that in a
>>> compositor that also needs to cater for other windows on the same
>>> output and blend them all together correctly.
>>>
>>> Even without blending, it means that the CRTC color manipulation
>>> features cannot really be used at all, because there are two
>>> conflicting transformations to apply: from compositor internal
>>> (blending) space to the output space, and from the application content
>>> space through the device link profile to the output space. The only
>>> way that could be realized without any additional reverse
>>> transformations is that the CRTC is set as an identity pass-through,
>>> and both kinds of transformations are done in the composite rendering
>>> with OpenGL or Vulkan.  
>> What are CRTC color manipulation features in Wayland? Blending?
Hello Pekka,
> Wayland exposes nothing of CRTC capabilities. I think that is the best.
>
> Blending windows together is implicit from allowing pixel formats with
> alpha. Even then, from the client perspective such blending is limited
> to sub-surfaces, since those are all a client is aware of.
...
>>> If we want device link profiles in the protocol, then I think that is
>>> the cost we have to pay. But that is just about performance, while to
>>> me it seems like correct blending would be impossible to achieve if
>>> there was another translucent window on top of the window using a
>>> device link profile. Or even worse, a stack like this:
>>>
>>> window B (color profile)
>>> window A (device link profile)
>>> wallpaper (color profile)  
>> Thanks for the simplification.
>>
>>> If both windows have translucency somewhere, they must be blended in
>>> that order. The blending of window A cannot be postponed after the
>>> others.  
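(A tiny numeric illustration of why the z-order fixes the blending
order; the channel values are made up and premultiplied alpha is
assumed.)

#include <stdio.h>

/* Porter-Duff "over" for one premultiplied-alpha channel. */
static float over(float top, float top_alpha, float bottom)
{
    return top + (1.0f - top_alpha) * bottom;
}

int main(void)
{
    float w = 0.8f;                      /* opaque wallpaper */
    float a = 0.3f, a_alpha = 0.5f;      /* translucent window A */
    float b = 0.2f, b_alpha = 0.4f;      /* translucent window B */

    /* correct z-order: B over (A over wallpaper) */
    float correct  = over(b, b_alpha, over(a, a_alpha, w));
    /* A's blend postponed after the others */
    float deferred = over(a, a_alpha, over(b, b_alpha, w));

    printf("%f vs %f\n", correct, deferred);  /* 0.620000 vs 0.640000 */
    return 0;
}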
>> Reminds me of the discussions we had with the Cairo people years ago.
> Was the conclusion the same, or have I mistaken something?

My general impression was that an outside requirement (early
color-managed colors) came into conflict with Cairo's concepts. The
corner cases were the conflicts around blending output-referred colors,
which occasionally need to be in blending space, as you pointed out for
Wayland too. (But that is reasonably solved in other APIs as well.
Imagine PostScript suddenly having to present transparencies coming from
PDF->PostScript conversions: there are areas in the PostScript with an
almost pass-through of vectors, and areas which need blending and are
reasonably converted to pixels.)

>>> I guess that implies that if even one surface on an output uses a
>>> device link profile, then all blending must be done in the output color
>>> space instead of an intermediate blending space. Is that an acceptable
>>> trade-off?  
>> It will make "high quality" apps look like blending fun stoppers. Not so
>> nice. In contrast, converting back from output space to blending space,
>> blending there, and then converting to output space again would maintain
>> the blending space experience at some performance cost, but break the
>> original device link intent. Would that be an acceptable trade-off for
>> you? (So app
> Yes, that is exactly the conflict I meant.
>
>> client developers should never use translucent portions. However, the
>> toolkit or compositor might enforce translucency anyway, e.g. for window
>> decorations, and that outside translucency would break the app's
>> intention.)
> I really cannot say, I have no opinion on the matter so far. A
> compositor could be implemented either way.

Perhaps the sub-surface in Wayland is a means for apps to express the
intention of a color pass-through with, hopefully, no blending. At least
Dmitry Kazakov mentioned a concept with similarities for implementing
display HDR support (on Windows) inside the canvas of Krita (the open
source image editor).
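For what it is worth, a rough sketch of that idea with today's core
protocol; everything except the wl_* calls is an illustrative
placeholder, and whether the compositor really avoids blending such an
opaque sub-surface is entirely up to the compositor:

#include <wayland-client.h>

/* The caller provides the bound globals and a buffer without alpha. */
static struct wl_subsurface *
make_passthrough_canvas(struct wl_compositor *compositor,
                        struct wl_subcompositor *subcompositor,
                        struct wl_surface *parent,
                        struct wl_buffer *canvas_buffer)
{
    struct wl_surface *canvas = wl_compositor_create_surface(compositor);
    struct wl_subsurface *sub =
        wl_subcompositor_get_subsurface(subcompositor, canvas, parent);

    wl_subsurface_set_position(sub, 0, 0);
    wl_subsurface_set_desync(sub);  /* commit independently of the parent */

    /* An opaque, alpha-free buffer gives the compositor a chance to
     * skip blending for this sub-surface entirely. */
    wl_surface_attach(canvas, canvas_buffer, 0, 0);
    wl_surface_commit(canvas);
    return sub;
}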

> It is not just about window A with the device link profile, it is
> window B on top of that whose translucency would be a problem, since
> window A "forces" the color space to become the output color space
> before window B can be blended in. Or indeed, having to convert from
> output space to blending space for blending window B and then back to
> output space again.

None of that sounds simple. But either that whole round-trip conversion
or splitting the affected regions into sub-surfaces is one of the few
possibilities I see here.
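To make that round-trip concrete, here is a per-pixel sketch in C; all
four transform functions are placeholders for whatever conversions the
compositor has actually set up, not a proposed API:

struct pixel { float r, g, b, a; };

/* Placeholder transforms (assumptions for the sketch): */
struct pixel devlink_to_output(struct pixel p);
struct pixel output_to_blend(struct pixel p);
struct pixel blend_to_output(struct pixel p);
struct pixel over_premul(struct pixel top, struct pixel bottom);

/* Window A uses a device link; translucent window B lies on top. */
struct pixel compose(struct pixel win_a_src, struct pixel win_b_blend)
{
    /* A's device link goes straight to output space... */
    struct pixel out = devlink_to_output(win_a_src);

    /* ...but blending B correctly means leaving output space again, */
    struct pixel tmp = output_to_blend(out);
    tmp = over_premul(win_b_blend, tmp);

    /* ...and converting back: extra cost, and against A's intent. */
    return blend_to_output(tmp);
}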

>>> Does that even make any difference if the output space was linear at
>>> blending step, and gamma was applied after that?  
>> Interesting. There came the argument that the complete graphics pipeline
>> should be used when measuring the device for profile generation. So, if
>> a linear (blending) space is always shown to applications and
>> measurement tools, then that is what gets profiled, and fine. Anyway, a
>> comment from Graeme Gill would be welcome on how to profile in linear
>> space. As a side effect, Wayland/OpenGL can do the PQ/HLG decoding
>> afterwards, without the ICC profiling tool having to worry about it. I
>> guess all the dynamic HDR features will more or less destroy the static
>> connection of input values to display values. The problem for
>> traditional color management is that linearisation, or as others spell
>> it, the gamma transfer, will be a very late step that the monitor has to
>> perform correctly. So one arising question is: how will the 1D graphics
>> card LUT (VCGT) be implemented? With VCGT properly supported, the output
>> profile could still work on sufficiently well prepared device behaviour.
>>
>> Here is a possible process chain:
>>
>> * app buffer (device link for proofing or source profile) ->[ICC
>> conversion in compositor]->
>>
>> * blending space (linear) ->[gamma 2.2]->
>>
>> * calibration protocol ->[1D calibration (VCGT)]->
>>
>> * [encoding for on-the-wire values PQ/...]->
>>
>> * transfer to device
>>
>> The calibration protocol is there for completeness.
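As a reading aid, the same chain as compositor-side C pseudocode; apart
from the explicit gamma 2.2 step, every stage function is only a named
placeholder for the corresponding step in the list above:

#include <math.h>

/* Placeholder stages (assumptions, not a real API): */
float icc_convert_to_linear_blend(float v);  /* device link / source profile */
float blend_with_other_surfaces(float v);
float vcgt_lut_1d(float v);                  /* 1D calibration (VCGT) */
float encode_pq(float v);                    /* on-the-wire encoding */

/* One channel of one pixel, from app buffer to the wire. */
float pixel_to_wire(float app_value)
{
    float lin = icc_convert_to_linear_blend(app_value); /* ICC conversion */
    lin = blend_with_other_surfaces(lin);               /* linear blending */
    float v = powf(lin, 1.0f / 2.2f);                   /* gamma 2.2 */
    v = vcgt_lut_1d(v);                                 /* calibration */
    return encode_pq(v);                                /* transfer to device */
}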
> I'm happy to have managed to say something interesting! :-)

Hehe ;-) Personally I am very thankful to you all for taking part in
these discussions, and that over many years.

regards
Kai-Uwe Behrmann

> This is where we start going beyond my knowledge. However, the chain
> does look reasonable to me. I don't see any problems Wayland protocol
> wise, nor any awkward details being exposed, except perhaps the
> assumption of a single LUT in the pipeline (VCGT); but I suppose that is
> an ICC concept, and the only thing that matters is that it gets applied
> correctly (e.g. in Vulkan), not that it is actually programmed into
> specific registers of a video card.
>
> **
>
> Everyone,
>
> another thought about a compositor implementation detail that I would
> like to ask you all about: the blending space.
>
> If the compositor blending space was CIE XYZ with direct (linear)
> encoding to IEEE754 32-bit float values in pixels, with the units of Y
> chosen to match an absolute physical luminance value (or something that
> corresponds with the HDR specifications), would that be sufficient for
> all imaginable and realistic color reproduction purposes, HDR included?
>
> Or do I have false assumptions about the HDR specifications, and they
> do not define brightness in absolute physical units but somehow in
> relative units? I think I saw "nit" as the unit somewhere, which is an
> absolute physical unit.
>
> It might be heavy to use, both storage wise and computationally, but I
> think Weston should start with a gold standard approach that we can
> verify to be correct, encode the behaviour into the test suite, and
> then look at possible optimizations by looking at e.g. other blending
> spaces or opportunistically skipping the blending space.
>
> Would that color space work universally from the colorimetry and
> precision perspective, with any kind of gamut one might want/have, and
> so on?
>
> Meaning that all client content gets converted to CIE XYZ according to
> the client-provided ICC profiles, composited/blended, and then
> converted to output space according to the output ICC profile. In my
> mind, the conversion of client content to CIE XYZ would happen as part
> of sampling the client texture, so that client data remains in the
> format the client provided it in, and only the shadow framebuffer for
> blending would need to be in a 32-bit per channel format. At least for
> a start.
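(A sketch of that sampling step for an sRGB client buffer; the sRGB
EOTF, the sRGB-to-XYZ (D65) matrix and the assumed 80 cd/m^2 white
luminance are illustrative choices, not something the protocol would
dictate.)

#include <math.h>

struct xyz { float X, Y, Z; };   /* Y in cd/m^2, 32-bit floats */

/* sRGB electro-optical transfer function, per channel. */
static float srgb_eotf(float v)
{
    return v <= 0.04045f ? v / 12.92f
                         : powf((v + 0.055f) / 1.055f, 2.4f);
}

/* Convert one sRGB texel to absolute CIE XYZ while sampling it. */
struct xyz sample_to_xyz(float r, float g, float b)
{
    float R = srgb_eotf(r), G = srgb_eotf(g), B = srgb_eotf(b);
    struct xyz p = {
        0.4124f * R + 0.3576f * G + 0.1805f * B,
        0.2126f * R + 0.7152f * G + 0.0722f * B,
        0.0193f * R + 0.1192f * G + 0.9505f * B,
    };
    /* Scale relative luminance (white Y == 1.0) to an assumed
     * 80 cd/m^2 SDR white level. */
    p.X *= 80.0f; p.Y *= 80.0f; p.Z *= 80.0f;
    return p;
}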
>
>
> Thanks,
> pq


