On Wed, 19 May 2021 11:53:37 +0300 Pekka Paalanen <ppaalanen@gmail.com> wrote:
...
TL;DR:
I would summarise my comments so far into these:
- Telling the kernel the color spaces and letting it come up with whatever color transformation formula from those is not enough, because it puts the render-intent policy decision in the kernel.
- Telling the kernel what color transformations need to be done is good, if it is clearly defined.
- Using an enum-based UAPI to tell the kernel what color transformations need to be done (e.g. which EOTF or EOTF^-1 to apply at a step in the abstract pipeline) is very likely ok for many Wayland compositors in most cases, but may not be sufficient for all use cases. Of course, one is always bound by what the hardware can do, so not a big deal.
- You may need to define mutually exclusive KMS properties (referring to my email in another branch of this email tree).
- I'm not sure I (we?) can meaningfully review things like an "SDR boost" property until we know ourselves how to composite different types of content together. Maybe someone else could.
Does this help or raise thoughts?
The work on Weston CM&HDR right now is aiming to get it to a point where we can start properly testing different compositing approaches, methods, and parameters, and I expect that will also feed back into the Wayland CM&HDR protocol design.
I forgot to mention one important thing:
Generic Wayland compositors will be using KMS planes opportunistically. The compositor will be switching between GL and KMS compositing on demand, refresh by refresh. This means that both GL and KMS compositing must produce identical results, or users will see "color flicks" on every switch.
This is a practical reason why we really want to know in full detail how the KMS pipeline processes pixels.
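The per-refresh decision could look roughly like the sketch below; all names here are hypothetical, for illustration only, and not actual Weston or KMS API:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative sketch: a compositor's per-refresh choice between putting
 * a view on a KMS plane and compositing it with GL. */
struct view {
	bool format_supported_by_plane; /* plane accepts the buffer format */
	bool needs_color_transform;     /* view requires a color transform */
	bool plane_can_do_transform;    /* KMS pipeline reproduces the GL
					 * result exactly */
};

/* A view may be offloaded to a KMS plane only if the plane produces
 * pixels identical to GL composition; otherwise switching paths between
 * refreshes causes visible "color flicks". */
static bool can_use_kms_plane(const struct view *v)
{
	if (!v->format_supported_by_plane)
		return false;
	if (v->needs_color_transform && !v->plane_can_do_transform)
		return false;
	return true;
}
```

This is why the fallback condition must be computable from a fully specified KMS pixel pipeline: without knowing exactly what the hardware does, the compositor cannot know when the two paths match.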
Thanks, pq