[RFC wayland-protocols] Color management protocol
Daniel Stone
daniel at fooishbar.org
Wed Dec 21 13:54:00 UTC 2016
Hi Niels,
On 21 December 2016 at 13:24, Niels Ole Salscheider
<niels_ole at salscheider-online.de> wrote:
> On Wednesday, 21 December 2016, 12:05:12 CET, Daniel Stone wrote:
>> On 20 December 2016 at 20:49, Chris Murphy <lists at colorremedies.com> wrote:
>> > For example, a still very common form of partial data loss is the
>> > dumb program that can open a JPEG but ignores the EXIF color space
>> > metadata and the embedded ICC profile. What would be nice is if the
>> > application didn't have to know how to handle this: reading that
>> > data, and then tagging the image object with that color space
>> > metadata. Instead, the application should use some API that already
>> > knows how to read files, knows what a JPEG is, and knows all about
>> > the various metadata types and their ordering rules; that library is
>> > what makes the display request and knows it needs to attach the
>> > color space metadata. It's really the application that needs routing
>> > around.
>>
>> I agree (the specific JPEG case is something that's been bugging me
>> for some time), and it's something I want to build in, to make it as
>> easy as possible for people to gradually bring their clients up to
>> getting this right.
>
> This is really something that should be done by the toolkits (Qt, GTK, ...).
> I really hope that they start to read the profile from EXIF when opening an
> image. They can then either attach it to the subsurface that is used to
> display the image, or convert it to their blending space (which could match
> the blending space of the compositor) if blending is performed.
Sure. Complicated of course by things like embedded web views, but ...
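To make that concrete, here's roughly the shape I'd imagine the
toolkit side taking. Everything colour-related below is hypothetical
(zwp_color_manager_v1 and set_icc_profile are invented names for this
RFC); only the wl_surface calls are real:

#include <wayland-client.h>

/* Sketch only: zwp_color_manager_v1 does not exist yet. */
static void
show_decoded_jpeg(struct zwp_color_manager_v1 *mgr,
                  struct wl_surface *image_surface,
                  struct wl_buffer *pixels,
                  int icc_fd, uint32_t icc_size)
{
        /* Tag the (sub)surface with the ICC profile the image loader
         * pulled out of the JPEG, so the compositor knows how to
         * interpret the pixels... */
        zwp_color_manager_v1_set_icc_profile(mgr, image_surface,
                                             icc_fd, icc_size);

        /* ...then attach and commit as usual. */
        wl_surface_attach(image_surface, pixels, 0, 0);
        wl_surface_commit(image_surface);
}

The point being that the toolkit's image loader does this once, and
every application embedding it gets correct tagging for free.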
>> Similarly, I'd like to park the discussion about surfaces split across
>> multiple displays; it's a red herring. Again, in X11, your pixel
>> content exists in one single flat buffer which is shared between
>> displays. This is not a restriction we have in Wayland, and a lot of
>> the discussion here has rat-holed on the specifics of how to achieve
>> this based on assumptions from X11. It's entirely possible that the
>> best solution to this (a problem shared with heterogeneous-DPI
>> systems) is to provide multiple buffers. Or maybe, as you suggest
>> below, a single buffer normalised to an intermediate colour system of
>> perhaps wider gamut. Either way, there are a lot of ways to attack it,
>> but how we solve that is almost incidental to the core
>> colour-management design.
>
> Has there been any discussion about using a buffer per output to solve
> the heterogeneous-DPI problem? If we end up doing that, we might as
> well use it for color correction. But otherwise I would prefer the
> device-link profile solution.
No, it's something I've just thrown out here because I thought this
thread was too relentlessly productive and on-topic. It's _a_ possible
solution which doesn't seem immediately useless, though, so that's
something. I was mostly using it to illustrate that there may be
better long-term solutions than the immediately obvious ones.
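Just to show the shape of that idea, here's a client-side sketch with
a completely made-up attach_for_output request; nothing like it exists
today, and it's only here to illustrate what "multiple buffers" could
mean:

/* Hypothetical: one buffer per output the surface spans, each
 * rendered for that output's gamut and/or DPI. attach_for_output is
 * an invented request, not part of any protocol. */
struct output_buffer {
        struct wl_output *output;
        struct wl_buffer *buffer; /* rendered for this output */
};

static void
commit_per_output(struct wl_surface *surface,
                  struct output_buffer *bufs, int n_bufs)
{
        for (int i = 0; i < n_bufs; i++)
                wl_surface_attach_for_output(surface, bufs[i].output,
                                             bufs[i].buffer);
        wl_surface_commit(surface);
}

Whether that duplication is worth it compared to a device-link or
intermediate-space approach is exactly the trade-off we'd have to
work through.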
>> > So then I wonder where the real performance penalty is these days?
>> > A video card LUT is a simplistic 2D transform. Maybe the "thing"
>> > that ultimately pushes pixels to each display can push those pixels
>> > through a software 2D LUT instead of the hardware one, and do it at
>> > 10 bits per channel rather than on full-bit-depth data.
>>
>> Some of the LUTs/matrices in display controllers (see a partial
>> enumeration in reply to Mattias) can already handle wide-gamut colour,
>> with caveats. Sometimes they will be perfectly appropriate to use, and
>> sometimes the lack of granularity will destroy much of their value. If
>> the compositor is using the GPU for composition, then doing colour
>> transformations is extremely cheap, because we're rarely bound on the
>> GPU's ALU capacity.
>
> Yes, but as Graeme has pointed out, doing it in a shader means lower
> precision when using an 8-bit framebuffer. How feasible is it to use a
> higher-precision framebuffer, and how big would the performance impact
> be?
Well yeah, if you're using an 8-bit framebuffer then that caps your
effective precision. But even ignoring the fact that intermediate
calculations within shaders happen at vastly higher precision (often
32bpc) than either your source or destination buffer, I don't know of
any hardware which supports 10bpc sampler targets but not render
targets. So the sudden demotion to 8bpc precision doesn't just happen:
you either succeed or fail up front.
(By way of example, Mesa does not usefully expose 10bpc formats for
non-Intel drivers right now, even though the hardware and underlying
drivers support them; it's just a small bit of missing glue code. But
even there, there's no silent drop in precision; it simply wouldn't
work.)
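For instance, here's a minimal sketch of a compositor asking GL for a
10bpc render target to composite into, assuming a GL 3.x / GLES 3.x
context; fall_back_to_8bpc() is just a placeholder for whatever the
compositor would do instead:

#include <GLES3/gl3.h>

extern void fall_back_to_8bpc(void);

static void
create_10bpc_render_target(int width, int height)
{
        GLuint tex, fbo;

        /* A 10bpc colour buffer: GL_RGB10_A2. */
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_INT_2_10_10_10_REV, NULL);

        /* Bind it as the composition render target. */
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);

        /* No silent demotion to 8bpc: if the driver can't do it,
         * this fails loudly here. */
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) !=
            GL_FRAMEBUFFER_COMPLETE)
                fall_back_to_8bpc();
}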
Cheers,
Daniel