[RFC 0/1] Color manager calibration protocol v1
e.burema at gmail.com
Tue Apr 16 21:42:30 UTC 2019
On Tue, 16 Apr 2019 at 17:03, Pekka Paalanen <ppaalanen at gmail.com> wrote:
> On Tue, 16 Apr 2019 13:33:02 +0000
> Erwin Burema <e.burema at gmail.com> wrote:
> > On Tuesday, 16 April 2019, Pekka Paalanen wrote:
> > > On Tue, 16 Apr 2019 13:11:06 +0200
> > > Erwin Burema <e.burema at gmail.com> wrote:
> > >
> > > > On Tue, 16 Apr 2019 at 12:45, Pekka Paalanen <ppaalanen at gmail.com> wrote:
> > > > >
> > > > > On Sun, 14 Apr 2019 12:57:47 +0200
> > > > > Erwin Burema <e.burema at gmail.com> wrote:
> > > > >
> > > > > > Without a way to calibrate/profile screens, a color management
> > > > > > protocol loses a lot of its value. To add this missing feature, I
> > > > > > wrote the following protocol.
> > > > > >
> > > > > > The idea is that the calibration/profiling software only sets the
> > > > > > RGB triplet, and the compositor is then responsible for drawing a
> > > > > > rectangular region on the selected output. Since not all
> > > > > > calibration tools will sit at the center of the screen, the user
> > > > > > should be able to modify the placement of this rectangular region.
> > > > > > Unless specified otherwise, the monitor profile (if any) should
> > > > > > not be applied, but the GPU curve should. Currently, to set a new
> > > > > > curve, the calibration tool generates a new ICC profile with the
> > > > > > wanted curve in the VCGT tag (I am not sure this is the best
> > > > > > option, but it seems the most universal one). After profiling, the
> > > > > > last uploaded ICC profile can then be saved (a compositor is not
> > > > > > required to honor this request; in that case it should send the
> > > > > > not_saved error). If the compositor doesn't save, or the
> > > > > > connection with this protocol is broken, the compositor should
> > > > > > restore the previous settings.
> > > > >
> > > > > Hi,
> > > > >
> > > > > I only took a very quick glance, but I do like where this design is
> > > > > going. I'll refrain from commenting on wl_surface vs. not for now
> > > > > though.
> > > > >
> > > > > Forgive my ignorance, but why does the "GPU curve" need to be a
> > > > > custom curve provided by the client?
> ok, below we are finally getting to the point.
> > > Ok, you start with an identity curve and iterate. Why only the "GPU
> > > curve" instead of a "full" color correction transformation?
> > >
> > Since we are trying to set up the "GPU curve", in this case a full
> > transform would only get in the way.
> The goal is to create a full color transform from some client chosen
> color space into the output color space, so that we can determine the
> color profile of the output. You might iterate only on the curve at
> first, but do you not then iterate on the full transform as well to
> ensure you get the result you want, at least to check once? How will you
> do that with your extension?
No, here we are not interested in the full transform; that would be too
limited. We are only interested in the output part: specifically, in how a
certain RGB triplet sent to the display maps to X'Y'Z' (an absolute color
space used as one of the two options for the profile connection space in an
ICC profile). The X'Y'Z' triplet is provided by the colorimeter/spectrometer
that measures the patch. This of course means that before we have a profile
we have to iterate over many patches. My extension is meant to let
calibration software set a certain RGB triplet for the patch, after which
the software reads out the X'Y'Z' value from the calibration tool (most
likely via USB these days, although serial is also an option).
The above is what is called characterization or profiling. Another step
that (optionally) happens before this is called calibration: there we use
the screen settings and the video card LUT to set up the display in a
preferred state with a nice response. If we have a really nice screen we
might be able to get away with only calibrating and telling software it is
sRGB/AdobeRGB; in practice we need both (especially on cheaper screens).
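To make the calibration step concrete, here is a rough sketch of my own (an
illustration, not part of the proposal; it assumes the display's native
response is well approximated by a single gamma exponent, which real panels
only roughly satisfy) of how a correction curve for the video card LUT
could be computed:

```python
def correction_curve(measured_gamma, target_gamma, size=256):
    """Per-channel video card LUT (normalized 0..1) that bends a display
    with a native response of ~measured_gamma toward target_gamma."""
    # The display applies x**measured_gamma to whatever leaves the LUT, so
    # the LUT itself must apply x**(target_gamma / measured_gamma):
    # (x**(t/m))**m == x**t.
    exponent = target_gamma / measured_gamma
    return [(i / (size - 1)) ** exponent for i in range(size)]

# Example: pull a natively dark (gamma ~2.4) panel toward gamma 2.2.
curve = correction_curve(2.4, 2.2)
```

In practice the measured response is a table of patch readings rather than
one exponent, but the shape of the computation is the same.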
> Or asking in other words: why do you need the compositor to implement a
> special-case path where it applies only a part of an ICC profile?
We are skipping the whole ICC profile, or at least the parts that will
actually be used for color management. The VCGT tag is an optional,
non-standard extension, useful for keeping calibration and profiling
information for a screen together. Color management will work fine without
it (we would just need another way to store the calibration information,
but that is a solvable problem).
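For illustration, here is a sketch of packing such a calibration curve into
a vcgt tag body. The layout below follows the widely implemented table
variant of the (non-standard) vcgt tag as found in common CMS code; treat
the exact field layout as an assumption of mine rather than a normative
reference:

```python
import struct

def pack_vcgt_table(red, green, blue):
    """Pack three per-channel LUTs (lists of 16-bit ints) into a vcgt tag
    body, table variant: 4-byte signature, 4 reserved bytes, gamma type 0
    (= table), then channel count, entry count, entry size in bytes,
    followed by the entries, red channel first (all big-endian)."""
    assert len(red) == len(green) == len(blue)
    header = struct.pack(">4sIIHHH", b"vcgt", 0, 0, 3, len(red), 2)
    entries = struct.pack(">%dH" % (3 * len(red)), *(red + green + blue))
    return header + entries

# An identity table: 256 entries spanning the 16-bit range per channel.
identity = [i * 257 for i in range(256)]
tag = pack_vcgt_table(identity, identity, identity)
```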
> Why not let the compositor apply the output profile you submit in full,
> and be smart about what you put in that profile?
Because we don't have an output profile yet? This is all about creating
one (plus the calibration).
> Are you trying to avoid having to tag the test color with a color
> profile?
No, although making that optionally possible might be nice for verification.
> To me it would make most sense if you picked one or the other: either
> pass the test color values to the wire directly, which means you do not
> play with the "GPU curve" as it will be identity; or use the full
> existing color correction pipeline just like normal apps do, tag your
> test color with a suitably manufactured color profile which results in
> an identity transformation on the first iteration, and iterate on the
> output profile you set.
But the calibration curve is strictly speaking not part of the profile, and
some screens really need it to create an at least somewhat usable profile.
After setting the video card LUT we of course need to characterize the
output, but as I said, for that we don't need an input profile (just raw
RGB triplets to the screen).
> I think this is something Graeme tried to explain to me: using a
> special path that does half of the operations is prone to bugs because
> it does non-trivial things, affects the result of measurements, but is
> normally not used. Although I think he would also object to the "pass
> directly to the wire" mode as well, because that also is a special path
> not used normally.
Graeme also said the following, as the author of Argyll:
Another good description comes from Elle Stone.
So yes, we do need both a special direct-pass mode and a pass-via-video-
card-LUT mode, and this is another reason I prefer not to use a wl_surface
for this; that would be too easily abused.
> > > > > Is the reason to use the "GPU curve" that you assume there is a
> > > > > 8 bits per channel framebuffer and you need to use the hardware
> > > > > LUT to choose which 8 bits wide range of the possibly 14 bits
> > > > > channel you want to address? (Currently a client cannot know if
> > > > > the framebuffer is 8 bits or less or more.)
> > > > >
> > > >
> > > > Currently I am not assuming anything about the curve's bitness, so
> > > > I am not sure where you got that from? The curve should be set with
> > > > the ICC vcgt tag, which I am pretty sure supports bit depths higher
> > > > than 8. If you have a better idea for setting the video card LUT, I
> > > > would like to hear it.
> > >
> > > If you do not assume anything about bitness, why does the protocol
> > > spec have bitdepth? Is it the per-channel bitdepth of the
> > > framebuffer, the wire, the monitor, the minimum for the whole pixel
> > > pipeline, or?
> > Maximum of whole pixel pipeline
> I don't see how that could be useful.
I think we have a bit of a misunderstanding going on here; we actually
meant the same thing, namely MIN( MAX(screen), MAX(GPU),
MAX(communication) ). Is that right?
> > I tried to specify that with an n-bit pipeline only the n least
> > significant bits of the uint should be used. The intent is to only
> > send 8-bit data over an 8-bit pipeline, and the same for 10, 12 or 14
> > bits.
> Oh, so the bitdepth value in the event is really just a scaling factor
> for the test color values?
Sort of? An 8-bit value would be sent as 0...0xxxxxxxx (24 zeroes followed
by the actual 8-bit value), a 10-bit value as 0...0xxxxxxxxxx (22 zeroes
followed by the 10-bit value), etc. I don't think that is scaling?
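A hypothetical helper of mine to illustrate the encoding described here:
the value occupies the low bits, the padding above is always zero, and the
meaning of the word therefore depends on the separately advertised
bitdepth:

```python
def encode_channel(value, bitdepth):
    """Place an n-bit channel value in the low n bits of a 32-bit uint,
    with zero padding above, as the draft protocol describes."""
    assert 0 <= value < (1 << bitdepth)
    return value  # the high (32 - bitdepth) bits simply stay zero

# 8-bit white and 10-bit white as 32-bit wire words:
assert format(encode_channel(255, 8), "032b") == "0" * 24 + "1" * 8
assert format(encode_channel(1023, 10), "032b") == "0" * 22 + "1" * 10
```

Note that under this scheme 0xFF means "white" at 8 bits but a dark quarter
tone at 10 bits, which is why the bitdepth event is needed at all.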
> Ok, that explains why it is so vaguely defined and quite arbitrary.
> Why not use the full 32 bits mapping to [0.0, 1.0] always, let the
> compositor convert it to whatever format the framebuffer uses, and
> remove the event?
I will need to inspect the code of Argyll, or get an answer from Graeme,
because it is a bit of an assumption of mine that we would need the actual
display bitdepth; if we don't, we can just as well do what you suggest.
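For comparison, a sketch of the compositor-side conversion under the
full-range alternative discussed here (a hypothetical helper, assuming 0
maps to 0.0 and 0xFFFFFFFF to 1.0):

```python
def full_range_to_nbit(value32, bitdepth):
    """Requantize a client value that always uses the full 32-bit range
    (0 -> 0.0, 0xFFFFFFFF -> 1.0) to the framebuffer's actual bit depth."""
    normalized = value32 / 0xFFFFFFFF
    return round(normalized * ((1 << bitdepth) - 1))

# The same wire value lands on the corresponding code in any depth:
assert full_range_to_nbit(0xFFFFFFFF, 8) == 255
assert full_range_to_nbit(0xFFFFFFFF, 10) == 1023
```

This is what removing the bitdepth event would imply: the client stops
caring about the pipeline's depth and the compositor quantizes.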
> > > [...] off except the final "GPU curve" LUT, the framebuffer value will be
> > > converted into a LUT input value, a LUT output value is looked up
> > > and maybe interpolated, then truncated back to LUT output bitness.
> > > Then it gets pushed into the wire which has its own bitness, before
> > > it reaches the monitor, which may convert that again to anything
> > > from 5 to 14 bits per channel for the panel and maybe dither.
> > >
> > > The precision you will get is the lowest bitness found in that path.
> > > Except, depending on at which step the "GPU curve" is applied, you
> > > might be able to use that to address more bits than what the lowest
> > > bitness along the path allows, as long as the lowest bitness occurs
> > > before the LUT.
> > >
> > > Do you mean that calibration measurement tools do not use nor expect
> > > such tricks?
> > >
> > Dithering, yes, most tools integrate over it; some of the other tricks
> > are compensated for in software.
> Sorry, I didn't mean compensating. I meant that the measurement program
> could leverage modifying the LUT to access more values on the wire than
> it could with an identity LUT.
> An example:
> The display pipeline only accepts values [0, 255], it has a LUT where
> input values can be [0, 255] and output values are [0, 1023]. The
> output value gets fed into the wire; the transport uses 10 bits per
> channel.
> If you program an identity LUT and nothing else, you'll cover the [0,
> 1023] range with 256 points, so you miss in precision.
> If you program the LUT to map [0, 255] to [0, 255], then you can cover
> the lowest quarter of the output space in full precision. Then you
> program the LUT again to map [0, 255] to [256, 511], and you can cover
> another sub-range in full precision, and so on.
> This is what I assumed you wanted to do, when you needed direct access
> to the LUT.
No, that is not what I (or anyone, as far as I know) wants to do; what I
do want to do is explained above.
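For reference, the sub-range trick Pekka describes above can be sketched as
follows (an illustration only, with hypothetical helper names):

```python
def subrange_lut(quarter):
    """8-bit-in / 10-bit-out LUT mapping the 256 input codes onto one
    quarter of the 10-bit output range at full precision."""
    base = quarter * 256
    return [base + i for i in range(256)]

# Four successive LUT programmings together address every 10-bit wire code
# exactly once, which a single identity-stretched LUT (256 points spread
# over 1024 codes) cannot do.
covered = sorted(v for q in range(4) for v in subrange_lut(q))
assert covered == list(range(1024))
```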
> > > We know the bitdepth of the framebuffer since the compositor picks
> > > that.
> > >
> > > I do not recall seeing anything about the wire, but I also never
> > > really looked. Display hardware (in the computer) probably has at
> > > least the same precision as the wire, if not more, at least in
> > > identity transformation.
> > >
> > > Note, that parsing EDID might tell you the bitness of the monitor
> > > somehow, but it will not tell you the bitness of the wire, because I
> > > think different transports (HDMI, DP, DVI-D, ...) each might support
> > > more than one bitness. Unless the monitor only supports one bitness.
> > >
> > > But at least with YUV on the HDMI wire, I recall there being
> > > different pixel formats. Less bits, more fps.
> > >
> > Mmh that complicates things, although this kind of stuff should
> > mostly be used for RGB signals. I think directly pushing YUV over the
> > cable is mostly for HW media players.
> Yeah, YUV was a stretch, but I used it to prove that different
> transport formats do exist. Someone might want to measure a TV that
> only takes YUV420 for 4k at 60Hz or whatever.
> But even with RGB, I would assume that some wires can carry both 8 and
> 10 bit channels, at least.
> However, since the bitdepth does not seem to be needed at all, there is
> no problem. I only assumed bitdepth would be crucial for the
> measurements itself, not just to define a test color.
I am not entirely sure the bitdepth can be discarded, but I will come back
to you on that one; if it can, that would indeed simplify things further.
No problem, hope the above clarifies things!
More information about the wayland-devel mailing list