[RFC 0/1] Color manager calibration protocol v1

Erwin Burema e.burema at gmail.com
Tue Apr 16 13:33:02 UTC 2019



On Tuesday, 16 April 2019, Pekka Paalanen wrote:
> On Tue, 16 Apr 2019 13:11:06 +0200
> Erwin Burema <e.burema at gmail.com> wrote:
> 
> > On Tue, 16 Apr 2019 at 12:45, Pekka Paalanen <ppaalanen at gmail.com> wrote:
> > >
> > > On Sun, 14 Apr 2019 12:57:47 +0200
> > > Erwin Burema <e.burema at gmail.com> wrote:
> > >  
> > > > Without a way to calibrate/profile screens, a color management
> > > > protocol loses a lot of its value. So to add this missing feature I
> > > > wrote the following protocol.
> > > >
> > > > The idea is that the calibration/profiling SW only sets the RGB
> > > > triplet, and the compositor is then responsible for drawing a
> > > > rectangular region on the selected output screen. Since not all
> > > > calibration tools will sit at the center of the screen, a user
> > > > should be able to modify the placement of this rectangular region.
> > > > Unless specified otherwise, the monitor profile (if any) should not
> > > > be applied, but the GPU curve should. Currently, to set a new curve,
> > > > the calibration tool should generate a new ICC profile with the
> > > > wanted curve in the VCGT tag (I am not sure if this is the best
> > > > option, but it would be the most universal one). In the end, after
> > > > profiling, the last uploaded ICC profile could then be saved
> > > > (although a compositor is not required to honor the request; in that
> > > > case it should send the "not saved" error). If the compositor
> > > > doesn't save, or the connection with this protocol is broken, the
> > > > compositor should restore the previous settings.
> > >
> > > Hi,
> > >
> > > I only took a very quick glance, but I do like where this design is
> > > going. I'll refrain from commenting on wl_surface vs. not for now
> > > though.
> > >
> > > Forgive my ignorance, but why does the "GPU curve" need to be a
> > > custom curve provided by the client?
> > >  
> > 
> > Because the GPU LUT/curve is the calibration; it is mostly used to
> > smooth out non-linearity in the display (some expensive displays have
> > the ability to upload this curve to the display itself, in which
> > case it is sometimes called a calibration curve).
> 
> Hi,
> 
> if you are only showing one solid color at a time, why does
> non-linearity need addressing with the "GPU curve" instead of just
> computing that into the color values the client sets?
>

You display only one color at a time, since that is what the colorimeter/spectrometer can measure; to calculate the curve you use a full gradient that has been measured over time, one patch after another.
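
Roughly, the flow looks like this (a minimal sketch of what I mean, not
actual tool code; show_patch() and read_instrument() are hypothetical
stand-ins for the protocol request and the instrument driver):

# Minimal sketch of the patch-by-patch measurement described above.
# show_patch() and read_instrument() are hypothetical stand-ins for
# the protocol request and the colorimeter/spectrometer driver.

def show_patch(r, g, b):
    """Stand-in for the protocol request setting the RGB triplet."""
    print(f"displaying patch ({r:.3f}, {g:.3f}, {b:.3f})")

def read_instrument():
    """Stand-in for one instrument reading; returns a dummy value."""
    return 0.0

def measure_gray_ramp(steps=32):
    """Show one solid gray patch at a time and record each reading;
    together the readings sample the display's tone curve."""
    readings = []
    for i in range(steps):
        v = i / (steps - 1)     # walk the gradient from 0.0 to 1.0
        show_patch(v, v, v)     # compositor draws the solid rectangle
        readings.append((v, read_instrument()))
    return readings

if __name__ == "__main__":
    measure_gray_ramp(steps=8)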

 
> The possibility to load the "GPU curve" into the monitor instead of
> doing it in the display hardware is probably the reason, since inside
> the monitor the LUT output is not limited by the wire bitness.
> 
> > > My naive thinking would assume that you would like to be able to
> > > address the pixel values on the display wire as directly as possible,
> > > which means a minimum of 12 or 14 bits per channel framebuffer format
> > > and an identity "GPU curve".
> > >  
> > 
> > Not sure where you get 12 or 14 bits since most displays are still 8
> > currently (well, actually a lot are 6 with dithering, but you can't
> > actually drive them with 6 bpc). And yes, at first an identity curve
> > will need to be applied; in practice the calibration process is
> > iterative (I believe), so over time you will need to upload/set
> > different curves (e.g. start with an identity curve, refine it, use
> > the refined curve, refine that, use the next curve, etc. until the
> > error is below a certain threshold).
> 
> 12 or 14 came from your protocol spec.
>

That is the display color depth; it has no direct relation to the curves.
 
> Ok, you start with an identity curve and iterate. Why only the "GPU
> curve" instead of a "full" color correction transformation?
>

Since we are trying to set up the "GPU curve" in this case, a full transform would only get in the way.
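
As a toy model (my own illustration, not compositor code): during
calibration the patch value should pass through the curve under
refinement and nothing else, so what the instrument sees is exactly
the display plus the curve.

def apply_pixel_path(value, gpu_curve, monitor_transform=None):
    """Toy model of the per-channel pixel path; both transforms are
    callables. The monitor/ICC transform is bypassed while calibrating
    so the instrument measures the display plus the curve only."""
    if monitor_transform is not None:  # skipped during calibration
        value = monitor_transform(value)
    return gpu_curve(value)            # the curve being refined

def identity(v):
    return v

assert apply_pixel_path(0.5, identity) == 0.5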
> > > Is the reason to use the "GPU curve" that you assume there is an
> > > 8-bit per channel framebuffer and you need to use the hardware LUT to
> > > choose which 8-bit-wide range of the possibly 14-bit channel you want
> > > to address? (Currently a client cannot know if the framebuffer is 8
> > > bits or less or more.)
> > >  
> > 
> > Currently I am not assuming anything about the curve's bitness, so I
> > am not sure where you got that from? The curve should be set with the
> > ICC vcgt tag, which I am pretty sure supports bit depths higher than
> > 8; if you have a better idea for how to set the video card LUT, I
> > would like to hear it.
> 
> If you do not assume anything about bitness, why does the protocol spec
> have bitdepth? Is it the per-channel bitdepth of the framebuffer, the
> wire, the monitor, the minimum for the whole pixel pipeline, or?
>

The maximum of the whole pixel pipeline.

 
> Any LUT configuration that is not identity will only reduce your
> precision to program different pixel values into the monitor, which is
> why I'm wondering why the LUT is not simply identity for measurements.
> This is especially true if you do not assume that the bitness on the
> wire is higher than the framebuffer's, or that the bitness in the
> monitor is higher than the wire if the LUT is loaded into the monitor.
> 
> It's a mathematical fact, caused by having to work with integers. If
> the LUT input and output have the same bitness, and the LUT is not
> identity, then necessarily you will have some input values that do not
> differ in their output values and some output values you cannot
> reach from any input values.
>

For verification and profiling, the LUT needs to be applied.
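
Your integer point is easy to check numerically, by the way. A quick
sketch, assuming an 8-bit-in, 8-bit-out table with a gamma 2.2 shape:

# Numeric check of the integer-LUT argument quoted above: a non-identity
# 8-bit-in, 8-bit-out LUT (here gamma 2.2) collapses some inputs and
# leaves some output codes unreachable.
lut = [round(255 * (i / 255) ** 2.2) for i in range(256)]
reachable = set(lut)
print(f"distinct outputs: {len(reachable)} / 256")
print(f"output codes never produced: {256 - len(reachable)}")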
 
> > > Your protocol proposal uses the pixel encoding red/green/blue as uint
> > > (32-bit) per channel. Would it be possible to have the compositor do
> > > the LUT manipulation if it needs to avoid the intermediate rounding
> > > caused by a 8 bit per channel framebuffer or color pipeline up to the
> > > final LUT?
> > >  
> > 
> > Not sure what you mean here; the values that are set should be
> > displayed as directly as possible on the screen, with the exception of
> > the application of the videocard LUT. Now, a compositor might choose
> > not to use the videocard LUT for reasons of its own (very simple HW
> > that doesn't include a LUT comes to mind), in which case it should
> > apply the calibration curve itself.
> 
> It is about having to use integers where the bitness limits us to
> certain precision, see above.
> 
> You have 32-bit values in the protocol. Those get mapped into a
> framebuffer which might be effectively 5, 6, 8, 10, 12, or 16 bits per
> channel. Then assuming all hardware color manipulation has been turned

I tried to specify that with an n-bit pipeline only the n least significant bits of the uint should be used. The intent is to only send 8-bit data over an 8-bit pipeline, and the same for 10, 12 or 14 bits.
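
As a sketch of that rule (decode_channel() is an illustrative name of
mine, not something from the proposal):

# Sketch of the encoding rule above: for an n-bit pipeline, only the
# n least significant bits of the 32-bit protocol value carry data.
def decode_channel(wire_value: int, bitdepth: int) -> int:
    """Mask the protocol uint down to the pipeline's bit depth."""
    return wire_value & ((1 << bitdepth) - 1)

assert decode_channel(0x00FF, 8) == 255     # full scale at 8 bpc
assert decode_channel(0x03FF, 10) == 1023   # full scale at 10 bpc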

> off except the final "GPU curve" LUT, the framebuffer value will be
> converted into a LUT input value, a LUT output value is looked up and
> maybe interpolated, then truncated back to LUT output bitness. Then it
> gets pushed into the wire which has its own bitness, before it reaches
> the monitor, which may convert that again to anything from 5 to 14 bits
> per channel for the panel and maybe dither.
> 
> The precision you will get is the lowest bitness found in that path.
> Except, depending on at which step the "GPU curve" is applied, you
> might be able to use that to address more bits than what the lowest
> bitness along the path allows, as long as the lowest bitness occurs
> before the LUT.
> 
> Do you mean that calibration measurement tools do not use nor expect
> such tricks?
>

Dithering, yes; most tools integrate over it. Some of the other tricks are compensated for in software.
 
> > > If such "GPU curve" manipulation is necessary, it essentially means
> > > nothing else can be shown on the output. Oh, could another reason to
> > > have the client control the "GPU curve" be that then the client can
> > > still show information on that output, since it can adjust the pixel
> > > contents to remain legible even while applying the manipulation. Is
> > > that used or desirable?
> > >  
> > 
> > Calibration/profiling is a rather time-consuming process where a piece
> > of equipment will be hanging in front of your screen, so not being
> > able to display much at that time won't be too much of a problem. So
> > no, it doesn't have much to do with being able to display stuff.
> 
> That's the opposite of what has been said here before. Exactly because
> it takes so long, the measurement app needs to be able to show at
> least a progress indicator. This was the objection to my past hand-wavy
> proposal of a measurement app hijacking the whole output.
>

We don't need the whole output to display the color patch, so we can definitely show a progress bar as well, but the display itself will be mostly unusable during the calibration.
 
> > > Btw. how would a compositor know the bit depth of a monitor and the
> > > transport (wire)? I presume there should be some KMS properties for
> > > that in addition to connector types.
> > >  
> > 
> > Huh, this surprises me a bit; I would have expected KMS to know
> > something about the screens attached and which bit depths are
> > supported (10-bit capable screens (non-HDR) have been around for quite
> > some time now).
> 
> We know the bitdepth of the framebuffer since the compositor picks that.
> 
> I do not recall seeing anything about the wire, but I also never really
> looked. Display hardware (in the computer) probably has at least the
> same precision as the wire, if not more, at least in identity
> transformation.
> 
> Note, that parsing EDID might tell you the bitness of the monitor
> somehow, but it will not tell you the bitness of the wire, because I
> think different transports (HDMI, DP, DVI-D, ...) each might support
> more than one bitness. Unless the monitor only supports one bitness.
> 
> But at least with YUV on the HDMI wire, I recall there being different
> pixel formats. Less bits, more fps.
>

Mmh, that complicates things, although this kind of calibration should mostly be used with RGB signals; I think pushing YUV directly over the cable is mostly for HW media players.
 
> 
> Thanks,
> pq
>

Sorry for the short reply; I had to type this on my phone.

-- 
Sent from my Jolla

