[RFC 0/1] Color manager calibration protocol v1
Pekka Paalanen
ppaalanen at gmail.com
Tue Apr 16 12:30:11 UTC 2019
On Tue, 16 Apr 2019 13:11:06 +0200
Erwin Burema <e.burema at gmail.com> wrote:
> On Tue, 16 Apr 2019 at 12:45, Pekka Paalanen <ppaalanen at gmail.com> wrote:
> >
> > On Sun, 14 Apr 2019 12:57:47 +0200
> > Erwin Burema <e.burema at gmail.com> wrote:
> >
> > > Without a way to calibrate/profile screens, a color management
> > > protocol loses a lot of its value. So to add this missing feature,
> > > I wrote the following protocol.
> > >
> > > The idea is that the calibration/profiling SW only sets the RGB
> > > triplet and the compositor is then responsible for drawing a
> > > rectangular region on the selected output. Since not all
> > > calibration tools sit at the center of the screen, the user should
> > > be able to adjust the placement of this rectangular region. Unless
> > > specified, the monitor profile (if any) should not be applied, but
> > > the GPU curve should. Currently, to set a new curve the calibration
> > > tool should generate a new ICC profile with the wanted curve in the
> > > VCGT tag (I am not sure if this is the best option, but it would be
> > > the most universal one). In the end, after profiling, the last
> > > uploaded ICC profile could then be saved (although a compositor is
> > > not required to honor the request; in that case it should send the
> > > not saved error). If the compositor doesn't save, or the connection
> > > with this protocol is broken, the compositor should restore the
> > > previous settings.
> >
> > Hi,
> >
> > I only took a very quick glance, but I do like where this design is
> > going. I'll refrain from commenting on wl_surface vs. not for now
> > though.
> >
> > Forgive my ignorance, but why does the "GPU curve" need to be a
> > custom curve provided by the client?
> >
>
> Because the GPU LUT/curve is the calibration; it is mostly used to
> smooth out non-linearity in the display (some expensive displays can
> have this curve uploaded into the display itself instead, in which
> case it is sometimes called a calibration curve).
Hi,
if you are only showing one solid color at a time, why does the
non-linearity need to be addressed with the "GPU curve" instead of just
computing that correction into the color values the client sets?
The ability to load the "GPU curve" into the monitor instead of
applying it in the graphics card's display hardware is probably the
reason, since inside the monitor the LUT output is not limited by the
wire bitness.
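
To make concrete what I mean by computing it into the values (just a
hypothetical sketch, nothing from your proposal; correct_channel() here
is a made-up stand-in for whatever correction the tool has measured so
far):

#include <math.h>
#include <stdint.h>

/* Hypothetical per-channel correction measured so far, in [0.0, 1.0]. */
static double correct_channel(double target)
{
	/* e.g. compensate a mildly non-linear panel response */
	return pow(target, 1.0 / 1.02);
}

/* Value the client would submit in the protocol (32-bit per channel),
 * with the correction folded in instead of programming a "GPU curve".
 */
static uint32_t encode_channel(double target)
{
	double c = correct_channel(target);

	if (c < 0.0)
		c = 0.0;
	if (c > 1.0)
		c = 1.0;

	return (uint32_t)llround(c * (double)UINT32_MAX);
}

That is, the client would keep the hardware pipeline as pass-through as
possible and do the math itself.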
> > My naive thinking would assume that you would like to be able to
> > address the pixel values on the display wire as directly as possible,
> > which means a minimum of 12 or 14 bits per channel framebuffer format
> > and an identity "GPU curve".
> >
>
> Not sure where you get 12 or 14 bits, since most displays are still 8
> currently (well, actually a lot are 6 with dithering, but you can't
> actually drive them with 6 bpc). And yes, firstly an identity curve
> will need to be applied; in practice the calibration process is
> iterative (I believe), so over time you will need to upload/set
> different curves (e.g. start with an identity curve, refine it, use
> the refined curve, refine that, use the next curve, etc. until it is
> below a certain error).
12 or 14 came from your protocol spec.
Ok, you start with an identity curve and iterate. Why only the "GPU
curve" instead of a "full" color correction transformation?
> > Is the reason to use the "GPU curve" that you assume there is an
> > 8-bit per channel framebuffer and you need to use the hardware LUT
> > to choose which 8-bit wide range of the possibly 14-bit channel you
> > want to address? (Currently a client cannot know whether the
> > framebuffer is 8 bits, or less, or more.)
> >
>
> Currently I am not assuming anything about the curve's bitness, so
> not sure where you got that from? The curve should be set with the
> ICC vcgt tag, which I am pretty sure supports higher bit depths than
> 8; if you have a better idea for how to set the video card LUT, I
> would like to hear it.
If you do not assume anything about bitness, why does the protocol
spec have a bit depth? Is it the per-channel bit depth of the
framebuffer, the wire, the monitor, the minimum of the whole pixel
pipeline, or something else?
Any LUT configuration that is not identity will only reduce the
precision with which you can program distinct pixel values into the
monitor, which is why I'm wondering why the LUT is not simply identity
for measurements. This is especially true if you do not assume that the
bitness on the wire is higher than the framebuffer's, or that the
bitness in the monitor is higher than the wire's when the LUT is loaded
into the monitor.
It's a mathematical fact, caused by having to work with integers. If
the LUT input and output have the same bitness, and the LUT is not
identity, then necessarily you will have some input values that do not
differ in their output values and some output values you cannot
reach from any input values.
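
To put a number on it, here is a tiny stand-alone check (the curve in
it is arbitrary, not taken from any real profile):

#include <math.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint8_t lut[256];
	bool reachable[256] = { false };
	int collisions = 0, unreachable = 0;
	int i;

	/* Any non-identity 8-bit -> 8-bit curve; here a mild gamma tweak. */
	for (i = 0; i < 256; i++)
		lut[i] = (uint8_t)lround(255.0 * pow(i / 255.0, 1.1));

	for (i = 0; i < 256; i++) {
		if (reachable[lut[i]])
			collisions++;	/* two inputs, same output */
		reachable[lut[i]] = true;
	}

	for (i = 0; i < 256; i++)
		if (!reachable[i])
			unreachable++;	/* output code nothing maps to */

	printf("collisions: %d, unreachable outputs: %d\n",
	       collisions, unreachable);
	return 0;
}

Both counts come out non-zero for any curve that is not the identity.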
> > Your protocol proposal uses the pixel encoding red/green/blue as
> > uint (32-bit) per channel. Would it be possible to have the
> > compositor do the LUT manipulation if it needs to, to avoid the
> > intermediate rounding caused by an 8-bit per channel framebuffer or
> > color pipeline up to the final LUT?
> >
>
> Not sure what you mean here. The values that are set should be sent
> to the screen as directly as possible, with the exception of the
> application of the video card LUT. Now, a compositor might choose not
> to use the video card LUT for various reasons (very simple HW that
> doesn't include a LUT comes to mind), in which case it should apply
> the calibration curve itself.
It is about having to use integers, where the bitness limits us to a
certain precision; see above.
You have 32-bit values in the protocol. Those get mapped into a
framebuffer which might be effectively 5, 6, 8, 10, 12, or 16 bits per
channel. Then, assuming all hardware color manipulation has been turned
off except for the final "GPU curve" LUT, the framebuffer value is
converted into a LUT input value, a LUT output value is looked up and
maybe interpolated, and then truncated to the LUT output bitness. Then
it gets pushed onto the wire, which has its own bitness, before it
reaches the monitor, which may convert it again to anything from 5 to
14 bits per channel for the panel, and maybe dither.
The precision you will get is the lowest bitness found in that path.
Except, depending on at which step the "GPU curve" is applied, you
might be able to use that to address more bits than what the lowest
bitness along the path allows, as long as the lowest bitness occurs
before the LUT.
Do you mean that calibration measurement tools neither use nor expect
such tricks?
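
Here is the kind of trick I mean, with made-up bit depths (nothing
below reflects real hardware, it only shows where the bottleneck sits):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const unsigned fb_bits = 8;	/* assumed framebuffer depth */
	const unsigned lut_bits = 14;	/* assumed LUT output depth */
	uint32_t client = 0x80000000;	/* 32-bit protocol value */

	/* Quantized to the framebuffer: only 2^8 distinct codes exist,
	 * no matter how wide the protocol value or the wire is. */
	unsigned fb_code = client >> (32 - fb_bits);

	/* A LUT with wider output can steer each of those 256 codes to
	 * any of the 2^14 output codes. By reprogramming the LUT
	 * between measurements you can reach values the framebuffer
	 * alone could never express, but still only 256 of them at a
	 * time, and only if the narrowest step (here the framebuffer)
	 * comes before the LUT. */
	unsigned lut_code = fb_code << (lut_bits - fb_bits);

	printf("fb code %u/%u -> lut output %u/%u\n",
	       fb_code, 1u << fb_bits, lut_code, 1u << lut_bits);
	return 0;
}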
> > If such "GPU curve" manipulation is necessary, it essentially means
> > nothing else can be shown on the output. Oh, could another reason to
> > have the client control the "GPU curve" be that the client can then
> > still show information on that output, since it can adjust the pixel
> > contents to remain legible even while applying the manipulation? Is
> > that used or desirable?
> >
>
> Calibration/profiling is a rather time-consuming process where a
> piece of equipment will be hanging in front of your screen, so not
> being able to display much at that time won't be too much of a
> problem; so no, it doesn't have much to do with being able to display
> stuff.
That's the opposite of what has been said here before. Exactly because
it takes so long, the measurement app needs to be able to show at least
a progress indicator. This was the objection to my past hand-wavy
proposal of having a measurement app hijack the whole output.
> > Btw. how would a compositor know the bit depth of a monitor and the
> > transport (wire)? I presume there should be some KMS properties for
> > that in addition to connector types.
> >
>
> Huh, this surprises me a bit; I would have expected KMS to know
> something about the attached screens and which bit depths are
> supported (10-bit capable screens (non-HDR) have been around for
> quite some time now).
We know the bit depth of the framebuffer, since the compositor picks
that. I do not recall seeing anything about the wire, but I also never
really looked. Display hardware (in the computer) probably has at least
the same precision as the wire, if not more, at least for an identity
transformation.
Note that parsing the EDID might tell you the bitness of the monitor
somehow, but it will not tell you the bitness of the wire, because I
think the different transports (HDMI, DP, DVI-D, ...) each might
support more than one bitness. Unless the monitor only supports one
bitness. But at least with YUV on the HDMI wire, I recall there being
different pixel formats. Fewer bits, more fps.
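
For reference, reading the monitor's bit depth from the EDID would be
roughly this (a sketch assuming an EDID 1.4 base block with a digital
input descriptor; the wire depth is not in there):

#include <stdint.h>

/* edid points at the 128-byte base block */
static int edid_panel_bpc(const uint8_t *edid)
{
	uint8_t input = edid[20];	/* video input parameters */

	if (!(input & 0x80))
		return -1;		/* analog input, no depth field */

	switch ((input >> 4) & 0x7) {
	case 1: return 6;
	case 2: return 8;
	case 3: return 10;
	case 4: return 12;
	case 5: return 14;
	case 6: return 16;
	default: return 0;		/* undefined or reserved */
	}
}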
Thanks,
pq