Deep Color support

Wolfgang Draxinger wdraxinger.maillist at draxit.de
Sun Apr 27 05:45:44 PDT 2014


On Sun, 27 Apr 2014 13:57:45 +0200
John Kåre Alsaker
<john.kare.alsaker at gmail.com> wrote:

> Since VGA is a thing of the past, I'm more interested in what the
> displays do.

Display devices have always been nonlinear in their signal response.
That's why gamma LUTs were introduced in the first place: to deal with
the saturation effects of vacuum tubes. Later, the signal chain in LCDs
emulated that behavior.

The whole gamma mess is a history of bad decisions: originally the idea
of gamma LUTs was to pre-emphasize the scanout signal to compensate for
the display's gamma curve. So for your typical CRT you'd use a 1/2.2
gamma ramp, so that a linear scanout gradient would produce a linear
light intensity ramp. But then people started assuming a linear scanout
ramp in everything they did and that displays followed a 2.2 gamma curve
(sRGB made this a linear segment for small values, followed by a
2.4-power curve). This was only made worse by many of the (then
available) hardware LUT implementations being somewhat buggy and by some
operating systems having broken gamma LUT APIs.
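
To make the pre-emphasis idea concrete, here is a minimal sketch of
filling such a LUT with a 1/2.2 curve (the 256-entry, 16-bit layout is
just an assumption; real gamma ramp APIs differ in table size and type):

#include <math.h>
#include <stdint.h>

/* Fill a hypothetical 256-entry, 16-bit gamma ramp with a 1/2.2
 * pre-emphasis curve, so that a linear scanout gradient comes out of a
 * 2.2-gamma CRT as a roughly linear light intensity ramp. */
static void fill_preemphasis_ramp(uint16_t ramp[256])
{
    for (int i = 0; i < 256; i++) {
        double linear = i / 255.0;
        ramp[i] = (uint16_t)(pow(linear, 1.0 / 2.2) * 65535.0 + 0.5);
    }
}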

If I had a say in this, I'd throw the whole thing into the trash and
have everything from the scanout buffer to the light intensity coming
out of the display be linear; right now you have that sRGB ramp in
most displays, but not all of them.

Oh, and sRGB of course is the worst of all color spaces; it was chosen
as a compromise which all consumer devices available at the time it was
specified could reproduce reasonably well. It's small and it's arcane in
the way it works (linear for linear-light values below 0.0031308, a
scaled 2.4-power curve above); if you want to operate in nonlinear RGB
then do yourself a favor and use AdobeRGB; it also has much more
sensible chromaticity primaries.
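
For reference, the piecewise sRGB transfer functions look roughly like
this (a small sketch in C; the constants are the ones from the sRGB
definition):

#include <math.h>

/* sRGB encode: linear light -> nonlinear sRGB value, both in [0, 1]. */
static double srgb_encode(double l)
{
    return (l <= 0.0031308) ? 12.92 * l
                            : 1.055 * pow(l, 1.0 / 2.4) - 0.055;
}

/* sRGB decode: nonlinear sRGB value -> linear light, both in [0, 1]. */
static double srgb_decode(double s)
{
    return (s <= 0.04045) ? s / 12.92
                          : pow((s + 0.055) / 1.055, 2.4);
}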

The problem is not that gamma ramps are a bad thing. They just
shouldn't be imposed by the display signal chain, but should be part of
the color profiles involved.

For example, in digital cinema images are encoded in the X'Y'Z' color
space (that's the nomenclature in the DCI specs), which is the CIE XYZ
1931 color space with a 1/2.6-power encoding applied (decoding maps
x -> x^2.6). But this is an explicit part of the color space profile and
not some signal chain artifact; the actual projection system operates on
linear CIE XYZ values, where the tristimuli correspond linearly to the
illumination densities on the projection screen (the white point
reference level being 48 cd/m²).
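
To illustrate, the X'Y'Z' mapping is just a per-component power function
on normalized values (a sketch; the actual DCDM additionally normalizes
against a reference luminance and quantizes to 12-bit code values):

#include <math.h>

/* DCI-style encoding: apply a 1/2.6 power to each normalized XYZ
 * component; decoding applies the inverse x^2.6. */
static double dci_encode(double x)  { return pow(x, 1.0 / 2.6); }
static double dci_decode(double xp) { return pow(xp, 2.6); }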

> I wonder if there's a tool to measure this with my Spyder3, even
> though it's a moot point after calibration.

Or you could simply read the specifications for DVI, HDMI and
DisplayPort: on digital links, too, the pixel values go over the wire
linearly.

I'd say throwing the tools I have at hand at this would be kind of
overkill. I work in a laser research lab and we've got just about every
measurement tool related to optical metrology; I once salvaged a
"broken" USB visible-light spectrometer from the trash, repaired it, and
now that's my calibration tool :)

> We still are limited by both graphics memory and bandwidth.
> A fullscreen 64bpp 4K client will use about 128 MB for the
> framebuffer.

Indeed. OTOH clients that are not actively altering their contents and
are not visible can have their image data swapped out of graphics
memory.

Also, if a client is invisible altogether, it's perfectly reasonable to
discard its backing store entirely (FBOs stay intact, of course); this
is the specified behavior and also one of the reasons why I started
experimenting with GBM to bypass display servers' backing store
management. Of course this adds the cost of a full redraw when the
client is re-exposed.
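
For what it's worth, allocating a buffer directly via GBM looks roughly
like this (a minimal sketch; device path, format and size are
assumptions, error handling is abbreviated):

#include <fcntl.h>
#include <gbm.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Open a DRM device and allocate a scanout-capable buffer object
     * without going through a display server's backing store. */
    int fd = open("/dev/dri/card0", O_RDWR);
    struct gbm_device *gbm = gbm_create_device(fd);
    struct gbm_bo *bo = gbm_bo_create(gbm, 1920, 1080,
                                      GBM_FORMAT_ARGB8888,
                                      GBM_BO_USE_SCANOUT |
                                      GBM_BO_USE_RENDERING);
    if (!bo)
        fprintf(stderr, "buffer allocation failed\n");
    else
        gbm_bo_destroy(bo);
    gbm_device_destroy(gbm);
    close(fd);
    return 0;
}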

> On a high-DPI variant of my monitor that would be 225
> MB. It would not take many clients to fill my 1GB of VRAM.

Hmm, you've got a point there. However, I don't see how high DPI by
itself makes a difference: you have exactly as many (sub)pixels to
represent in memory as the display ultimately offers.
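
As a rough sanity check of the quoted numbers (a sketch; the factor of
two assumes double buffering, which is my guess at how the ~128 MB
figure was arrived at):

#include <stdio.h>
#include <stddef.h>

int main(void)
{
    const size_t width = 3840, height = 2160; /* "4K" UHD mode */
    const size_t bytes_per_pixel = 8;         /* 64 bpp, e.g. FP16 RGBA */
    const size_t buffers = 2;                 /* assumed double buffering */

    size_t bytes = width * height * bytes_per_pixel * buffers;
    printf("%zu bytes (~%.0f MiB)\n", bytes, bytes / (1024.0 * 1024.0));
    return 0;
}

That comes out to about 127 MiB for 3840x2160, in line with the quoted
figure; a higher-density variant of the same panel scales the number
accordingly.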

On the client side it doesn't make sense to render at a higher density
than what the highest-resolving display can show; that's basic signal
theory. But if you have a number of different displays connected to a
seat, rendering to the greatest common divisor is sensible.

Multisampling, yes: if you want subpixel rendering accuracy, for example
for font glyph rendering, multisampling comes in handy. But then again,
on high-DPI displays you can turn multisampling down, because the higher
pixel density does a far better job of reducing aliasing.

> Do note that with Wayland, what clients really do is render to a
> texture, which then gets sampled by the compositor.

Yes, that much I understand.

> Another point is that clients usually will supply 32 bpp data, so the
> higher bit depth will probably be wasted.

Well, assuming clients do supply their data in a device-independent
connection color space, 32 bpp doesn't cut it if you want to include a
useful alpha channel. Without a high-resolution alpha channel you can
get away with an A2RGB10 format, but only marginally (10 bits per
channel are already prone to banding in a perceptual color space; in a
connection color space you're additionally spending part of the value
range on out-of-gamut imaginary colors).
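
For illustration, packing into such a 2:10:10:10 layout looks roughly
like this (a sketch; the exact channel order differs between APIs, e.g.
DRM's ARGB2101010 vs. OpenGL's 2_10_10_10_REV types):

#include <stdint.h>

/* Pack normalized channel values into 2 bits of alpha and 10 bits each
 * of R, G and B; the channel order here is an assumption. */
static uint32_t pack_a2rgb10(float a, float r, float g, float b)
{
    uint32_t ai = (uint32_t)(a * 3.0f    + 0.5f) & 0x3;
    uint32_t ri = (uint32_t)(r * 1023.0f + 0.5f) & 0x3FF;
    uint32_t gi = (uint32_t)(g * 1023.0f + 0.5f) & 0x3FF;
    uint32_t bi = (uint32_t)(b * 1023.0f + 0.5f) & 0x3FF;
    return (ai << 30) | (ri << 20) | (gi << 10) | bi;
}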


Regards,

Wolfgang



