[cairo] Re: Linear colorspace not a good idea
Russell Shaw
rjshaw at netspace.net.au
Mon Mar 13 19:10:22 PST 2006
Bill Spitzak wrote:
> Although I wrote a lot of arguments in favor of using linear space (see
> http://mysite.verizon.net/spitzak/conversion/index.html),
Hi,
"Linear Floating Point means an image represented as floating point numbers where the luminance of a
pixel is the number multiplied by a constant. I set the constant so that 1.0 is the brightest color
you can see on a typical monitor and the value you want to convert to the largest value in a typical
clipped image format such as a jpeg file."
...Aren't all colour graphics currently stored as linear light-intensity luminance?
"sRGB is a standard to encode luminance into 8 bits (or into any integer space). This standard was
developed by Hewlett-Packard and Microsoft, and has been endorsed by the W3C, EXIF, Intel, Pantone,
Corel, and many other industry players. It is also well accepted by Open Source software such as
the Gimp and PNG file formats."
"What the standard does is define the luminance of a value stored in an image file. This is
a relative luminance, where 1.0 means "the brightest color the display can do". After scaling a
number from a file into the 0-1 range, sRGB defines the luminance by the function:"
Voltage_from_video_DAC_to_tube_grid = v < .04045 ? v / 12.92 : pow((v+.055)/1.055, 2.4)
I assume v is the value in the graphics file that represents the luminance (brightness) of a colour channel?
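For reference, here are both directions of that transfer function written out as C, assuming v and the result are normalised to the 0-1 range (the encode direction is the standard sRGB inverse):

   #include <math.h>

   /* sRGB-encoded value v (0..1, as stored in the file) -> linear
      luminance, per the formula quoted above */
   static double srgb_to_linear(double v)
   {
       return v <= 0.04045 ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
   }

   /* linear luminance -> sRGB-encoded value; the standard inverse */
   static double linear_to_srgb(double l)
   {
       return l <= 0.0031308 ? l * 12.92 : 1.055 * pow(l, 1.0 / 2.4) - 0.055;
   }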
           V = a.(Li)^y1            Lo = b.(V)^y2
               +---+                    +---+
   Li -------->|   |------> File ------>|   |------> Lo = b.(a.(Li)^y1)^y2
   light       +---+         ^          +---+        light intensity
   intensity   camera        |           CRT
                             |
                             +-- V = a.(Li)^y1  (voltage)
To get intensity out of the CRT that is linear with respect to the intensity
into the camera, the tube gamma must be the inverse of the camera gamma, so y2 = 1/y1.
Vidicon cameras have a gamma of 0.5-0.7, so their voltage out vs light intensity looks
like a square-root curve. The CRT has a roughly parabolic intensity out vs voltage in,
so the tube approximately undoes the camera's nonlinearity (e.g. y1 = 0.45 against
y2 = 2.22 gives an end-to-end exponent of 0.45 x 2.22 ~= 1.0).
The monitor/video-card combination can be adjusted so that y2 = 1/y1 holds precisely.
The problem is that the file data still represents intensity in a nonlinear way.
Doubling the data gives something like four times the CRT output intensity (2^y2 for
y2 ~= 2), so doing things like linear ramps or gradients on the data produces a CRT
display where the intensity gradient is compressed at the low-intensity end and
over-expanded at the high-intensity end.
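A quick numerical sketch of that compression, assuming an idealised CRT with a
pure power-law gamma of 2.2:

   #include <stdio.h>
   #include <math.h>

   int main(void)
   {
       const double y2 = 2.2;               /* assumed CRT gamma */
       for (int i = 0; i <= 4; i++) {
           double v = i / 4.0;              /* linear ramp in file data */
           printf("data %.2f -> light %.3f\n", v, pow(v, y2));
       }
       return 0;
   }

This prints roughly 0.000, 0.047, 0.218, 0.531, 1.000: doubling the data from
0.25 to 0.5 gives about 4.6x the light, not 2x.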
If, for example, the red channel of a photo is scaled up, the relative proportion of
red to the other colours in the photo will vary with the red intensity instead of
remaining constant, distorting all the perceived colours away from the expected
primary colour mixing.
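A sketch of that hue shift, assuming the data is sRGB-encoded (the drift comes
from the offset and linear toe in the sRGB curve; a pure power law would scale
uniformly): scale the red channel by a constant in encoded space and watch the
red:green ratio in emitted light change with level.

   #include <stdio.h>
   #include <math.h>

   static double srgb_to_linear(double v)
   {
       return v <= 0.04045 ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
   }

   int main(void)
   {
       const double scale = 1.5;            /* scale red data by 1.5 */
       const double levels[] = { 0.2, 0.4, 0.6 };
       for (int i = 0; i < 3; i++) {
           double ratio = srgb_to_linear(levels[i] * scale)
                        / srgb_to_linear(levels[i]);
           printf("grey level %.1f -> red/green light ratio %.2f\n",
                  levels[i], ratio);
       }
       return 0;
   }

The ratio comes out around 2.21, 2.40 and 2.47 instead of staying constant, so
the perceived hue shifts with brightness.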
The only way to fix all this is to store gamma-corrected data from the camera (so the
effective camera gamma is y1 = 1.0), so that the file data represents linear light
level. This means the CRT output light intensity vs the data into the video card must
be linear, i.e. the combined monitor/video-card gamma is 1.0. Therefore the video card
must apply a gamma that is the inverse of the CRT's, 1/y2.
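A sketch of that compensation, assuming an 8-bit video-card lookup table (the
function name and the example gamma of 2.5 are mine):

   #include <math.h>

   /* Fill an 8-bit video-card LUT with the inverse of the CRT gamma, so
      linear data in the file comes out of the tube as linear light. */
   void build_inverse_gamma_lut(unsigned char lut[256], double y2)
   {
       for (int i = 0; i < 256; i++) {
           double linear = i / 255.0;       /* linear file data */
           lut[i] = (unsigned char)(pow(linear, 1.0 / y2) * 255.0 + 0.5);
       }
   }

   /* e.g. build_inverse_gamma_lut(lut, 2.5) for a tube with gamma 2.5 */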
Your page is unclear to me without adequate mathematics, but I assume this is what
you were getting at.
http://www.srgb.com/
("Sorry, this site is no longer available.")
> there are
> serious problems with using this for GUI. I think Cairo has to do all
> compositing in the device space (ie sRGB for most of the devices we are
> interested in, but allow the backend to decide).
I'm not sure what "device space" means. Is it the nonlinear luminance data from an
uncompensated camera?
> First is that there is a huge supply of icons that have been designed to
> be composited in sRGB space and will have bad edges otherwise when put
> atop the background they are designed for.
I'd correct any icons to assume a linear camera and CRT (y1 = y2 = 1.0).
There are too many "programmers" clueless about gamma, and there's no sense
in propagating their mistakes.
> Premultiplied images are almost impossible to correct and then composite
> in linear space. Many programs will produce very noisy or incorrect
> color values when the alpha is small, dividing by alpha to convert to
> linear will amplify this noise to unacceptable levels.
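(A quick sketch of the amplification being described, with made-up 8-bit values:

   #include <stdio.h>

   int main(void)
   {
       unsigned char a    = 3;   /* tiny alpha */
       unsigned char r_pm = 2;   /* premultiplied red channel */
       /* un-premultiply: r = r_pm * 255 / a */
       printf("r = %d\n", r_pm * 255 / a);         /* 170 */
       printf("r = %d\n", (r_pm + 1) * 255 / a);   /* 255 */
       return 0;
   }

One code of noise in the premultiplied value swings the recovered colour by 85
codes, before it is even converted to linear.)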
>
> Large areas of partial transparency, such as tinting, or hand-painted
> Photoshop corrections, will composite to totally different colors than
> expected.
>
> Another serious problem is that sometimes people want the perceptual
> result of the composite rather than the true result. The most obvious
> example is text and thin lines, drawn in different colors on different
> backgrounds. People expect the same image drawn in different colors
> (such as white on black versus the inverse) will look the same thickness
> and weight. Unfortunately this is not true at all in linear space: the
> black lines look much thinner than white lines. But in sRGB it does
> appear to work, because it is much closer to perceptually linear. Every
> line graphics and word processor program in the world relies on this.
Black and white still looks black and white regardless of the intensity
data being linear or nonlinear. I don't see where the difference would
come from.
"sRGB ... perceptually linear" is too vague to comprehend.
> Also an obvious problem is that using anything other than device space
> will slow down Cairo a huge amount and probably make hardware
> compositing impossible.
Precisely define "device space". Is it linear intensity (effective camera
gamma = 1.0), or the nonlinear data from an uncompensated camera?
> Russell Shaw wrote:
>
>> For mathematical operations to work right, gamma correction should be
>> done on RGB data so that any nonlinear light-input vs voltage-output on
>> input devices such as cameras is linearized. After the data is transformed
>> in various ways, it should be gamma "nonlinearized" to compensate for the
>> nonlinear intensity vs voltage of output devices such as CRTs.
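A minimal sketch of that round trip, assuming a pure power-law device gamma y
for brevity:

   #include <math.h>

   static double decode(double v, double y) { return pow(v, y); }       /* linearize */
   static double encode(double l, double y) { return pow(l, 1.0 / y); } /* re-apply */

   /* e.g. an over-composite of one channel, done in linear light */
   double composite(double src, double dst, double alpha, double y)
   {
       double out = alpha * decode(src, y) + (1.0 - alpha) * decode(dst, y);
       return encode(out, y);
   }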