[cairo] RFC: More accurate color conversion
Søren Sandmann
sandmann at cs.au.dk
Tue Oct 8 10:44:43 PDT 2013
Carl Worth <cworth at cworth.org> writes:
> There's one part of your position that I'm unclear on. You say:
>
>> But the output integers have inherent values, where 0x0000 corresponds
>> to 0.0, 0xffff corresponds to 1.0 and 0x0001 corresponds to
>> 1/65535.0. That is not an arbitrary convention -- when these integers
>> are stored in framebuffers or .PNG files, the assumption is (at
>> least in principle) that they correspond to these values. The
>> reason for choosing the sample points in this way, is that it is
>> important to be able to represent the particular values 0.0 and 1.0
>> exactly.
>
> I agree that 0x0000 must correspond to 0.0 and that 0xffff must
> correspond to 1.0.
>
> But then you simply assert that the conversion from integer to
> floating-point must be f(i) = i/65535.0. What's the justification for
> this?
If we add the additional assumption that the intervals between the values
corresponding to successive integers must have constant length, then it
follows from f(0x0000) = 0.0 and f(0xffff) = 1.0 that f(i) = i /
65535.0. The easiest way to see that is to consider a conversion to
2-bit integers instead:
0.0 1.0
|-----------------------------------------------|
0b00 0b01 0b10 0b11
This shows the conversion f(i) = i/3.0, and it's pretty clear that if you
move any of the four integers to some other position within the interval,
then you will have to violate at least one of the three assumptions. I can
probably come up with a formal proof of that if necessary.
The assumption that the conversion from integer to floating point is
simply a division by N is also stated without justification on Owen's
page:
Going from int values a=[0,N] to real values x=[0,1] has a fairly
obvious algorithm:
x = a/N
It's worth stating explicitly that cairo's current algorithm does make
sense if you treat the integers as simply symbols that you can give any
interpretation you like. Handwaving a bit, in that case you want each
symbol to represent as much of the input range as possible:
0.0                                              1.0
 |------------------------------------------------|
    0b00        0b01        0b10        0b11
and so you divide the input range into equal sized segments and position
each symbol in the middle. But the downside of that is that the symbols
0b00 and 0b11 no longer correspond to 0.0 and 1.0. It is only
because we (and everyone else) want to be able to represent those values
exactly that my proposed algorithm is better.
> I'm not necessarily arguing against the change here. I'm just pointing
> out that the explanation for the change seems to depend on an implicit
> assumption without much justification.
>
> Meanwhile, I am curious what brought about your interest in changing
> this function. What actual problem are you trying to solve here?
The main motivation is that I want to extend pixman's test suite with
more tolerance-based tests, where a reference pixel is computed in
floating point and then compared to pixman's output. It's highly
desirable here that the _reference_ computation doesn't introduce its
own errors.
As an example, a tolerance-based test for an operation such as a2r2g2b2
IN a2r2g2b2 works like this:
1. Convert the 2-bit source pixel to floating point
2. Convert the 2-bit destination pixel to floating point
3. Compute reference result in floating point math
4. Add +/- DEVIATION to get upper and lower bounds
5. Convert upper and lower bounds to 2-bit integers
6. Verify that pixman's 2-bit output is within [lower, upper]
The nice thing about this scheme is that DEVIATION is a tolerance on the
*internal* computation, and so is independent of the size of the final
pixels. It is step 5 that accounts for the error introduced when
converting to low-precision integers, and the problem is that the
current floating-point-to-integer algorithm in effect mandates a bit of
additional error in the pixman implementations.
Søren