[Pixman] [cairo] Supersampling - 2nd attempt

Bill Spitzak spitzak at gmail.com
Mon Aug 16 16:50:15 PDT 2010



Krzysztof Kosiński wrote:
> 2010/8/16 Bill Spitzak <spitzak at gmail.com>:
>> The problem I am having is that this does not match how filtering of
>> transforms is done in any image processing software I am familiar with.
>>
>> This is how all software I am familiar with works; it replaces the three
>> steps you show above:
>>
>>        - Figure out the INVERSE transform
>>
>>        - For each output pixel, the x, y, and inverse transform determine
>> a weighting factor for every pixel in the input image. These are weights for
>> input PIXELS, not weights for input "points".
>>
>>        - Multiply input pixels by these weights and sum to get output pixel.
> 
> Pixman actually stores the inverse transform. For example, an image
> scaled to 1/4 its width and height has the following Pixman matrix:
> 4 0 0
> 0 4 0
> 0 0 1
> 
> There are two possible approaches:
> a) determine which input pixels are in the sampling region; sample the
> filter kernel for each input pixel to compute weights
> b) precompute weights by sampling the filter kernel; compute
> interpolated color values corresponding to points at which the filter
> kernel was sampled; weigh the interpolated values using precomputed
> weights
> 
> You seem to be advocating a) - is that correct?

Yes.
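
To make sure we mean the same thing, here is a rough sketch of (a) for a
pure scale on a single-channel float image. All names are invented for
illustration; this is not pixman API and not our actual code, and it
assumes minification (sx, sy >= 1), with sx, sy being the inverse
(output -> input) scale factors like the matrix you show below:

#include <math.h>

typedef struct {
    int width, height, stride;          /* stride in floats */
    const float *data;
} gray_image_t;

typedef float (*filter_fn) (float x);   /* e.g. box, tent, Lanczos */

static float
sample_output_pixel (const gray_image_t *src, float sx, float sy,
                     int out_x, int out_y, filter_fn kernel, float radius)
{
    /* center of the output pixel mapped back into the input image */
    float cx = (out_x + 0.5f) * sx;
    float cy = (out_y + 0.5f) * sy;

    /* kernel support in input pixels, widened by the scale factor */
    float rx = radius * sx;
    float ry = radius * sy;

    float sum = 0.0f, total = 0.0f;

    for (int iy = (int) floorf (cy - ry); iy <= (int) ceilf (cy + ry); iy++)
    {
        if (iy < 0 || iy >= src->height)
            continue;
        for (int ix = (int) floorf (cx - rx); ix <= (int) ceilf (cx + rx); ix++)
        {
            if (ix < 0 || ix >= src->width)
                continue;
            /* the weight comes from evaluating the kernel at the input
             * PIXEL's position; no interpolated image value is needed */
            float w = kernel ((ix + 0.5f - cx) / sx) *
                      kernel ((iy + 0.5f - cy) / sy);
            sum   += w * src->data[iy * src->stride + ix];
            total += w;
        }
    }
    return total != 0.0f ? sum / total : 0.0f;
}

The point is that the filter is evaluated at each input pixel; the image
itself is never interpolated at intermediate points.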

> For simple kernels like box or tent, the performance will be very
> similar for a) and b), but for high quality filters like Gaussian or
> Lanczos, it's less expensive to compute interpolated values of the
> pixels and weigh them with precomputed samples of the filter kernel
> than it is to sample the kernel for each subpixel.

Okay, I think I see. Our software instead precomputes interpolated values
of the *filter* rather than of the image, saving effort in the opposite
way from what you are proposing. To find the filter weight for a pixel,
the position within the filter is rounded to the nearest precomputed
sample position and that table entry is used.

The same table is used for all filter sizes; it has 64 bins per pixel at
the size used for the identity transform. The resulting weights do not
sum to 1.0, so a normalization step is done; I think the normalization
factor is also precomputed for each size, and small errors are ignored.
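
Very roughly, the table works like this (all names and sizes are invented
except the 64 bins per pixel; this is only a sketch, not our actual code):

#include <math.h>

#define BINS_PER_PIXEL 64
#define FILTER_RADIUS  2            /* e.g. a kernel 4 pixels wide */
#define TABLE_SIZE     (2 * FILTER_RADIUS * BINS_PER_PIXEL)

static float filter_table[TABLE_SIZE];

static void
init_filter_table (float (*kernel) (float x))
{
    for (int i = 0; i < TABLE_SIZE; i++)
    {
        /* position of bin i, in pixels, relative to the filter center */
        float x = (i + 0.5f) / BINS_PER_PIXEL - FILTER_RADIUS;
        filter_table[i] = kernel (x);
    }
}

/* Weight for an input pixel whose center is "dist" pixels (in filter
 * space, i.e. already divided by the scale factor) from the mapped
 * center of the output pixel: round to the nearest precomputed bin. */
static float
filter_weight (float dist)
{
    int i = (int) lrintf ((dist + FILTER_RADIUS) * BINS_PER_PIXEL - 0.5f);

    if (i < 0 || i >= TABLE_SIZE)
        return 0.0f;
    return filter_table[i];
}

The weights looked up for one output pixel are then summed and the result
divided by that sum (or by a precomputed per-size factor), which is the
normalization step mentioned above.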

> I'm not sure what's
> the difference in quality between those approaches. I could try
> prototyping both of them.

I suspect there is going to be a phasing problem with your approach if 
the scale is slightly different from 1/integer. At some places the 
kernel samples will line up with the pixel centers, and at others they 
will fall between them, blurring the image or lowering the contrast. 
This will probably look like moire patterns.
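
A toy example of what I mean, with a made-up scale of 1/3.9 (the numbers
are only illustrative):

#include <math.h>
#include <stdio.h>

int
main (void)
{
    double scale = 3.9;     /* inverse transform: output -> input */

    for (int out_x = 0; out_x < 8; out_x++)
    {
        /* one of the fixed kernel sample points for this output pixel */
        double sample = (out_x + 0.5) * scale;
        /* distance from that sample to the nearest input pixel center */
        double phase = fabs (sample - (floor (sample) + 0.5));

        printf ("output %d: sample at %.2f, %.2f from pixel center\n",
                out_x, sample, phase);
    }
    return 0;
}

The distance drifts from nearly 0.5 down to nearly 0 and back, so the
amount of interpolation (and hence the blur) varies periodically across
the output.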

>> For a more concrete example, our image transforms consist of translating the
>> output x/y into an input axis-aligned rectangle that is as close as possible
>> to the area of the output pixel inverse-transformed to the input.

> How does it work for skews? I'd like to avoid iterating over pixels
> inside an axis-aligned rectangle, because it could give us terrible
> worst-case performance, for example with large skews.

The chosen rectangle is equal in *area* to the sample region, not to its 
bounding box. However, the resulting filtering quality is pretty poor for 
large skews. I am wondering if an extra integer skew factor could be 
added; this would not slow anything down, since it is simply added to the 
stride value, and the two-pass 1-D filters could continue to be used.
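
To illustrate the stride idea (invented names, only a guess at how it
could be wired in, not existing code):

/* Indexing row y at y * (stride + skew) + x is the same as fetching
 * pixel (x + y * skew, y), i.e. a horizontal shear of "skew" pixels
 * per scanline, at no per-pixel cost beyond the changed stride. */

typedef struct {                    /* same layout as the earlier sketch */
    int width, height, stride;      /* stride in pixels */
    const float *data;
} gray_image_t;

static float
fetch_skewed (const gray_image_t *src, int x, int y, int skew)
{
    int sx = x + y * skew;          /* equivalent to using stride + skew */

    if (sx < 0 || sx >= src->width || y < 0 || y >= src->height)
        return 0.0f;
    return src->data[y * src->stride + sx];
}

The remaining fractional part of the skew would still have to be handled
by the filter, but the bulk of the shear would cost nothing, so the
two-pass 1-D filters would only ever see a small residual skew.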


