[cairo] downscaling capabilities

Frédéric Plourde frederic.plourde at polymtl.ca
Mon Apr 21 09:58:06 PDT 2008


Guys,

  we're discussing all these technical considerations about pure 
integer scaling and so on, but let's not forget that correct mipmap 
generation puts a heavy load on the CPU, because it has to produce 
"prefiltered" versions of the source image.

OpenGL does it this way:
   Starting from the first image specified, and possibly after 
"power-of-two" padding, repeatedly scale down by a factor of 2 using a 
"low-pass filter" strategy, as I mentioned earlier in this thread. More 
precisely, OpenGL simply takes the mean value of every 2x2 texel block 
to produce each texel of the mipmap level underneath... and so forth 
until we reach size 1x1 (or Nx1 if a rectangular mipmap was specified, 
in which case OpenGL averages only 2 texels at a time).
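Concretely, for a single channel that averaging step looks roughly like 
the sketch below (a minimal illustration for an 8-bit grayscale buffer 
with even dimensions; illustration only, not cairo or OpenGL code):

#include <stdint.h>

/* Produce the next mipmap level by averaging each 2x2 texel block,
 * the way the OpenGL-style box filter does.  'src' is src_w x src_h
 * (both assumed even); 'dst' must hold (src_w/2) * (src_h/2) bytes. */
static void
halve_gray8 (const uint8_t *src, int src_w, int src_h, uint8_t *dst)
{
    int dst_w = src_w / 2;
    int dst_h = src_h / 2;
    int x, y;

    for (y = 0; y < dst_h; y++) {
        const uint8_t *row0 = src + (2 * y) * src_w;
        const uint8_t *row1 = row0 + src_w;

        for (x = 0; x < dst_w; x++) {
            /* rounded mean of the 2x2 block */
            int sum = row0[2 * x] + row0[2 * x + 1] +
                      row1[2 * x] + row1[2 * x + 1];
            dst[y * dst_w + x] = (uint8_t) ((sum + 2) / 4);
        }
    }
}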

By the way, using a naive "row-column deletion" algorithm (a true 
integer-scale technique that, for a 2X minification for instance, simply 
skips every other column) to build all the underlying mipmap levels adds 
no significant quality over what we already have in cairo.
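For contrast, that naive deletion variant is just point sampling (same 
toy grayscale setup as the sketch above), which is why it aliases 
exactly like sampling the full-size image directly:

/* "Row-column deletion" for a 2X minification: keep every other texel,
 * drop the rest -- i.e. nearest-neighbour decimation, no filtering. */
static void
halve_gray8_decimate (const uint8_t *src, int src_w, int src_h, uint8_t *dst)
{
    int dst_w = src_w / 2;
    int dst_h = src_h / 2;
    int x, y;

    for (y = 0; y < dst_h; y++)
        for (x = 0; x < dst_w; x++)
            dst[y * dst_w + x] = src[(2 * y) * src_w + 2 * x];
}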

I'm currently implementing mipmap generation for cairo... BUT I'm 
concerned about whether or not we can afford the initial mipmap 
generation cost in terms of workload.
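As a rough back-of-the-envelope figure: the full chain below an N-pixel 
image holds N/4 + N/16 + N/64 + ... = N/3 pixels, so building every 
level writes about a third as many pixels as the source contains (and 
reads roughly 4N/3 pixels in total, since each level is read once to 
produce the next).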

see next email.
Best,
-fred-




Bill Spitzak wrote:
>
>
> Owen Taylor wrote:
>> On Sat, 2008-04-19 at 12:16 -0700, Bill Spitzak wrote:
>>>> Owen Taylor wrote:
>>>>> I'm going to directly disagree here and suggest that for
>>>>> CAIRO_FILTER_GOOD, the right algorithm is:
>>>>>
>>>>>  - Scale down by factors of 2 repeatedly until you are less than
>>>>>    2 times the target scale factor
>>>>>  - Bilinearly sample from the result
>>>>>
>>>>> There are certainly disadvantages to this approach:
>>>>>
>>>>>  - Works worse with non-uniform scales (that contract more in one
>>>>>    direction than others)
>>> If you are going to do this each time it is quite possible to use a 
>>> different power of 2 horizontally than vertically.
>>
>> For pure scales, yes. But the transform could also contract the image
>> at a 45 degree angle to the axes.
>
> It would scale by the length of the transform of the vector (0,1) and 
> the length of the transform of the vector (1,0). I.e. the scaled image 
> would not be rotated; it would be scaled to be about the same size as 
> the result, just not rotated or skewed.
>
> I'm thinking this will work. The biggest question is whether it will 
> actually be faster. As I see it, the advantages are that the 
> integer-scale pass and the bilinear pass are so much simpler that they 
> could be programmed to be far faster.
>
> Disadvantages are that it seems to require a temporary buffer for the 
> scaled image, that two passes are made over the data, and that the 
> first pass will process pixels that may be clipped off in the final 
> image. Also, it is not going to do a great job with skew, but most 
> filtering acceleration fails at that anyway.
>
>> Yep. Locality is a bit more of an issue, overflow will be an issue at
>> some point (especially with mmx).
>
> If the scale goes beyond some factor it could switch to an algorithm 
> that does not overflow. For instance, in some code I am using, a scale 
> of less than 1/64 just uses the 1/64 filter with the centers spaced 
> further apart than 64 (i.e. it drops a lot of the pixels). At that 
> scale the result is so tiny that this seems acceptable.
>
>> I don't know offhand how a 2.7-times scale-down looks with bilinear
>> interpolation from a 2-times downscaled copy vs. a 3-times downscaled
>> copy... whether there is a significant improvement or not.
>
> I'm pretty certain you want the nearer-to-1 integer scale. The current 
> bilinear matches this for scales down to .5, and I don't think the 
> artifacts become objectionable until .5.
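To make the quoted idea concrete, here is a self-contained sketch of how 
the per-axis factors and the number of halvings could be derived (matrix 
entries follow the cairo_matrix_t xx/yx/xy/yy convention; the function 
itself is hypothetical, not cairo API):

#include <math.h>

/* Derive per-axis downscale factors from the lengths of the transformed
 * basis vectors -- (1,0) maps to (xx, yx), (0,1) maps to (xy, yy) --
 * then halve until the remaining factor is in (0.5, 1], i.e. the
 * intermediate image is less than 2x the target size, as in the scheme
 * quoted above.  Assumes a non-degenerate transform. */
static void
plan_downscale (double xx, double yx, double xy, double yy,
                int *halvings_x, int *halvings_y,
                double *residual_x, double *residual_y)
{
    double sx = sqrt (xx * xx + yx * yx);
    double sy = sqrt (xy * xy + yy * yy);

    *halvings_x = 0;
    *halvings_y = 0;

    while (sx > 0.0 && sx <= 0.5) { sx *= 2.0; (*halvings_x)++; }
    while (sy > 0.0 && sy <= 0.5) { sy *= 2.0; (*halvings_y)++; }

    /* what the final bilinear pass still has to apply */
    *residual_x = sx;
    *residual_y = sy;
}

For a pure 2.7-times downscale (sx = sy = 1/2.7, about 0.37) this yields 
one halving per axis and a residual bilinear factor of about 0.74.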


