[CREATE] Lens correction database
Øyvind Kolås
islewind at gmail.com
Tue Sep 11 11:25:53 PDT 2007
On 9/11/07, Andrew Zabolotny <zap at homelink.ru> wrote:
> > If you want to support all kinds of pixel formats, including alpha,
> > you need to take premultiplied / non-premultiplied alpha into
> > account as well, since treating each component the same on
> > non-premultiplied (normal) RGBA data leads to color-mixing
> > artifacts.
> Ugh, wouldn't it be enough if the library would be able to skip the
> alpha channel data? Do you want to say that every plugin for gimp 3.x
> will have to deal with all this stuff independently?
Nope. GEGL operations that do any form of pixel value mixing
(interpolation, blurring and similar) will request already
premultiplied buffers, which means that red, green, blue and alpha can
be treated exactly the same. The situation you fear is the current
situation in GIMP 2.x, where every plug-in, the downscaling code for
the view projection, internal functions etc. have to deal with it
independently. In very old versions of GIMP you would get halos around
objects when applying for instance a Gaussian blur, because the
"colors" of transparent pixels around objects were mixed in with the
colors of the objects, or semi-transparent pixels were given too large
a weight.
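To make the halo concrete, here is a small sketch (plain Python, not
GEGL code; the pixel values are illustrative) of averaging two RGBA
pixels with and without premultiplication:

```python
# Averaging an opaque red pixel with a fully transparent neighbour
# whose stored "color" happens to be black (common in transparent
# regions), to show where the dark halo comes from.

def average(p, q):
    return tuple((a + b) / 2 for a, b in zip(p, q))

red         = (1.0, 0.0, 0.0, 1.0)   # non-premultiplied RGBA
transparent = (0.0, 0.0, 0.0, 0.0)

# Naive per-component averaging of non-premultiplied data: the black
# of the invisible pixel darkens the result -> dark halo.
naive = average(red, transparent)    # (0.5, 0.0, 0.0, 0.5)

def premultiply(p):
    r, g, b, a = p
    return (r * a, g * a, b * a, a)

def unpremultiply(p):
    r, g, b, a = p
    if a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    return (r / a, g / a, b / a, a)

# Premultiply first, average, then unpremultiply: the result is
# half-transparent but still fully red -- no halo.
correct = unpremultiply(average(premultiply(red),
                                premultiply(transparent)))
# -> (1.0, 0.0, 0.0, 0.5)
```

With premultiplied buffers the mixing step itself needs no special
casing, which is exactly why GEGL can treat all four components alike.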
> So basically I see here RGB8, RGB16, RGB32, RGB-FP32 and RGB-FP64. The
> chroma, luma, lab and all kinds of funny color spaces are not of
> particular interest to lensfun because all lens models deal with RGB
> colors, as I stated earlier.
Yes, but for the esoteric formats I would be able to deal with them
myself, provided I could generate floating-point displacement buffers
(with relative displacements or absolute coordinates).
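A hypothetical sketch of how such a displacement buffer could be
consumed (the names and the nearest-neighbour sampling are illustrative
only, not lensfun or GEGL API):

```python
# coords[y][x] holds the absolute source coordinate (sx, sy) for
# destination pixel (x, y); a relative buffer converts to absolute
# form by adding the destination position.

def to_absolute(displacement, width, height):
    """Turn a relative (dx, dy) buffer into absolute coordinates."""
    return [[(x + displacement[y][x][0], y + displacement[y][x][1])
             for x in range(width)] for y in range(height)]

def resample(image, coords, width, height):
    """Nearest-neighbour resample driven by an absolute-coordinate
    displacement buffer (clamped at the image edges)."""
    out = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            sx, sy = coords[y][x]
            ix = min(max(int(round(sx)), 0), width - 1)
            iy = min(max(int(round(sy)), 0), height - 1)
            out[y][x] = image[iy][ix]
    return out
```

A zero displacement buffer then reproduces the input image unchanged;
a real consumer would substitute proper interpolation for the
nearest-neighbour lookup.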
>> The only issue I see with this is that for some very severe
>> distortions the center of the displaced coordinates is not enough
>> if the resulting transformation involves significant downscaling,
>> this is probably not a concern for most uses though.
>
> Can't understand this. Do you mean there are lenses that have several
> centers of distortions?
No, but interpolation isn't the correct way to resample the image data
in all cases. If the distortion leads to an expansion of a region of
the image, interpolation is the correct thing to do, and all we need
to know is the source coordinates. But if the distortion leads to a
shrinkage of the image, interpolation is not sufficient; we also need
to know how large the footprint of the reverse transform of a
destination pixel's coordinates is. Imagine using bilinear
interpolation: bilinear interpolation only uses a 2x2 neighbourhood
around the coordinates for computing the interpolated value. If you
imagine a severe distortion that scales this neighbourhood of the
image down to 33%, you'll see that one would need at least a 3x3
neighbourhood to resample the data correctly. One can probably either
ignore this issue, since it is minor and negligible for most lenses,
or transform the corners of the destination pixel and examine the
distances between the resulting source coordinates.
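The corner-transform heuristic can be sketched like this (the radial
model and all names here are stand-ins for illustration, not a real
lens profile or lensfun API):

```python
import math

def distort(x, y, cx, cy, k):
    """Toy radial model: source coordinate for destination (x, y),
    distorting around center (cx, cy) with strength k."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k * r2
    return (cx + dx * scale, cy + dy * scale)

def footprint(x, y, cx, cy, k):
    """Map the four corners of destination pixel (x, y) to source
    space and return the longest resulting edge length: roughly how
    many source pixels one destination pixel covers."""
    corners = [distort(x + ox, y + oy, cx, cy, k)
               for ox, oy in ((0, 0), (1, 0), (1, 1), (0, 1))]
    edges = zip(corners, corners[1:] + corners[:1])
    return max(math.hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in edges)
```

Where the footprint stays near 1, a 2x2 bilinear neighbourhood is
enough; a footprint of 3 would call for roughly a 3x3 neighbourhood
(or prefiltering) to avoid undersampling the shrunken region.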
/Øyvind K.
--
«The future is already here. It's just not very evenly distributed»
-- William Gibson
http://pippin.gimp.org/ http://ffii.org/