[Pixman] [cairo] Gradient rendering seriously broken
sandmann at daimi.au.dk
Mon Aug 9 22:56:18 PDT 2010
"Joshua A. Andler" <scislac at gmail.com> writes:
> On Tue, 2010-08-10 at 04:02 +0200, Soeren Sandmann wrote:
> > Krzysztof Kosiński <tweenk.pl at gmail.com> writes:
> > > Hello
> > >
> > > My port of Inkscape to Cairo is running into more problems. While the
> > > image resampling problem only decreases quality, there are problems
> > > with gradients that cause completely wrong rendering. The problem is
> > > described in more detail (with images) on the bug tracker:
> > > https://bugs.freedesktop.org/show_bug.cgi?id=29470
> > For the aliasing issue, the solution is the same as the image
> > downscaling: add supersampling to pixman. The various image fetchers
> > should fetch a grid of subpixels for each output pixel and then
> > compute the average before storing the result in the output buffer.
> Krzysztof has already expressed a desire to work on this issue (an email
> of 7/21), but his emails to the list have gone unanswered. Most recently
> he asked where in the pixman code he should look for how interpolation
> is handled (an email from 8/3).
Let me answer those two mails then. One of them is a high-level overview
of how the image processing pipeline works in pixman and how downscaling
could fit in. The other is a mail that says, among other things:
In practical terms, this would mean changing the fetching in
pixman-bits-image.c to fetch subpixels and average them
together instead of fetching whole pixels.
A new function
pixman_image_set_resample_rate (image, rate_x, rate_y)
would be added, where rate_x and rate_y both default to 1. If
they are set to something other than 1, then a different fetch
function is installed that fetches rate_x times rate_y
subpixels per pixel, then averages them together. I don't
think this would be a huge amount of work to do.
Both those mails were linked from the same thread in which Krzysztof
asked his questions. In that thread, I also wrote:
Right now, pixman_image_composite (src, mask, dest) works more
or less like this:
For each destination pixel, a transformed location in the
source and mask images is computed. Then, based on the filter
attributes, interpolated values are computed for those
locations. Finally, those values are composited together with
the destination pixel and written back to the destination.
We need to modify this algorithm to work like this:
For each destination pixel, several transformed source/mask
locations are computed corresponding to a subpixel grid in the
destination pixel. The interpolated values for these locations
are then averaged together before being composited.
And then Krzysztof asks for more details about the algorithm and about
which file interpolation happens in.
Frankly, I'm not impressed.