[Pixman] [PATCH 7/7] utils.c: Increase acceptable deviation to 0.0064 in pixel_checker_t
sandmann at cs.au.dk
Wed Mar 6 20:35:15 PST 2013
Bill Spitzak <spitzak at gmail.com> writes:
> Søren Sandmann wrote:
>> However, unless using 16 bpp for the internal buffers is really a
>> performance win, there doesn't seem to be any benefits to doing it this
>> way over just dithering the final image. And I definitely want to see
>> convincing numbers before believing that 16 bpp for internal buffers is
>> a performance benefit.
> No, you are confusing dithering with just adding enough noise to hide
> dithering artifacts.
No, I'm talking about keeping the internal buffers as 32-bit pixels and
dithering only the final conversion to 565, as opposed to having 16-bit
internal buffers and dithering gradients down to 16 bpp before compositing.
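To make the distinction concrete, here is a minimal sketch (not pixman API;
all names are illustrative) of what "dithering the final conversion" means:
the pipeline composites in x8r8g8b8, and only the last step down to r5g6b5
applies an ordered (Bayer) threshold per pixel position.

```c
#include <stdint.h>

/* 4x4 ordered-dither (Bayer) matrix, thresholds 0..15 */
static const int bayer4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 },
};

/* Hypothetical final-conversion step: one x8r8g8b8 pixel to r5g6b5,
 * dithered by screen position (x, y). */
static uint16_t
dither_8888_to_565 (uint32_t pixel, int x, int y)
{
    int t = bayer4[y & 3][x & 3];      /* threshold in [0, 15] */
    int r = (pixel >> 16) & 0xff;
    int g = (pixel >> 8) & 0xff;
    int b = pixel & 0xff;

    /* The quantization step is 8 for the 5-bit channels and 4 for
     * the 6-bit green channel, so scale the threshold accordingly. */
    r += t >> 1;                       /* [0, 7]: one 5-bit step */
    g += t >> 2;                       /* [0, 3]: one 6-bit step */
    b += t >> 1;

    if (r > 255) r = 255;
    if (g > 255) g = 255;
    if (b > 255) b = 255;

    return (uint16_t) (((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```

In this arrangement the intermediate compositing arithmetic never sees
16 bpp values; the dither exists only at the very end of the pipeline.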
> Dithering involves adding the pattern to the source, such that a pixel
> of 5.5 is much more likely to end up >= 6 than a pixel of 5.0, and
> thus on average will be brighter.
It won't be brighter if the dither signal is 0 on average.
> Error diffusion will result in the highest quality result, such as
> really silky smooth gradients. However it is a pain to implement on
> the GPU so most schemes add patterns.
A good blue-noise pattern is very close in quality to error diffusion.
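For comparison, here is a minimal Floyd-Steinberg error-diffusion sketch for
one 8-bit channel quantized to 5 bits (illustrative code, not pixman's).
The serial dependence is visible: each pixel's error feeds the pixels to its
right and below, which is exactly what makes it awkward on a GPU, whereas an
ordered or blue-noise threshold is a pure per-pixel lookup.

```c
#include <stdint.h>
#include <stdlib.h>

/* Quantize an 8-bit channel to 5-bit levels (0..31) with
 * Floyd-Steinberg error diffusion. Writes levels into dst5. */
static void
fs_dither_to_5bit (const uint8_t *src, uint8_t *dst5, int width, int height)
{
    int *work = malloc (width * height * sizeof (int));
    int x, y, i;

    for (i = 0; i < width * height; i++)
        work[i] = src[i];

    for (y = 0; y < height; y++)
    {
        for (x = 0; x < width; x++)
        {
            int old = work[y * width + x];
            int clamped = old < 0 ? 0 : old > 255 ? 255 : old;
            int level = (clamped * 31 + 127) / 255;   /* nearest 5-bit level */
            int newv = (level * 255 + 15) / 31;       /* expanded back to 8 bit */
            int err = old - newv;

            dst5[y * width + x] = (uint8_t) level;

            /* Serial step: propagate the quantization error forward */
            if (x + 1 < width)
                work[y * width + x + 1] += err * 7 / 16;
            if (y + 1 < height)
            {
                if (x > 0)
                    work[(y + 1) * width + x - 1] += err * 3 / 16;
                work[(y + 1) * width + x] += err * 5 / 16;
                if (x + 1 < width)
                    work[(y + 1) * width + x + 1] += err * 1 / 16;
            }
        }
    }
    free (work);
}
```

A precomputed blue-noise threshold texture avoids that chain entirely while
keeping the quantization noise in high spatial frequencies, which is why its
visual quality comes close.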