[Openicc] Drop size calibration

Graeme Gill graeme at argyllcms.com
Mon Jan 28 12:31:17 PST 2008


Robert Krawitz wrote:

> the Ordered New algorithm, between three drop sizes).  The reason for
> the calibration is precisely to get reasonably good linearization out
> of the box.  People who use corrections will have less to correct for,
> hence less loss of precision.

This seems like a lot of trouble for something that is quite
easily taken care of as part of the calibration.
Being a semi-manual process that results in tables of "magic" numbers,
it would seem hard to scale to multiple printers and media.

> The basic way all of the dither algorithms (other than Ordered New)
> work is that they dither between two drop sizes, the smaller and
> larger ones that are active at the given input level.  If the dither
> function returns "off", the smaller drop size is used; if it returns
> "on", the larger size is used, so the effect is that the larger drop
> size always gets the better placement.  Ordered New isn't really all
> that different; it just uses 3 drop sizes rather than 2, which may
> help improve smoothness near the transition.  Of course, the smaller
> drop size (or smaller two drop sizes) may be the empty drop...

Right, yes: you need to be able to mix three drop sizes to avoid
banding issues as the screen gets close to 100% of one dot size
(which is what I meant by the overlap of different dot sizes).
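
To make the mechanism concrete, here is a rough sketch in C of the
two-drop-size scheme Robert describes. It is not actual Gutenprint code:
the matrix, the transition point and all the names are invented for
illustration, and real drivers have per-size densities and more sizes.

/* Sketch of dithering between two active drop sizes: at any input level
 * a pair of adjacent sizes is active, and the dither function decides per
 * pixel which of the two to place. */
#include <stdint.h>
#include <stdio.h>

typedef enum { DROP_NONE = 0, DROP_SMALL, DROP_LARGE } drop_t;

/* Tiny 4x4 ordered-dither matrix, thresholds spread over 0..65535. */
static const uint16_t bayer4[4][4] = {
    {  2048, 34816, 10240, 43008 },
    { 51200, 18432, 59392, 26624 },
    { 14336, 47104,  6144, 38912 },
    { 63488, 30720, 55296, 22528 },
};

/* Assumed transition point: below it we mix "none" and "small" drops,
 * above it we mix "small" and "large" drops. */
#define SMALL_FULL 32768u

static drop_t choose_drop(uint16_t level, int x, int y)
{
    drop_t lo, hi;
    uint32_t frac;   /* position between lo and hi, scaled to 0..65535 */

    if (level < SMALL_FULL) {
        lo = DROP_NONE;  hi = DROP_SMALL;
        frac = (uint32_t)level * 65535u / SMALL_FULL;
    } else {
        lo = DROP_SMALL; hi = DROP_LARGE;
        frac = (uint32_t)(level - SMALL_FULL) * 65535u / (65535u - SMALL_FULL);
    }

    /* Dither "on" -> the larger drop gets this (better placed) position,
     * "off" -> the smaller drop, which may be the empty drop. */
    return frac > bayer4[y & 3][x & 3] ? hi : lo;
}

int main(void)
{
    /* Show which drop a mid-tone input level produces over one matrix tile. */
    for (int y = 0; y < 4; y++) {
        for (int x = 0; x < 4; x++)
            printf("%d ", choose_drop(20000, x, y));
        printf("\n");
    }
    return 0;
}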

> If the drop sizes aren't correctly calibrated, smooth gradients show a
> variety of odd nonlinearities that sometimes look non-monotonic.  I
> know that they aren't actually non-monotonic, but it does yield some
> weird looking gradients.

Sufficient overlap should smooth some of the worst of this out,
since it's the near 100% fill that has the highest dot gain,
and calibration should be capable of taking care of the rest.

> Another question: in the long run, do you think 16 bits of input
> precision are sufficient, or should we be moving to 31 or 32 bits?
> We have a lot of 16-bit assumptions in the data path, and if we should
> be moving to higher bit depths, it's something we'd need to look at
> closely.

I would imagine 16 bits is more than enough. Given that visually we can
only perceive something like 8 bits when the quantization is perceptually
uniform, particularly in systems such as screens that add high frequency
noise, 8 bits of "guard band" should be enough to give control over a
system that has a slope of up to 1:256 in the mapping between device input
and linear perceived output. Such extreme behaviour would be quite hard to
characterize anyway, since even a fairly detailed test chart (say with 256
test patches per wedge) would sample a 1:256-slope region very poorly -
i.e. you'd never set up a device that actually requires 16 bits in
practice, it would simply appear to be too broken.
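
As a back-of-envelope check of that figure (using the 8 visible bits and
the 1:256 worst-case slope assumed above - nothing measured here):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double visible_bits = 8.0;   /* ~8 bits perceivable with perceptually
                                    uniform quantization plus screen noise */
    double worst_slope  = 256.0; /* assumed worst-case slope of the device
                                    input -> perceived output curve */

    /* To keep one perceptual step per input code even on the steepest part
       of the curve, you need log2(slope) extra "guard band" bits. */
    double needed_bits = visible_bits + log2(worst_slope);

    printf("input bits needed: %.0f\n", needed_bits);  /* prints 16 */
    return 0;
}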

Getting more than 16 bits out of a screen is starting to be hard work too.
In practice I never saw any issues that could be attributed to
using only 14 bits for the raw input to a screen, although 16 bits
is easier to implement and gives an even larger margin.
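
For illustration only (again not Argyll or Gutenprint code, and the
threshold pattern is just a placeholder): feeding a 16-bit contone value
to a screen whose thresholds only have 14-bit resolution amounts to
dropping the bottom two bits before the comparison.

#include <stdint.h>
#include <stdio.h>

static uint16_t threshold14(int x, int y)
{
    /* Placeholder threshold pattern in 0..16383, purely for illustration. */
    return (uint16_t)(((x * 41 + y * 113) % 128) * 128);
}

static int screen_pixel(uint16_t contone16, int x, int y)
{
    uint16_t contone14 = contone16 >> 2;   /* drop the bottom two bits */
    return contone14 > threshold14(x, y);  /* 1 = place a dot */
}

int main(void)
{
    /* Print one screened row of a flat 16-bit input value. */
    for (int x = 0; x < 16; x++)
        putchar(screen_pixel(30000, x, 0) ? '#' : '.');
    putchar('\n');
    return 0;
}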

Graeme Gill.



