[Openicc] Drop size calibration

Robert Krawitz rlk at alum.mit.edu
Sun Jan 27 17:39:48 PST 2008


   Date: Mon, 28 Jan 2008 12:26:17 +1100
   From: Graeme Gill <graeme at argyllcms.com>

   Robert Krawitz wrote:
   > I'm experimenting with another approach to drop size calibration.
   > This uses the new segmented dither algorithm to print stripes using
   > different drop sizes.

   I guess I'm a bit puzzled as to why you need such a calibration.
   By definition a larger drop is going to result in higher density
   than a smaller drop. So as long as the dither/screen progresses
   through the dots in order, the result should be monotonic, even if
   it's not very linear. As long as the input precision to the
   screen/dither is high enough, and the resulting raw screen transfer
   curve is smooth enough, the calibration will linearize the
   result. Of course to avoid banding artefacts it is important to
   cross over from one dot size to the other (ie. there needs to be
   overlap of the different dot sizes), so that the pattern never
   "fills up" at intermediate dot sizes.

It does cross over smoothly between two drop sizes (and in the case of
the Ordered New algorithm, between three drop sizes).  The reason for
the calibration is precisely to get reasonably good linearization out
of the box.  People who use corrections will have less to correct for,
hence less loss of precision.

The basic way all of the dither algorithms (other than Ordered New)
work is that they dither between two drop sizes, the smaller and
larger ones that are active at the given input level.  If the dither
function returns "off", the smaller drop size is used; if it returns
"on", the larger size is used, so the effect is that the larger drop
size always gets the better placement.  Ordered New isn't really all
that different; it just uses 3 drop sizes rather than 2, which may
help improve smoothness near the transition.  Of course, the smaller
drop size (or smaller two drop sizes) may be the empty drop...
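
To make that concrete, here's a rough sketch in C.  It is not the
actual Gutenprint dither code -- the drop_value table, the bayer4
matrix, and dither_pixel() are all invented for illustration, and a
plain Bayer ordered dither stands in for the real segmented dither --
but it shows the mechanism: find the two drop sizes that bracket the
input level, and let a threshold decide which of the two gets printed
at each pixel, so the larger drop always gets the better placement.

/*
 * Illustrative sketch only -- not the Gutenprint dither code.
 * Drop 0 is "no drop"; the values are the relative darkness of each
 * drop scaled to the 16-bit input range (made-up numbers).
 */
#include <stdio.h>

static const unsigned drop_value[] = { 0, 21845, 43690, 65535 };
#define N_DROPS (sizeof(drop_value) / sizeof(drop_value[0]))

/* A tiny 4x4 Bayer matrix, scaled to 0..65535 when used. */
static const unsigned bayer4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 }
};

/* Pick the drop to print at (x, y) for a 16-bit input level. */
static unsigned dither_pixel(unsigned level, int x, int y)
{
    unsigned lower = 0, upper = 1;

    /* Solid coverage: largest drop everywhere. */
    if (level >= drop_value[N_DROPS - 1])
        return N_DROPS - 1;

    /* Find the two active drop sizes that bracket the input level. */
    while (level >= drop_value[upper])
        lower = upper++;

    /* How far the level sits between the two drops, 0..65535. */
    unsigned span = drop_value[upper] - drop_value[lower];
    unsigned frac = (unsigned)
        (((unsigned long)(level - drop_value[lower]) * 65535u) / span);

    /* Ordered-dither threshold in the same 0..65535 range. */
    unsigned threshold = bayer4[y & 3][x & 3] * 65536u / 16;

    /* "On" -> larger drop; "off" -> smaller drop (maybe no drop). */
    return (frac > threshold) ? upper : lower;
}

int main(void)
{
    /* Print an 8x8 patch at a level between drops 1 and 2. */
    unsigned level = 30000;
    for (int y = 0; y < 8; y++) {
        for (int x = 0; x < 8; x++)
            printf("%u ", dither_pixel(level, x, y));
        printf("\n");
    }
    return 0;
}

Run as-is, it prints an 8x8 patch of drop indices for a level that
falls between two of the made-up drop values; the mix of the two
indices tracks the fractional position of the level between them.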

If the drop sizes aren't correctly calibrated, smooth gradients show a
variety of odd nonlinearities that sometimes look non-monotonic.  I
know that they aren't actually non-monotonic, but it does yield some
weird-looking gradients.

Another question: in the long run, do you think 16 bits of input
precision are sufficient, or should we be moving to 31 or 32 bits?
We have a lot of 16-bit assumptions in the data path, and if we should
be moving to higher bit depths, it's something we'd need to look at
closely.
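
As a rough illustration of why the precision matters (again just a
sketch, nothing from the Gutenprint tree): push all 65,536 16-bit
input codes through a made-up gamma-2.2 correction curve that also
outputs 16 bits, and count how many distinct output codes survive.
The shallow end of any such curve maps many inputs onto the same
output, which is the loss of precision that better out-of-the-box
linearization -- or a wider data path -- would reduce.

/* Sketch only: count how many 16-bit output codes survive a
 * 16-bit -> 16-bit correction curve.  The gamma-2.2 curve here is
 * hypothetical; the point is that any correction applied at 16 bits
 * collapses some input codes together. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    static unsigned char used[65536];
    unsigned distinct = 0;

    for (unsigned in = 0; in < 65536; in++) {
        /* Hypothetical correction curve, quantized back to 16 bits. */
        unsigned out = (unsigned)(65535.0 * pow(in / 65535.0, 2.2) + 0.5);
        if (!used[out]) {
            used[out] = 1;
            distinct++;
        }
    }
    printf("distinct 16-bit output codes: %u of 65536\n", distinct);
    return 0;
}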

-- 
Robert Krawitz                                     <rlk at alum.mit.edu>

Tall Clubs International  --  http://www.tall.org/ or 1-888-IM-TALL-2
Member of the League for Programming Freedom -- mail lpf at uunet.uu.net
Project lead for Gutenprint   --    http://gimp-print.sourceforge.net

"Linux doesn't dictate how I work, I dictate how Linux works."
--Eric Crampton

