[Openicc] Drop size calibration

Robert Krawitz rlk at alum.mit.edu
Mon Jan 28 16:50:50 PST 2008


   Date: Tue, 29 Jan 2008 07:31:17 +1100
   From: Graeme Gill <graeme at argyllcms.com>

   Robert Krawitz wrote:

   > the Ordered New algorithm, between three drop sizes).  The reason for
   > the calibration is precisely to get reasonably good linearization out
   > of the box.  People who use corrections will have less to correct for,
   > hence less loss of precision.

   This seems like a lot of trouble for something that is quite easily
   taken care of as part of the calibration.  Being a semi-manual
   process that results in tables of "magic" numbers, it would seem
   hard to scale to multiple printers and media.

What I do is calibrate each printer (or really, each family of
printers) and its set of drop sizes this way.  I normally use a
single paper (Epson Glossy Photo Paper) for the purpose.  Again, it's
a compromise.

   > The basic way all of the dither algorithms (other than Ordered
   > New) work is that they dither between two drop sizes, the smaller
   > and larger ones that are active at the given input level.  If the
   > dither function returns "off", the smaller drop size is used; if
   > it returns "on", the larger size is used, so the effect is that
   > the larger drop size always gets the better placement.  Ordered
   > New isn't really all that different; it just uses 3 drop sizes
   > rather than 2, which may help improve smoothness near the
   > transition.  Of course, the smaller drop size (or smaller two
   > drop sizes) may be the empty drop...

   Right, yes, you need to be able to mix three drop sizes to avoid
   banding issues as the screen gets close to 100% of one dot size
   (which was what I meant by overlap of different dot sizes).

I think you're right here.  We'll need to figure out error
diffusion/EvenTone-type algorithms that use three drop sizes as well.
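
To make the two-size case concrete, here's a rough sketch of the
per-pixel decision, not the actual Gutenprint code; the names and the
16-bit threshold range are purely illustrative:

    typedef struct {
        unsigned range_lo;  /* input level where this size becomes active */
        int drop_size;      /* 0 = empty drop, 1..n = physical sizes */
    } drop_range_t;

    /* Choose between the two adjacent drop sizes bracketing the
     * input level.  threshold is the screen value (0..65535) at this
     * pixel position; assumes smaller->range_lo <= input <
     * larger->range_lo. */
    static int
    pick_drop(unsigned input, unsigned threshold,
              const drop_range_t *smaller, const drop_range_t *larger)
    {
        unsigned span = larger->range_lo - smaller->range_lo;
        unsigned pos  = input - smaller->range_lo;

        /* Screen "on" -> the larger drop gets the (better) placement;
         * "off" -> the smaller drop, which may be the empty drop. */
        if ((unsigned long long) pos * 65535 / span > threshold)
            return larger->drop_size;
        return smaller->drop_size;
    }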

   > If the drop sizes aren't correctly calibrated, smooth gradients
   > show a variety of odd nonlinearities that sometimes look
   > non-monotonic.  I know that they aren't actually non-monotonic,
   > but it does yield some weird looking gradients.

   Sufficient overlap should smooth some of the worst of this out,
   since it's the near 100% fill that has the highest dot gain, and
   calibration should be capable of taking care of the rest.

Yup, 50% overlap seems to work well.
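
To illustrate what the overlap buys (the ramp endpoints below are
made up, not our actual tables): with a 50% overlap the larger drop
starts coming in halfway through the smaller drop's ramp-up, so the
screen never has to sit at nearly 100% of a single size, which is
where dot gain is worst.

    #include <stdio.h>

    /* The small drop ramps up over [0, 2/3] of the input range, the
     * large drop over [1/3, 1]; the shared [1/3, 2/3] region is half
     * of each ramp -- a 50% overlap.  The small drop ramps back down
     * as the large one takes over. */
    int main(void)
    {
        for (double in = 0.0; in <= 1.0001; in += 0.1) {
            double small = in < 2.0 / 3 ? in * 3 / 2 : (1.0 - in) * 3;
            double large = in < 1.0 / 3 ? 0.0 : (in - 1.0 / 3) * 3 / 2;
            printf("input %.1f: small %.2f  large %.2f\n",
                   in, small < 0 ? 0 : small, large);
        }
        return 0;
    }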

   Getting more than 16 bits out of a screen is starting to be hard work
   too.  In practice I never saw any issues that could be attributed
   to using only 14 bits for the raw input to a screen, although 16
   bits is easier to implement and gives an even larger margin.

As you point out, it's easier just to use native machine precision.
For an ordered (array-based) screen, the next step up from 16 bits
(32 bits) isn't practical, although we could do the internal
computations at 32 bits and apply a 16-bit screen at the end.  Beyond
that, it's really just more memory and CPU, and memory and CPU are
cheap these days :-)  Even cache is becoming cheaper :-)
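
In sketch form (not our actual code; the screen dimensions are made
up, and a screen that actually delivered 2^32 distinct levels would
need on the order of 2^32 cells, which is why the array itself stays
at 16 bits):

    #include <stdint.h>

    #define SCREEN_W 256    /* illustrative dimensions */
    #define SCREEN_H 256

    extern const uint16_t screen[SCREEN_H][SCREEN_W];

    /* All the upstream math (linearization, transfer curves) can be
     * carried at 32 bits; only the top 16 bits reach the screen. */
    static int
    screen_pixel(uint32_t value, int x, int y)
    {
        uint16_t v16 = (uint16_t) (value >> 16);
        return v16 > screen[y % SCREEN_H][x % SCREEN_W];
    }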

The real memory hog with Epson printers, particularly at high
resolutions, is the weave buffers.  These scale with the horizontal
and vertical resolutions *and* with the number of nozzles, the nozzle
spacing, and the horizontal spacing of drops as they're printed.
Their size grows a whole lot faster than the horizontal width alone,
even if we were to go to 128 bits.
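
For a sense of scale, here's a back-of-the-envelope calculation;
every number in it is made up for illustration, not taken from any
particular printer:

    #include <stdio.h>

    int main(void)
    {
        long hres = 2880;            /* dots per inch, horizontal */
        long vres = 1440;            /* rows per inch, vertical */
        long nozzles = 180;
        long nozzle_pitch = 90;      /* nozzles per inch on the head */
        long bits_per_dot = 2;       /* several drop sizes per position */
        double width_in = 8.5;

        /* Rows between adjacent nozzles at this vertical resolution. */
        long separation = vres / nozzle_pitch;
        /* The number of rows in flight grows with nozzle count and
         * spacing... */
        long rows = nozzles * separation;
        /* ...and each row grows with the horizontal resolution. */
        long row_bytes = (long) (hres * width_in) * bits_per_dot / 8;

        printf("~%ld MiB of weave buffer per channel\n",
               rows * row_bytes / (1024 * 1024));
        return 0;
    }

Double the horizontal resolution and the nozzle count and that figure
quadruples, where a wider page alone would only scale it linearly.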

-- 
Robert Krawitz                                     <rlk at alum.mit.edu>

Tall Clubs International  --  http://www.tall.org/ or 1-888-IM-TALL-2
Member of the League for Programming Freedom -- mail lpf at uunet.uu.net
Project lead for Gutenprint   --    http://gimp-print.sourceforge.net

"Linux doesn't dictate how I work, I dictate how Linux works."
--Eric Crampton

