[Openicc] Re: [Lcms-user] Profiling software testing

Graeme Gill graeme at argyllcms.com
Thu Mar 30 03:55:19 PST 2006


Wolf Faust wrote:
> 1. Fault tolerance: I haven't used the latest lprof version with the new 
> spline regression code, but I have seen some error values from Argyll CMS 
> users using the same regression routines. The extremely low error values 
> reported by ProfileChecker make me a bit sceptical that everything really 
> got better. While the new spline regression routine surely brings many 
> advantages, has anybody looked at the fault tolerance of the approximation 
> in practice? That is, how does the new routine behave if the target scan or 
> reference file is slightly faulty for whatever reason: serious 
> dust/scratches in the scan, scanner noise, reproduction faults in the 
> target, reproduction faults in the measurement? I wonder whether bad noise 
> is being incorporated into the profile, given reports from Argyll CMS users 
> of mean dE < 0.35 on batch-average slide film targets. 

I'd advise making some adjustments to the spline code before doing any serious
testing. In particular, you should ensure the following:

   in Argyll/rspl/scat.c line 1119, change
	double rwf[4] = { 0.1, 0.1, 0.1, 0.1 };
   to
	double rwf[4] = { 1.0, 1.0, 1.0, 1.0 };

   For the arguments to fit_rspl():

     Make sure that the default smoothing factor is 1.0

     Make sure that the default avgdev is 0.005 (0.5%)

   (I believe the above two are mapped to two sliders in LPROF).

otherwise the smoothing will be too low. (These are the changes
I've made for the V0.54 Argyll release, in light of more recent testing.)
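
As a purely illustrative sketch (the struct, the function name, and the
linear slider mapping below are my assumptions, not LPROF's or Argyll's
actual code), the two sliders might simply select the smoothing factor and
avgdev handed to the scattered data fit, with the defaults landing at 1.0
and 0.005:

    /* Hypothetical sketch: map two 0..100 UI sliders onto the smoothing
     * factor and avgdev passed to the scattered data fit.  A slider
     * position of 50 gives the suggested defaults of 1.0 and 0.005 (0.5%). */
    typedef struct {
        double smooth;   /* smoothing factor, default 1.0 */
        double avgdev;   /* expected average deviation, default 0.005 */
    } fit_settings;

    fit_settings sliders_to_settings(int smooth_pos, int avgdev_pos) {
        fit_settings fs;
        fs.smooth = (double)smooth_pos / 50.0;      /* 0..100 -> 0.0..2.0 */
        fs.avgdev = (double)avgdev_pos / 10000.0;   /* 0..100 -> 0.0..0.01 */
        return fs;
    }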
	
(Note that LPROF using just the Argyll scattered data spline code will not
produce exactly the same profiles as Argyll does, since Argyll also generates
per-device curves, as well as offering the option of matrix/shaper
profiles, etc.)

> In order to test and compare profilers, I would strongly recommend 
> generating test data that covers the most extreme cases. Let me make a 
> suggestion: I am willing to produce five 35mm individually measured slides. 
> I would suggest the new Velvia films with their extreme color gamut. One 
> slide is a standard IT8 target and the other four slides are >1000 test 
> patches spread all over the RGB space, also covering tricky areas (highly 
> saturated colors, colors near DMin/DMax, ...). 

A mechanism I used to test profiling in Argyll was to generate device data
with a relatively large number of test patches (say 6000), and then use splitcgats
<http://www.argyllcms.com/doc/splitcgats.html> to split the test set into
two parts: one part to generate a profile from, and the other to verify the
resulting profile against (http://www.argyllcms.com/doc/profcheck.html).
Splits of various sizes were used to test behaviour with different chart
sizes. This is very similar to the type of testing used for genetic algorithms.
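
Purely as a sketch of that idea (the patch type, delta_e76() and the profile
lookup callback here are hypothetical stand-ins, not the splitcgats/profcheck
code), hold-out verification amounts to keeping a subset of the measured
patches out of the fit and reporting how well the resulting profile predicts
them:

    /* Illustrative sketch of hold-out verification.  Every 'stride'-th
     * patch is held back for checking; the remainder would be used to
     * build the profile.  The lookup callback stands in for "run the
     * device values through the profile built from the fitting subset". */
    #include <math.h>

    typedef struct {
        double dev[3];   /* device RGB of the patch */
        double lab[3];   /* measured Lab of the patch */
    } patch;

    /* CIE76 colour difference between two Lab values. */
    double delta_e76(const double a[3], const double b[3]) {
        double dL = a[0] - b[0], da = a[1] - b[1], db = a[2] - b[2];
        return sqrt(dL * dL + da * da + db * db);
    }

    /* Return the mean dE over the held-out patches. */
    double verify_holdout(const patch *p, int n, int stride,
                          void (*lookup)(const double dev[3], double lab[3])) {
        double sum = 0.0;
        int nver = 0;
        for (int i = 0; i < n; i++) {
            if (i % stride != 0)
                continue;            /* fitting-subset patch, skip here */
            double pred[3];
            lookup(p[i].dev, pred);  /* profile's predicted Lab */
            sum += delta_e76(pred, p[i].lab);
            nver++;
        }
        return nver > 0 ? sum / nver : 0.0;
    }

Varying the split size (the stride above) is what gives the different
effective chart sizes.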

This is useful, but not perfect, as it led me to end up with smoothing
factors that were somewhat too low (hence the subsequent adjustments above!).

> I have used this method here for testing a number of issues. If the 
> approximation is smooth, has good fault tolerance and the 1000 patches show 
> low error values... then I guess one can assume the profiler works very 
> well... but this is not easy to achieve with slide films ;-) 

A wrong patch when generating a LUT-based profile will almost always cause
noticeable disturbances, because the nature of the LUT is to try to fit
each patch. A matrix/shaper type profile will be much more resistant
to such an error. Bumping up the smoothing factors used in the scattered
data fit will also improve robustness against such an error, but at the
expense of color accuracy.

A profile fit report is usually the best way of picking up such a problem.

The best thing, of course, is not to have such a wrong patch in the test set!
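
As a rough sketch of the kind of check a fit report makes possible (the
threshold and the function name are mine, not Argyll's), a suspect patch
usually stands out as a fit error well above the bulk of the data:

    /* Hypothetical sketch: given per-patch fit errors from a profile fit
     * report, flag any patch whose dE is more than three standard
     * deviations above the mean, which is typical of a dust/scratch or
     * mismeasured patch. */
    #include <math.h>
    #include <stdio.h>

    void flag_suspect_patches(const double *de, int n) {
        if (n <= 0)
            return;
        double mean = 0.0, var = 0.0;
        for (int i = 0; i < n; i++)
            mean += de[i];
        mean /= n;
        for (int i = 0; i < n; i++)
            var += (de[i] - mean) * (de[i] - mean);
        double sd = sqrt(var / n);
        for (int i = 0; i < n; i++)
            if (de[i] > mean + 3.0 * sd)
                printf("patch %d: dE %.2f looks suspect (mean %.2f)\n",
                       i, de[i], mean);
    }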

The smoothing nature of the spline is designed to cope with
random measurement error, and the avgdev parameter was intended as
a means to adjust the smoothness to match the level of reproduction
and measurement error (if it is known, or can be estimated).
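
For example (a sketch using my own naming, not Argyll's API), avgdev can be
estimated by measuring the same chart twice and taking the average deviation
of the paired readings as a fraction of the full measurement range; a result
around 0.005 corresponds to the 0.5% default above:

    /* Hypothetical sketch: estimate avgdev from two measurement runs of
     * the same chart.  'range' is the full scale of the measured values
     * (e.g. 100.0 for L*), so the result is a 0..1 fraction. */
    #include <math.h>

    double estimate_avgdev(const double *run1, const double *run2,
                           int nvals, double range) {
        double sum = 0.0;
        for (int i = 0; i < nvals; i++)
            sum += fabs(run1[i] - run2[i]) / 2.0;  /* deviation from the pair mean */
        return (sum / nvals) / range;
    }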

> This struck me as a very good idea and I would like to pursue it. Is 
> anyone here interested in assisting with this by providing high-quality 
> scans of the custom slides that Wolf is willing to produce for this effort?

I don't have a specialized slide scanner, but I do have an Epson 4990,
if that is of any interest.

Graeme Gill.

