[Openicc] meta data in test chart

Chris Murphy lists at colorremedies.com
Wed Jan 26 23:31:29 PST 2011



On Jan 26, 2011, at 10:14 PM, Graeme Gill wrote:

> Chris Murphy wrote:
>> On Jan 25, 2011, at 4:49 PM, Graeme Gill wrote:
>> 
>>> I don't think it would be useful in itself. If you construct a V4 ProPhoto profile
>>> that maps the ProPhoto gamut to the PRMG in the A2B, you'll end up with rather dull
>>> images.
>> 
>> I don't think it works that way. The goal isn't that you take the entire ProPhoto RGB
>> space and smash it into the PRMG.
> 
> But what way does it work then ? :-) If you're going to pick some gamut other
> than the colorspace gamut, how do you pick it ? What source gamut do you
> assume ? You can guess, or pick something arbitrary, but that guess will be
> wrong most of the time.

Subjective != arbitrary. It is subjective; that's the nature of the perceptual intent. Some of these colors clearly can never be printed and are just going to have to get clipped. That's part of how it works. You don't map the entire gamut into the PRMG. And even now, in a v2 context, you're not mapping all of ProPhoto to the output space, just the portion that the profile-building tool assumed at the time the output device profile was built, and that assumption is certainly wrong all of the time. Compare that to the v4 + PRMG paradigm, which is a dual mapping: from source space to the PRMG (the source profile's job), and then from the PRMG to the output space (the output device profile's job).
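
To make the dual mapping concrete, here is a rough Python sketch of the two pipelines. Every function in it is a hypothetical stand-in for the gamut mappings that profile builders bake into the perceptual tables (real profiles carry 3D LUTs, not clips); none of it is a real API.

    # Hypothetical placeholder mappings so the sketch runs; real profiles
    # carry 3D LUTs built by the profiling tool, not simple clips.
    def map_assumed_source_to_printer(rgb):
        return [min(max(c, 0.0), 1.0) for c in rgb]

    def map_prophoto_to_prmg(rgb):
        return [min(max(c, 0.0), 1.0) for c in rgb]

    def map_prmg_to_printer(rgb):
        return [min(max(c, 0.0), 1.0) for c in rgb]

    def v2_perceptual(rgb):
        # One mapping, built from whatever source gamut the output-profile
        # vendor assumed at profile-build time.
        return map_assumed_source_to_printer(rgb)

    def v4_prmg_perceptual(rgb):
        # Two mappings that meet at a standard intermediate gamut, the PRMG:
        # the source profile handles source -> PRMG, and the output profile
        # handles PRMG -> print.
        return map_prmg_to_printer(map_prophoto_to_prmg(rgb))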

Ann McCarthy and Jack Holm worked a lot on the sRGB v4 PRMG mapping, so either of them can answer this vastly better than I can.

> ie. You're just confirming that as soon as you move to encoding
> colorspaces that are significantly larger than real devices,
> gamuts need to be part of the information flow, just like
> colorspaces themselves.

The PRMG concept hasn't been tested with anything other than sRGB, so it's unclear how large a source space it can usefully handle. But my conversations with Jack Holm convinced me that it would be possible to do this for ProPhoto RGB and get better perceptual-intent results than we presently do.



> 
>> The effort with mapping sRGB to the PRMG (and back
>> for re-rendering) was not restricted to the idea of taking the entire gamut boundary
>> and mapping it to the PRMG. Some real colors that can be captured, that end up being
>> slightly less saturated in sRGB due to gamut restriction end up printing MORE saturated
>> because the mapping to the PRMG allows for gamut expansion, not just compression,
>> to/from the PRMG. sRGB was probably a harder case because of where its primaries and
>> implied constant hue lines are compared to the PRMG.
> 
> Which is great if you want that expansion, but not so good if you actually
> want to choose what intent you get.

We have three intents; that's not exactly a lot of granularity when it comes to rendering. But I think a slider or additional knobs would make it less intuitive without substantially improving user satisfaction or the results. So yeah, if you like the perceptual results, great. If not, oh well, please play again. I think there's every reason to believe that a v4 + PRMG ProPhoto profile feeding a v4 + PRMG output device profile would yield better perceptual results than a v2 workflow does. It's just that the v4 + PRMG workflow is comatose, and as such pretty pointless right now.

> 
>> While we call ProPhoto a wide gamut space it isn't that huge. The vast majority of the
>> colors are still real (visible) colors and many of them are capturable which is why
>> Reference Output Medium Metric was concocted in the first place.
> 
> That may be so, but the blue primary in particular is way out there. I think
> it is beyond the spectrum locus (it certainly breaks CIECAM02).

It's not a real color, it shouldn't even be considered in gamut mapping. A hypothetical v4 + PRMG ProPhoto profile would almost certainly exclude imaginary colors from being mapped to the PRMG. So by the time the output device profile got ahold of the image data, it would already be smashed into the PRMG.
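
For a sense of how far out that primary is, here is a quick back-of-the-envelope in Python using the ROMM RGB chromaticities (as published in the ROMM RGB / ISO 22028-2 specification); it just converts xy to XYZ at unit luminance, it is not a gamut test.

    # xyY -> XYZ at unit luminance for the ROMM RGB (ProPhoto) primaries.
    def xyY_to_XYZ(x, y, Y=1.0):
        return (x * Y / y, Y, (1.0 - x - y) * Y / y)

    primaries = {
        "red":   (0.7347, 0.2653),
        "green": (0.1596, 0.8404),
        "blue":  (0.0366, 0.0001),
    }
    for name, (x, y) in primaries.items():
        X, Y, Z = xyY_to_XYZ(x, y)
        print(f"{name:5s}  X = {X:8.1f}  Y = {Y:3.1f}  Z = {Z:8.1f}")

    # Blue comes out around X = 366, Z = 9633 for Y = 1: a chromaticity with
    # an almost-zero share of luminance, well outside the spectral locus, so
    # no real stimulus can produce it.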


>> be ideal. Yet that's the current workflow. Most photographers who use ProPhoto RGB are
>> relatively satisfied by the results they get with existing profiles.
> 
> I'd be fascinated to know how they manage that. The few occasions people
> have brought it to my notice, they had problems with dull looking
> results until they moved to an image gamut workflow.

I haven't experienced this. In fact, I have my class take a reference photo containing a wide variety of colors and saturation levels, render the Raw file separately to sRGB, Adobe RGB, and ProPhoto RGB, and then produce an inkjet print of each of those three using each of three rendering intents, so they can see the variation in the results. There is very, very little discernible difference between them. It takes a rare material or dye to trigger big differences between the spaces, almost always blue/violet or highly saturated cyans. Everything else just looks the same.
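
For anyone who wants to script that exercise rather than run it by hand, here is a minimal sketch using Pillow's littlecms bindings. The profile file names (including the printer profile) and the rendered TIFF names are placeholders, and it assumes an RGB printer profile, as most desktop inkjet profiles are.

    from PIL import Image, ImageCms

    # Placeholder file names: the raw file rendered into each working space,
    # plus a hypothetical inkjet/paper profile (assumed to be RGB).
    sources = {
        "sRGB":     "sRGB.icc",
        "AdobeRGB": "AdobeRGB1998.icc",
        "ProPhoto": "ProPhoto.icm",
    }
    intents = {
        "perceptual": ImageCms.INTENT_PERCEPTUAL,
        "relcol":     ImageCms.INTENT_RELATIVE_COLORIMETRIC,
        "saturation": ImageCms.INTENT_SATURATION,
    }
    printer = ImageCms.getOpenProfile("MyInkjet_GlossyPaper.icc")

    for space, profile_path in sources.items():
        im = Image.open(f"reference_{space}.tif").convert("RGB")
        src = ImageCms.getOpenProfile(profile_path)
        for label, intent in intents.items():
            out = ImageCms.profileToProfile(im, src, printer,
                                            renderingIntent=intent)
            out.save(f"print_{space}_{label}.tif")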

However, I am curious about this image gamut workflow and what the tools are to do this.



> 
>> I think better is possible but again I think the discussion of v4 + PRMG is a long road
>> possibly to no where conversation because there appears to be no support. If there
>> were, I think we're talking about maybe a fine tuning heuristic. It's not like the
>> effort behind sRGB.
> 
> I'm afraid that the PRMG is way down my list, primarily because I see its main
> use case being "office" and un-demanding users. Anybody after precision, quality
> and control won't want to use it.

The whole point of the perceptual intent is to deal with images that contain a lot of colors that are out of gamut for printing. You cannot print them precisely, and there is no one right way to print them. Perceptual isn't about accurate printing; in such cases accurate printing usually looks wrong. We can't actually colorimetrically reproduce the captured scene on the printed page.

> 
>> And the challenge of HDR is even greater than that of what we're presently talking
>> about with ProPhoto RGB to a print space, both of which are output-referred.
> 
> "input/output" referred are pretty blunt descriptions. ProPhoto cannot actually
> be output referred, since it has an impossible to reproduce primary (blue) :-)

Image state has more to do with dynamic range than with the chosen primaries. ProPhoto has the same dynamic range as sRGB. There are pretty clear descriptions of the various image states; Ann has a presentation, and I think there's an ISO spec somewhere that discusses them. Really we should say "capture-referred" or "camera-referred" when talking about Raw files, but for whatever reason "scene-referred" is taking off instead. To me, scene-referred means the full dynamic range of the actual scene, which is substantially greater than what we see, which in turn is greater than what can be captured. All of which is greater by orders of magnitude than what can be printed.

The fact that ProPhoto has an imaginary blue primary doesn't mean it can't be output-referred; by that logic it couldn't be scene-referred either, since such a color doesn't exist in any scene.


> It's not actually the encoding colorspace choice that makes it input/output referred,
> it's the intent of the rendering done (if any) in converting something to that
> space (although some colorspaces do have viewing condition expectations
> attached to them, a connected but independent aspect.) ie. the way I see it,
> as soon as you start treating a colorspace as an actual device, and re-render the
> image contents to take advantage of that colorspace and work within its
> limitations, it's "output" referred.


Yeah, but keep in mind that dynamic range is a much, MUCH bigger issue than gamut in distinguishing Raw captures from output devices. The difference in color gamut between a camera's capture of a scene and an inkjet printer is relatively small compared to the difference in dynamic range between capture and print, which is very substantial.
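
To put rough numbers on that, here is a trivial stops calculation; the contrast ratios are ballpark figures for illustration, not measurements.

    import math

    # Ballpark scene-to-print contrast ratios; real values vary widely.
    contrast = {
        "sunlit scene with deep shade": 100_000,
        "raw capture":                  16_000,
        "glossy inkjet print":          250,
    }
    for name, ratio in contrast.items():
        print(f"{name:30s} {ratio:>7}:1 = {math.log2(ratio):4.1f} stops")

    # Roughly 17 stops of scene versus 8 stops of print, orders of magnitude
    # apart, while the gamut volumes differ by a far smaller factor.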

What has brought us to 32bpc floating-point mode in Photoshop is not ProPhoto RGB, but rather compositing Raw files to build HDR images.
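
For context, here is a minimal sketch of the kind of exposure merge that drives the 32-bit float workflow: a simplistic weighted average in linear light, not Photoshop's actual algorithm, with made-up file names and exposure times.

    import numpy as np

    # Made-up inputs: linear-light frames (e.g. demosaiced raw data scaled
    # to 0..1) and their exposure times in seconds.
    frames = [np.load(f"frame_{i}.npy") for i in range(3)]
    times = [1 / 500, 1 / 60, 1 / 8]

    def weight(v):
        # Trust mid-tones most; near-black is noisy, near-white is clipped.
        return np.clip(1.0 - np.abs(v - 0.5) * 2.0, 0.05, 1.0)

    num = np.zeros_like(frames[0], dtype=np.float32)
    den = np.zeros_like(frames[0], dtype=np.float32)
    for img, t in zip(frames, times):
        w = weight(img).astype(np.float32)
        num += w * (img / t).astype(np.float32)  # scale to scene-linear radiance
        den += w

    hdr = num / den  # 32-bit float, scene-referred radiance estimate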



Chris Murphy

