[Openicc] XICC specification draft

Craig Ringer craig at postnewspapers.com.au
Tue Jun 28 15:23:35 EST 2005


On Mon, 2005-06-27 at 22:51 -0600, Chris Murphy wrote:

> And even if the video cards  
> were of the same make/model I can't say with confidence the combined  
> analog circuitry from the OS, to the card, through the cable, to the  
> display and within the display will result in sufficiently similar  
> behavior. Maybe they will, maybe they won't.

Many video cards now support multiple heads. I have one such card,
though I'm not currently using a second head. The first connector is
D-SUB VGA, the second is DVI. The colour difference depending on which
output a monitor is connected to (via a DVI->VGA adapter in the case of
the DVI port) is dramatic.

It would probably be even more dramatic for two identical LCDs that
support both DVI and VGA, where one is getting a digital signal and the
other is processing VGA input.

This card configuration is currently very common on mid to high range
video cards, at least consumer ones. I understand many pro 3D cards have
two DVI outputs instead. Nonetheless, even if both displays are on the
same card it is not at all safe to assume identical colour.

> CM in multiple monitor configurations becomes easier if there is  
> something other than the application responsible for display  
> compensation, and the windowing system is device independent.

I tend to agree. The discussion of using the composite manager for
colour correction is very interesting. I'd been wondering if, in the
long term, an X server extension would do the job, but an alternative
like that sounds good to me.

In the long term it does seem that, however it's done, pushing at least
some of the colour management responsibility out to the X server or a
tool like the composite manager is probably desirable.

Here's my understanding of the situation:

[For the sake of brevity, I'll use 'X server' when referring to 'X
server and/or composite manager, or whatever is responsible for doing
server-side colour management', below.]

My worry is that if the X server takes, eg, sRGB and transforms it to
the output device colour spaces, many apps that know about the colour
profiles of their inputs are going to have to convert once from the
input colour space to sRGB, then let the X server convert to the output
device's space. My puny knowledge of colour management leads me to worry
about gamut compression and loss of precision due to the double
conversion involved here. Is this likely to be a serious issue in the
real world?
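The worry can be illustrated with a toy sketch. This is not real ICC
machinery - simple power curves stand in for the profiles, and the
function names are invented - but it shows how routing through a
quantised 8-bit sRGB intermediate can land on different device codes
than a single direct conversion would:

```python
# Toy model: simple gamma curves stand in for ICC transforms
# (an assumption; real CMM behaviour is more complex).

def encode(value, gamma):
    """Encode a linear value in [0, 1] with a power curve, quantised to 8 bits."""
    return round((value ** (1.0 / gamma)) * 255)

def decode(code, gamma):
    """Decode an 8-bit code back to linear light."""
    return (code / 255.0) ** gamma

def direct(value, device_gamma):
    """App converts straight from its input to the device encoding."""
    return encode(value, device_gamma)

def via_srgb(value, device_gamma, srgb_gamma=2.2):
    """App -> 8-bit sRGB-ish intermediate -> server -> device encoding."""
    intermediate = encode(value, srgb_gamma)
    return encode(decode(intermediate, srgb_gamma), device_gamma)

# Count input codes where the two routes disagree on the final device code.
mismatches = sum(
    1 for i in range(256)
    if direct(i / 255.0, 1.8) != via_srgb(i / 255.0, 1.8)
)
print(mismatches)
```

With these made-up curves the count comes out nonzero: the extra
quantisation step shifts some codes by one, which is exactly the
precision loss (quite apart from any gamut compression) that a
double conversion risks.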

If so, I imagine pro apps would need some way to avoid this. One option
would be to let apps bypass the X server's colour management by
requesting that the colours they send be used untransformed. They'd also
need to be able to query the X server for the profiles it's using.
Alternatively, apps might hand the server data in their input colour
spaces, along with the matching profiles, so it could transform them
once before display.

The former gives apps maximum control and makes things easier for
developers of portable applications. That's evidently important - look
at how Photoshop and friends largely ignore the OS's ColorSync CMM on
Mac OS in favour of their own. It forces more complexity into the app,
but for the near future most apps will need built-in CMS support to
handle other platforms and older displays without native CM support
anyway. Centralised configuration is harder, but can be handled through
approaches like the XICC atom, XSettings, etc. On the upside, the X
server doesn't need to know about non-RGB colour spaces or some of the
weirder types of profile used for some inputs. Network transfer sizes
would also be kept down, both for profiles and input data.
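For the atom-based route, the sketch below shows roughly what a client
would do: fetch the profile published as the _ICC_PROFILE property on
the root window (the property name discussed for the XICC spec) and
sanity-check the bytes against the ICC header layout. The fetch assumes
python-xlib and a running X server; only the header check is plain byte
work:

```python
import struct

def looks_like_icc(data):
    """Minimal ICC header sanity check: at least a 128-byte header,
    the 'acsp' signature at offset 36, and the big-endian size field
    in bytes 0-3 matching the actual length."""
    if data is None or len(data) < 128 or data[36:40] != b"acsp":
        return False
    (declared_size,) = struct.unpack(">I", data[:4])
    return declared_size == len(data)

def fetch_root_profile():
    """Illustrative fetch via python-xlib (an assumption); needs a
    running X server, so it is not called here."""
    from Xlib import display, X
    d = display.Display()
    root = d.screen().root
    atom = d.intern_atom("_ICC_PROFILE")
    prop = root.get_full_property(atom, X.AnyPropertyType)
    return bytes(prop.value) if prop else None
```

Note that even this minimal scheme means shipping the whole profile
blob through the server - small for matrix/TRC display profiles, but
the transfer-size point above applies once larger LUT-based profiles
are involved.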

The second approach - pushing things into the X server / composite
manager - also seems to have some attractive features. The CMS
implementation can be shared, as can all the CMM memory overheads etc.
Centralised configuration is simplified, and the X server can handle
things like display changes (laptop user plugs in external display, for
example) without bugging applications. The X server / composite manager
could also make sensible decisions about the optimal way to handle the
transforms depending on available resources (eg shaders on high end
video cards, otherwise use CPU vector unit if present, otherwise fall
back on basic implementation).
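However the server implements it (shaders, vector units, or plain CPU),
the cheapest form of the idea looks a lot like a video-card gamma ramp:
one lookup table per channel, applied to every pixel on the way out, so
applications never see the device transform at all. A toy sketch, with
invented names and a power curve standing in for a real device
transform:

```python
def build_gamma_lut(gamma, size=256):
    """One 8-bit-in / 8-bit-out ramp approximating a device transform."""
    return [round(((i / (size - 1)) ** gamma) * (size - 1)) for i in range(size)]

def apply_luts(pixel, luts):
    """What the server would do per pixel: one table lookup per channel."""
    return tuple(lut[channel] for channel, lut in zip(pixel, luts))

# Per-channel ramps; a real server would derive these from the profiles.
luts = [build_gamma_lut(g) for g in (1.0, 1.1, 0.9)]
print(apply_luts((128, 128, 128), luts))
```

Per-channel tables only capture TRC-style corrections, of course; full
profile transforms (3D LUTs, matrix plus curves, non-RGB inputs) are
where the shader or CPU fallback question really bites.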

On the downside, potentially large input colour profiles have to be sent
across the wire, as must large input colour data (think 16 bit per
channel CMYK) and the X server / composite manager must know about a
variety of colour spaces solely to support transforms to device colour.
The XPrint folks would probably be happy, though. There's also the risk
of locking in something too inflexible and being stuck with it,
something that seems to have been carefully avoided in the past.
Finally, a slow X terminal won't want to be doing expensive colour
transforms.

I don't know the "right answer" if there is one. Does anybody here know
of real world examples of either of the two approaches outlined above
that can be looked at? The only ones I can think of are OS-level CMS
implementations, eg ColorSync on Mac OS X, but those don't really map
well onto either of the above.

--
Craig Ringer



