2D antialiased graphics using OpenGL

Martijn Sipkema msipkema@sipkema-digital.com
Fri, 12 Dec 2003 00:05:04 +0100


> > > I mean, you make a reasonable argument.  If framebuffers were linear
> > > intensity values rather than voltage values, a lot of things would
> > > be easier, including making 'naive' hardware
> > > implementations of algorithms such as compositing correct.
> > > All of this is true.
> > >
> > > But I don't buy the argument that OpenGL expects a linear
> > > framebuffer. [...]
> > 
> > Actually, I don't think OpenGL really cares about gamma, but the way
> > blending and smooth shading are defined is only correct for a
> > gamma-corrected framebuffer. It is _impossible_ to do rendering in a
> > nonlinear color space without a loss in performance; e.g. write-only
> > algorithms would now need to be read-write. It is IMHO better to just
> > use a higher-resolution framebuffer.
> 
>   Well, up until now I have avoided the higher-bit framebuffer issue. I
> kind of assumed that if you had a 10-bit-per-channel pixel format, it
> would be linear and all of the advantages would hold. Similarly if we
> had a floating-point framebuffer. I don't know if either is true.
> 
>   But using the LUTs and screwing all of the 2D code that currently
> assumes sRGB seems wrong and broken, just as using 8-bit-per-channel
> linear intensity pixels is a horrible loss in quality.

I don't think an application should assume an uncorrected framebuffer. An
application doing OpenGL rendering can't assume anything other than a
corrected framebuffer. And I think 2D code should use OpenGL for rendering
as well.
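
To make that concrete, here is a quick sketch of what goes wrong when
blending runs directly on gamma-encoded values. It assumes an idealized
power-law monitor gamma of 2.2 (real monitors and sRGB differ slightly):

#include <stdio.h>
#include <math.h>

#define GAMMA 2.2 /* assumed idealized monitor response */

/* decode a gamma-encoded framebuffer value to linear light intensity */
static double decode(double v) { return pow(v, GAMMA); }

int main(void)
{
    double src = 1.0, dst = 0.0, a = 0.5; /* 50% white over black */

    /* blending as OpenGL defines it, on linear intensities */
    double intended = a * decode(src) + (1.0 - a) * decode(dst);

    /* the same blend applied naively to gamma-encoded values */
    double naive = a * src + (1.0 - a) * dst;

    printf("intended emitted intensity: %.3f\n", intended);
    printf("naive blend emits:          %.3f\n", decode(naive));
    return 0;
}

The naive blend emits only about 22% of full intensity where 50% was
intended, which is why a nonlinear framebuffer breaks OpenGL blending.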

So I think 2D code assuming sRGB is broken, and I doubt the loss in
quality is really that bad...
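
To put a number on that loss, here is a sketch (again assuming a simple
gamma of 2.2; my numbers, not anything from this thread) counting how many
of the 256 8-bit codes fall in the darkest tenth of the intensity range
under each encoding:

#include <stdio.h>
#include <math.h>

int main(void)
{
    int linear = 0, gamma = 0;

    for (int i = 0; i < 256; i++) {
        if (i / 255.0 <= 0.1)
            linear++;                /* code taken as linear intensity */
        if (pow(i / 255.0, 2.2) <= 0.1)
            gamma++;                 /* code taken as gamma-2.2 encoded */
    }
    printf("8-bit codes in the darkest tenth: linear %d, gamma %d\n",
           linear, gamma);
    return 0;
}

This prints 26 codes for linear versus 90 for gamma encoding, so the
question is really whether 26 levels in the shadows are enough.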

Also, an imaging application will probably want to do more than just
gamma correction and will need to know the monitor details and
framebuffer gamma.
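
As a purely illustrative sketch (the build_lut name and the gamma values
are placeholders, not from this thread), such an application might fold
the framebuffer and monitor responses into a single lookup table:

#include <math.h>

/* build a 256-entry LUT taking framebuffer values (gamma-encoded with
 * fb_gamma) to the signal needed for a monitor whose response is
 * intensity = signal^monitor_gamma */
static void build_lut(unsigned char lut[256],
                      double fb_gamma, double monitor_gamma)
{
    for (int i = 0; i < 256; i++) {
        double linear = pow(i / 255.0, fb_gamma);         /* decode */
        double signal = pow(linear, 1.0 / monitor_gamma); /* re-encode */
        lut[i] = (unsigned char)(signal * 255.0 + 0.5);
    }
}

int main(void)
{
    unsigned char lut[256];
    build_lut(lut, 2.2, 1.8); /* placeholder gammas, for illustration */
    return 0;
}

A real application would of course use measured monitor curves rather
than a single power-law exponent.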

--ms