2D antialiased graphics using OpenGL

Allen Akin akin@pobox.com
Wed, 10 Dec 2003 10:26:53 -0800


On Tue, Dec 09, 2003 at 08:12:03PM +0100, Martijn Sipkema wrote:
| >...
| > Drawing front-to-back using GL_SRC_ALPHA_SATURATE.
| 
| This method requires a destination alpha channel. Does available hardware
| have this? ...

On the desktop it's been standard for the last several generations of
hardware.  Can't speak for the current state of handhelds and embedded
systems, though I know of at least one major vendor that was taping out
a chip with support for it last year.

| > Multipass using accumulation buffering.
| 
| I don't think this one is best suited for 2d...

There's nothing intrinsically 3D about it.  I agree that it's probably
not the best choice, but that's because hardware support for it is
uncommon.  

| > Multipass using blending.
| 
| Could you explain this one a little more?

Sure.

Ignoring the theory for the moment, the implementation is to draw each
primitive multiple times, offset by subpixel (x,y) amounts each time,
and scaled by a weighting factor.  Set up the blending function and
blend equation to either add ("paint") or subtract ("erase") the product
of the primitive color and the weight.

To make this most efficient for 2D, use vertex buffer objects and
preferably vertex programs to minimize the geometry-processing load.
However, even graphics cards several generations back are plenty fast
enough to handle large amounts of geometry with the fixed-function
pipeline.  The NVIDIA Ti4200 in the box I'm using to write this note
will render 24M small textured fully-transformed 3D triangles per
second, and it's the low-end card from the previous generation.
(Pricewatch says you can buy it for $69 today; there are also cheaper
options that might be sufficient for 2D.)

You can see that this won't work with any compositing arithmetic that
can't be evaluated incrementally.  (Which is one of several reasons that
I think it's risky to make compositing the fundamental approach to scene
generation.)  However, it's fast, works well with "incremental quality
improvement" interactive methods, can produce good-quality results,
leverages the hardware effectively across old and new graphics cards,
and works for all primitives.  You can use it to generate a mask for the
composition operator cases that it can't handle directly.

The algorithm is supersampling at a high resolution followed by low-pass
filtering with an FIR digital filter.  To get the best results, you need
to choose the interpolation factors and noise injection carefully (by
selecting the pattern of subpixel offsets) and choose a good filter
kernel (by selecting the right pass-band, limiting the noise introduced
by filter coefficient quantization, and ordering the blending to
minimize or eliminate arithmetic overflow).  Making sure that gamma
correction is handled correctly and only after blending is complete is
also important.  These are all back-burner projects I've been playing
with, partly to see if it's practical to render all text as geometric
primitives rather than relying on software rasterization of outlines to
form pixmaps.  (Handling everything as geometry is good for textured
primitives, resolution independence, arbitrary scaling and other
transformations, etc.)

Render-to-texture plus fragment programs make many more implementations
possible, including IIR and separable FIR filtering, but I haven't
looked into that yet.

One of the points I want to keep emphasizing is that the graphics
hardware guys are going whole-hog into programmability, and Longhorn in
particular is going to leverage that as much as possible.  If we focus
too much on a fixed-function design, we'll be playing catch-up again in
a couple of years.

| I think, in theory, multisampling could also sample so as to improve
| display on an LCD monitor. This is one of the best ways to do
| antialiasing for 2d and 3d graphics I think, if supported by the
| hardware.

The hardware guys certainly agree.  And so do I, mostly.  Multisampling
won't satisfy everyone's requirements for small text quality today; if
it were to handle that in a future generation, it would definitely be
the way to go.

Allen