2D antialiased graphics using OpenGL

Martijn Sipkema msipkema@sipkema-digital.com
Thu, 11 Dec 2003 12:30:54 +0100


> | > Drawing front-to-back using GL_SRC_ALPHA_SATURATE.
> |
> | This method requires a destination alpha channel. Does available
> | hardware have this? ...
>
> On the desktop it's been standard for the last several generations of
> hardware.  Can't speak for the current state of handhelds and embedded
> systems, though I know of at least one major vendor that was taping-out
> a chip with support for it last year.

Well, that would mean that using this method seems viable, even if smooth
triangles are likely rendered in software on some cards. I've taken a quick
look at cairo, and if I understand correctly it draws back-to-front, which
means that filling a path would have to be done in a separate buffer and
then copied to the main buffer using
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA). Having an auxiliary buffer for
this would ease the implementation quite a bit I think, but apart from that
this method should be fast. I did test drawing using GL_SRC_ALPHA_SATURATE
with Mesa/XFree86 (Mesa software rendering) and it was terribly slow...
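
Roughly what I have in mind (just a sketch; draw_path_triangles() and
draw_aux_buffer_quad() are placeholder names, and the auxiliary buffer is
assumed to have a destination alpha channel cleared to zero):

    #include <GL/gl.h>

    void draw_path_triangles(void);   /* tessellated path, front to back */
    void draw_aux_buffer_quad(void);  /* aux buffer drawn as textured quad */

    /* Step 1: accumulate antialiased coverage in the auxiliary buffer. */
    void fill_path_aa(void)
    {
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        glEnable(GL_POLYGON_SMOOTH);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE);

        draw_path_triangles();

        glDisable(GL_POLYGON_SMOOTH);
    }

    /* Step 2: treat the auxiliary buffer as premultiplied alpha and
     * composite it over the main buffer. */
    void composite_aux_over_main(void)
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        draw_aux_buffer_quad();
    }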

> | > Multipass using accumulation buffering.
> |
> | I don't think this one is best suited for 2d...
>
> There's nothing intrinsically 3D about it.  I agree that it's probably
> not the best choice, but that's because hardware support for it is
> uncommon.

Yes, I think this is very slow on most hardware and for 2D this is
overkill. The quality will also be less than when using smooth triangles.

> | > Multipass using blending.
> |
> | Could you explain this one a little more?
>
> Sure.
>
> Ignoring the theory for the moment, the implementation is to draw each
> primitive multiple times, offset by subpixel (x,y) amounts each time,
> and scaled by a weighting factor.  Set up the blending function and
> blend equation to either add ("paint") or subtract ("erase") the product
> of the primitive color and the weight.

[...]
> You can see that this won't work with any compositing arithmetic that
> can't be evaluated incrementally.  (Which is one of several reasons that
> I think it's risky to make compositing the fundamental approach to scene
> generation.)  However, it's fast, works well with "incremental quality
> improvement" interactive methods, can produce good-quality results,
> leverages the hardware effectively across old and new graphics cards,
> and works for all primitives.  You can use it to generate a mask for the
> composition operator cases that it can't handle directly.
>
> The algorithm is supersampling at a high resolution followed by low-pass
> filtering with an FIR digital filter.

That's basically what multisampling does as well, but without the extra
memory, which, if I understand it correctly, means you can't use this
algorithm to draw over existing pixels; you'd have to render to a temporary
buffer to do that, right?
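
If I understand it correctly, the multipass blending would look roughly
like this for a 2D scene (a sketch only; the 4-sample jitter pattern and
draw_scene(), which is expected to scale its colors by the given weight,
are placeholders):

    #include <GL/gl.h>

    void draw_scene(float weight);   /* draws primitives scaled by weight */

    static const struct { float x, y; } jitter[4] = {
        { 0.125f, 0.375f }, { 0.375f, 0.875f },
        { 0.625f, 0.125f }, { 0.875f, 0.625f }
    };

    void draw_antialiased(int w, int h)
    {
        const int passes = 4;
        const float weight = 1.0f / passes;
        int i;

        glClear(GL_COLOR_BUFFER_BIT);
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);            /* add ("paint") */

        for (i = 0; i < passes; i++) {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            /* offset the whole frame by a subpixel amount */
            glOrtho(-jitter[i].x, w - jitter[i].x,
                    -jitter[i].y, h - jitter[i].y, -1.0, 1.0);
            glMatrixMode(GL_MODELVIEW);
            draw_scene(weight);
        }
        glDisable(GL_BLEND);
    }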

> To get the best results, you need
> to choose the interpolation factors and noise injection carefully (by
> selecting the pattern of subpixel offsets) and choose a good filter
> kernel (by selecting the right pass-band, limiting the noise introduced
> by filter coefficient quantization, and ordering the blending to
> minimize or eliminate arithmetic overflow).

This all sounds reasonably complex :). Using GL_SRC_ALPHA_SATURATE is
probably easier.

> Making sure that gamma
> correction is handled correctly and only after blending is complete is
> also important.

Handling gamma correction in the framebuffer is a pain. Can't we just
assume that the framebuffer is gamma corrected?
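
To illustrate why it is a pain (back-of-the-envelope only, assuming a
nominal gamma of 2.2): a pixel half covered by white over black should not
end up as 0.5 in a gamma-encoded framebuffer.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double coverage = 0.5;                     /* white over black */

        /* blending directly in the gamma-encoded framebuffer */
        double naive = coverage;                   /* 0.50, too dark */

        /* blending in linear light, encoding afterwards */
        double correct = pow(coverage, 1.0 / 2.2); /* ~0.73 */

        printf("naive %.2f, gamma-aware %.2f\n", naive, correct);
        return 0;
    }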

> These are all back-burner projects I've been playing
> with, partly to see if it's practical to render all text as geometric
> primitives rather than relying on software rasterization of outlines to
> form pixmaps.  (Handling everything as geometry is good for textured
> primitives, resolution independence, arbitrary scaling and other
> transformations, etc.)

I doubt handling fonts as geometric objects will result in high-quality
rendering of small fonts, since it can't do hinting.

> Render-to-texture plus fragment programs make many more implementations
> possible, including IIR and separable FIR filtering, but I haven't
> looked into that yet.
>
> One of the points I want to keep emphasizing is that the graphics
> hardware guys are going whole-hog into programmability, and Longhorn in
> particular is going to leverage that as much as possible.  If we focus
> too much on a fixed-function design, we'll be playing catch-up again in
> a couple of years.

Sure, but relatively simple antialiased graphics shouldn't require
programmability.

> | I think, in theory, multisampling could also sample so as to improve
> | display on an LCD monitor. This is one of the best ways to do
> | antialiasing for 2d and 3d graphics I think, if supported by the
> | hardware.
>
> The hardware guys certainly agree.  And so do I, mostly.  Multisampling
> won't satisfy everyone's requirements for small text quality today; if
> it were to handle that in a future generation, it would definitely be
> the way to go.

One way to draw antialiased fonts is to have the font coverage as the alpha
channel of a texture. This will produce high-quality and fast font
rendering, I think, but it doesn't take into account the location of the
pixel components on a TFT screen. I think that using this method of font
rendering with MULTISAMPLE enabled could, if supported by the hardware,
have the output optimised for the screen. I'm not sure though; sampling
outside of the pixel rectangle is allowed, but I'm not sure separate weight
factors for the color channels are. Anyway, I thought it would be nice to
have screen optimisation at this level, because I don't think it belongs on
the application side...
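
For the texture-based glyph drawing, something like this is what I mean (a
sketch only; the coverage bitmap is assumed to come from the font
rasterizer and to have power-of-two dimensions):

    #include <GL/gl.h>

    /* Draw one glyph whose coverage is stored in the alpha channel of a
     * GL_ALPHA texture; the text color comes from the current color. */
    void draw_glyph(GLuint tex, const unsigned char *coverage,
                    int gw, int gh, float x, float y)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, gw, gh, 0,
                     GL_ALPHA, GL_UNSIGNED_BYTE, coverage);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glEnable(GL_TEXTURE_2D);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(x,      y);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(x + gw, y);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(x + gw, y + gh);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(x,      y + gh);
        glEnd();
    }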

Using a high-resolution bitmap alpha-channel texture for the font and using
multisample rendering for antialiasing might also work, but I doubt this
would be of the same quality as having the font rasterizer do the
antialiasing.

--ms