2D antialiased graphics using OpenGL
Allen Akin
akin@pobox.com
Thu, 11 Dec 2003 15:44:52 -0800
On Thu, Dec 11, 2003 at 12:30:54PM +0100, Martijn Sipkema wrote:
| > | > Multipass using accumulation buffering.
| ...
| Yes, I think this is very slow on most hardware, and for 2D it is
| overkill. The quality will also be lower than when using smooth
| triangles.
That depends on how well the OpenGL implementation generates coverage
information for the smooth triangle edges. It varies a lot from
implementation to implementation. (I don't mean to discourage anyone
from using it, just be aware that you may need a fallback.)
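For concreteness, the smooth-triangle path is the standard
coverage-to-alpha one; a minimal sketch, assuming the visual has
destination alpha and the triangles can be drawn front to back
(drawTrianglesFrontToBack() is a hypothetical placeholder):

    /* Edge coverage goes into source alpha; saturate blending
       accumulates abutting edges without double-counting.  Requires
       destination alpha, and depth testing must be off. */
    glEnable(GL_POLYGON_SMOOTH);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE);
    glDisable(GL_DEPTH_TEST);
    drawTrianglesFrontToBack();   /* hypothetical */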
| > The algorithm is supersampling at a high resolution followed by low-pass
| > filtering with an FIR digital filter.
|
| That's basically what multisampling does also, but without the extra memory,
Typically multisampling cuts other corners as well -- for example, by
performing one texture filtering operation per fragment, rather than one
per sample. Thus multipass can yield higher-quality results for
extremely anisotropic filtering situations.
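A minimal sketch of the multipass version, assuming an accumulation
buffer is available; drawScene(), winWidth/winHeight, and the jitter[]
table of subpixel offsets are placeholders (one possible table appears
further down):

    #include <GL/gl.h>

    #define PASSES 4
    extern const float jitter[PASSES][2]; /* subpixel offsets, pixels */
    extern void drawScene(void);          /* hypothetical */
    extern int winWidth, winHeight;       /* viewport size */

    void drawAntialiased(void)
    {
        GLfloat proj[16];
        int i;

        glGetFloatv(GL_PROJECTION_MATRIX, proj);
        glClear(GL_ACCUM_BUFFER_BIT);
        for (i = 0; i < PASSES; i++) {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            /* One pixel is 2/width by 2/height in NDC. */
            glTranslatef(jitter[i][0] * 2.0f / winWidth,
                         jitter[i][1] * 2.0f / winHeight, 0.0f);
            glMultMatrixf(proj);
            glMatrixMode(GL_MODELVIEW);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            drawScene();
            glAccum(GL_ACCUM, 1.0f / PASSES); /* box-filter weight */
        }
        glMatrixMode(GL_PROJECTION);
        glLoadMatrixf(proj);                  /* restore projection */
        glMatrixMode(GL_MODELVIEW);
        glAccum(GL_RETURN, 1.0f);             /* write filtered result */
    }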
| which, if I understand it correctly, means you can't use this algorithm
| to draw over existing pixels; you'd have to render to a temporary
| buffer to do that, right?
No, you can draw over existing pixels (but remember you have to use
blending methods that can be evaluated incrementally, like simple
addition and subtraction).
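A sketch of what that looks like in practice, blending the jittered
passes straight over the existing pixels (PASSES as above; weighting
through the alpha channel assumes untextured geometry, and red, green,
blue are placeholders):

    /* Each pass adds color/PASSES on top of whatever is already in
       the framebuffer; because addition is evaluated incrementally,
       no temporary buffer is needed.  Subtraction works the same way
       via glBlendEquationEXT(GL_FUNC_REVERSE_SUBTRACT_EXT) from
       EXT_blend_subtract. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);          /* additive */
    glColor4f(red, green, blue, 1.0f / PASSES); /* per-pass weight */
    /* ...draw one jittered pass... */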
| > To get the best results, you need
| > to choose the interpolation factors and noise injection carefully (by
| > selecting the pattern of subpixel offsets) and choose a good filter
| > kernel (by selecting the right pass-band, limiting the noise introduced
| > by filter coefficient quantization, and ordering the blending to
| > minimize or eliminate arithmetic overflow).
|
| This all sounds reasonably complex :). ...
It may sound that way, but it really only has to be figured out once.
It's not something you have to compute for every rendering.
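To make that concrete, one possible precomputed pattern: a 4-sample
rotated-grid set of subpixel offsets with equal weights (a common
choice, not the only one):

    /* Rotated-grid offsets in pixels, relative to the pixel center.
       Equal 1/PASSES weights make every ordering of the blending sum
       to at most 1.0, so no intermediate result can overflow. */
    const float jitter[PASSES][2] = {
        {  0.125f,  0.375f },
        {  0.375f, -0.125f },
        { -0.125f, -0.375f },
        { -0.375f,  0.125f },
    };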
| > Making sure that gamma
| > correction is handled correctly and only after blending is complete is
| > also important.
|
| Handling gamma correction in the framebuffer is a pain. Can't we just
| assume that the framebuffer is gamma corrected?
The problem is that imagery from different sources typically has
different gamma correction already applied. Anything coming from a
video camera (or most digital cameras these days) is gamma corrected, so
linear blending arithmetic can't be used with it. (Yet *another* reason
why compositing is a risky choice as a fundamental operation.) CGI is
typically *not* gamma corrected, because linear blending arithmetic and
color interpolation are necessary to create it.
So the bottom line is that CGI needs to be corrected after rendering is
complete, and other imagery usually needs to be remapped before being
composited. Programmability (particularly using texture lookup to
implement arbitrary mappings) and the gamma-corrected blending in
next-generation hardware will help a lot. Requiring a consistent
display environment (e.g. sRGB) is also worth talking about.
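A sketch of the remapping step, using a 256-entry lookup table to bring
gamma-corrected input back to linear light before blending (the 2.2
exponent is an assumption; real sources vary, and sRGB uses a piecewise
curve):

    #include <math.h>

    static unsigned char degamma[256];

    void buildDegammaTable(void)
    {
        int i;
        for (i = 0; i < 256; i++)  /* decode: linear = encoded^2.2 */
            degamma[i] = (unsigned char)
                (255.0 * pow(i / 255.0, 2.2) + 0.5);
    }
    /* Apply degamma[] to each channel before compositing; once all
       blending is done, apply the inverse mapping (exponent 1/2.2)
       exactly once for display. */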
| I doubt handling fonts as geometric objects will result in high-quality
| small-font rendering, as it can't do hinting.
Sure, you can do hinting. The resulting outlines are only useful at one
size and at particular pixel alignments, though. A good research
project would be to try vertex programming to implement hinting and
eliminate the constraints. Vertex programs have some significant
limitations, so it might not be easy, but if it could be worked out it
might be a big win.
| Sure, but relatively simple antialiased graphics shouldn't require
| programmability.
The future envisioned by the Longhorn designers is one in which lots of
applications will be controlling their appearance by using graphics
programs directly. The hardware designers are already a couple of years
down the road toward making that possible. One of the questions this
group should be answering is whether that's the model we want to adopt,
too.
Allen