Summary of the anti-aliasing issue

jonsmirl at gmail.com
Mon Nov 22 09:14:57 PST 2010


Right now, rendering engines draw into flat, rectangular buffers.
These buffers are then composited by Wayland using transforms. The
problem with this is that any transform (other than a simple copy)
done by Wayland will mess up the anti-aliasing computations made by
the rendering engine (or app).
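
To make the problem concrete, here is a toy C sketch of what the
compositor effectively does today when it scales or rotates a window:
it resamples pixels whose coverage the client already computed. This
is illustration only, not Wayland code.

#include <math.h>
#include <stdint.h>

/* Sample an 8-bit grayscale client buffer at a non-integer position,
 * as a scaling or rotating compositor has to do.  x and y are assumed
 * to lie inside the buffer. */
static uint8_t sample_bilinear(const uint8_t *buf, int w, int h,
                               float x, float y)
{
    int x0 = (int)floorf(x), y0 = (int)floorf(y);
    int x1 = x0 + 1 < w ? x0 + 1 : x0;
    int y1 = y0 + 1 < h ? y0 + 1 : y0;
    float fx = x - x0, fy = y - y0;

    float top = buf[y0 * w + x0] * (1 - fx) + buf[y0 * w + x1] * fx;
    float bot = buf[y1 * w + x0] * (1 - fx) + buf[y1 * w + x1] * fx;

    /* The client's carefully placed edge coverage gets averaged a
     * second time here, so sub-pixel-accurate glyph edges come out
     * soft or shifted. */
    return (uint8_t)(top * (1 - fy) + bot * fy + 0.5f);
}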

To fix this there needs to be another compositing mode. In this mode
Wayland tells the rendering engine (or app) the equation of the final
compositing transform. The rendering engine uses this transform to
draw its window already transformed the way Wayland needs it. Wayland
then takes these windows and just copies them onto the screen. Note
that the transforms in Wayland don't have to be planar 2D transforms;
they could distort the window into 3D space. That means the window
buffer handed back to Wayland needs to include a depth buffer. The
depth buffer makes sure everything gets sorted correctly when the
window is copied.
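
To make the data flow concrete, here is a rough C sketch of what a
client could hand back in this mode. Nothing like this exists in the
protocol today; every name here is made up for illustration.

#include <stdint.h>

/* Column-major 4x4 matrix advertised by the compositor; it may be a
 * full 3D projective transform, not just a planar 2D one. */
struct final_transform {
    float m[16];
};

/* What the client would attach instead of a plain color buffer. */
struct pretransformed_buffer {
    int32_t width, height;        /* in output (screen) pixels       */
    uint32_t *color;              /* ARGB, already anti-aliased      */
    float *depth;                 /* per-pixel depth for sorting     */
    struct final_transform used;  /* transform the client drew with  */
};

/* The compositor's job then degenerates to a depth-tested copy of
 * each pretransformed_buffer onto the screen. */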

This mode is easy for OpenGL apps. Just set the transform and turn on
hardware full-screen anti-aliasing.
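
A minimal sketch of that path, using GLFW purely to get a window and a
multisampled context (any EGL/GLX setup would do), with the
compositor-supplied matrix stubbed out as an identity placeholder:

#include <GLFW/glfw3.h>
#include <GL/gl.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    glfwWindowHint(GLFW_SAMPLES, 4);            /* request MSAA buffers */
    GLFWwindow *win = glfwCreateWindow(640, 480, "aa-demo", NULL, NULL);
    if (!win)
        return 1;
    glfwMakeContextCurrent(win);

    /* Placeholder for the transform the compositor would hand over. */
    const GLfloat compositor_transform[16] = {
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1,
    };

    glEnable(GL_MULTISAMPLE);   /* hardware full-screen anti-aliasing */
    glEnable(GL_DEPTH_TEST);    /* depth so the compositor's copy sorts */

    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glMatrixMode(GL_MODELVIEW);
        glLoadMatrixf(compositor_transform);    /* draw pre-transformed */

        /* ... normal scene drawing goes here ... */

        glfwSwapBuffers(win);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}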

Implementing this mode in existing text-based apps is much harder
because of the way current apps paint text on the screen. These
existing apps contain logic for painting clipped regions. They also
generate font glyphs with the assumption that they won't be
transformed by later stages of the graphics pipeline.

I think the simplest way to convert existing apps is to rip out the
paint function and replace it with a function that builds a scene
graph reflecting the contents of the paint region. This is just my
opinion and other solutions should be discussed. The scene graph would
be regenerated each time a paint is requested and passed into the
rendering engine, which would convert it to a bitmap with depth. Font
glyphs would be generated in the rendering engine, which would contain
the logic needed to make transformed, anti-aliased glyphs. I think
this is easier than teaching every app about a depth buffer. Also,
take a second and reflect on the implications of this for network
transparency. Of course, you can use the first mode to run these apps
without modification.
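
As a rough illustration (the names and node set are invented, not
taken from any existing toolkit), the converted paint function might
emit something like:

#include <stdint.h>

enum node_type {
    NODE_GROUP,     /* children clipped to a rectangle        */
    NODE_RECT,      /* solid-filled rectangle                 */
    NODE_TEXT,      /* glyphs left for the renderer to make   */
};

struct scene_node {
    enum node_type type;
    float x, y, w, h;               /* geometry in window coordinates */
    uint32_t color;                 /* ARGB fill / text color         */
    const char *utf8;               /* NODE_TEXT: the string itself;
                                       glyph rasterization happens in
                                       the rendering engine, after the
                                       final transform is known       */
    struct scene_node *first_child; /* NODE_GROUP children            */
    struct scene_node *next_sibling;
};

/* The app rebuilds this tree on every paint request and hands it to
 * the rendering engine, which turns it into a transformed,
 * anti-aliased bitmap with depth. */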

The easiest existing apps to convert to this model are web browsers.
Web browsers already use a scene graph internally, and some have
OpenGL backends. In that case they need to be taught to take the
transform from Wayland and use it to modify how the bitmaps are
generated. Pretty much everything will map cleanly except for glyph
generation. The code for making glyphs will need to be taught about
transformations. WebKit is the obvious place to start hacking.
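
FreeType, for example, already has the hook that glyph code would
need: FT_Set_Transform applies an affine matrix to the outline before
rasterization, so the anti-aliased coverage is computed for the
transformed shape instead of by resampling an upright bitmap
afterwards. A small sketch, with the font path, size and angle as
placeholders:

#include <ft2build.h>
#include FT_FREETYPE_H

int render_rotated_glyph(const char *font_path, unsigned long ch)
{
    FT_Library lib;
    FT_Face face;

    if (FT_Init_FreeType(&lib))
        return -1;
    if (FT_New_Face(lib, font_path, 0, &face))
        return -1;
    FT_Set_Pixel_Sizes(face, 0, 16);

    /* 30-degree rotation in 16.16 fixed point (cos=0.866, sin=0.5). */
    FT_Matrix rot = {
        .xx = (FT_Fixed)(0.866 * 0x10000), .xy = (FT_Fixed)(-0.5 * 0x10000),
        .yx = (FT_Fixed)(0.5 * 0x10000),   .yy = (FT_Fixed)(0.866 * 0x10000),
    };
    FT_Vector pen = { 0, 0 };       /* translation in 26.6 fixed point */

    FT_Set_Transform(face, &rot, &pen);

    /* The rasterizer now produces coverage for the transformed outline. */
    if (FT_Load_Char(face, ch, FT_LOAD_RENDER))
        return -1;

    /* face->glyph->bitmap holds the anti-aliased, already-rotated glyph. */
    FT_Done_Face(face);
    FT_Done_FreeType(lib);
    return 0;
}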

-- 
Jon Smirl
jonsmirl at gmail.com

