Xegl lives!

Zack Rusin zrusin at trolltech.com
Wed May 25 00:39:05 PDT 2005


On Tuesday 24 May 2005 23:39, Allen Akin wrote:
> On Tue, May 24, 2005 at 05:15:04PM -0400, Jim Gettys wrote:
> | Believe it or not, GL doesn't do everything needed for 2D
> | graphics.... Subpixel text, for one, though Allen Akin was
> | scratching his head to see if he could figure out how.
>
> Text is plenty doable, but the crux of the matter is what rendering
> model you choose to expose.
>
> For example, "draw an outline glyph with subpixel positioning and
> antialiasing using a Porter-Duff composition operator" may cause
> problems, because very high quality antialiasing of arbitrary
> geometry requires multipass rendering (even with high-end
> accelerators), and the rules of arithmetic don't permit some of the
> useful composition operators to be applied in multiple passes.  To
> get around this you need to render to an intermediate buffer and then
> composite the intermediate buffer with the target.  Doing this
> under-the-covers can be slow (Switch rendering to intermediate buffer
> and composite results for each glyph individually?) or
> difficult/fragile (How much memory should you dedicate to
> intermediate buffers in order to render multiple glyphs at a time?
> What about caching?).

As far as text rendering goes, Microsoft took an interesting approach 
with Sub-Pixel ClearType in Avalon. Basically they have a glyph cache 
(the glyphs start out as outlines, of course, and then get converted 
to bitmaps and cached). On DirectX 10+ (since that will be the first 
version to support 1bpp surface composition and filtering to 8bpp) 
the glyphs will be cached on the card; they claim it will take about 
2MB and have a very high reuse rate. On lower versions of DirectX the 
alpha bitmaps are all generated and cached in software (and they 
claim the overhead is about 5MB per process, with a little less 
reuse).
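To make the caching scheme concrete, here is a rough C sketch of what 
such an outline-to-bitmap glyph cache could look like. All the names 
here (including the rasterize_outline() helper) are made up for 
illustration; this is just my reading of the scheme, not Avalon's 
actual API:

    #include <stdlib.h>

    /* Hypothetical outline -> bitmap glyph cache.  Glyphs are
     * rasterized once and the resulting coverage bitmap is kept
     * around for reuse, which is where the high hit rate comes
     * from. */
    typedef struct {
        unsigned int   glyph_index;
        float          size;     /* rendering size in pixels */
        unsigned char *bitmap;   /* 1bpp coverage bitmap */
        int            width, height;
    } CachedGlyph;

    /* Assumed helper: rasterizes a glyph outline to a bitmap. */
    extern unsigned char *rasterize_outline(unsigned int glyph_index,
                                            float size, int *w, int *h);

    #define CACHE_SIZE 1024
    static CachedGlyph cache[CACHE_SIZE];

    static CachedGlyph *
    lookup_glyph(unsigned int glyph_index, float size)
    {
        unsigned int slot = (glyph_index * 31u + (unsigned int)size)
                            % CACHE_SIZE;
        CachedGlyph *g = &cache[slot];

        if (g->bitmap && g->glyph_index == glyph_index && g->size == size)
            return g;            /* hit: reuse the cached bitmap */

        /* miss: rasterize the outline once and cache the result */
        free(g->bitmap);
        g->glyph_index = glyph_index;
        g->size        = size;
        g->bitmap      = rasterize_outline(glyph_index, size,
                                           &g->width, &g->height);
        return g;
    }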
The individual glyphs are then composited onto a black-and-white 
bitmap for the run with the OR operation. The result gets filtered 
onto an alpha bitmap with a box kernel. Then, iirc, the alpha bitmap 
is overscaled 3x horizontally to support ClearType. Color, brush and 
transformation are applied, and finally the result is blended onto a 
surface. Also, iirc, they actually use a composited alpha value for 
each channel when blending. On DirectX 9 and up, pixel shaders are 
used for the ClearType RGB blending. It looks like a pretty decent 
approach.
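My reading of the above (and it is only my reading) is that the run 
bitmap is kept at 3x horizontal resolution and the box filter 
collapses it to one alpha value per color channel. In software that 
step could look roughly like this C sketch; the real filter and 
memory layout are presumably more elaborate:

    /* OR-composite a cached 1bpp glyph into the run bitmap, which
     * is stored one byte per subpixel (3 subpixels per pixel). */
    static void
    or_glyph(unsigned char *run, int run_stride,
             const unsigned char *glyph, int gw, int gh, int x, int y)
    {
        for (int j = 0; j < gh; j++)
            for (int i = 0; i < gw; i++)
                run[(y + j) * run_stride + (x + i)] |= glyph[j * gw + i];
    }

    /* Box-filter one run row down to per-channel alpha: each of the
     * R, G, B channels averages the 3 subpixel samples centered on
     * its own subpixel position (the "composited alpha for each
     * channel").  run_w3 is the row width in subpixels (3 * pixels). */
    static void
    filter_row(const unsigned char *run, int run_w3,
               unsigned char *alpha_rgb /* 3 bytes per pixel */)
    {
        for (int x = 0; x + 2 < run_w3; x += 3) {
            for (int c = 0; c < 3; c++) {
                int s     = x + c;
                int left  = s > 0 ? s - 1 : s;
                int right = s + 1 < run_w3 ? s + 1 : s;
                int sum   = run[left] + run[s] + run[right]; /* 0..3 */
                alpha_rgb[x + c] = (unsigned char)(sum * 255 / 3);
            }
        }
    }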
The alpha bitmap can be a temporary one existing only on the video 
card. I think for us the bigger problem might be the fact that 
drivers usually implement convolution filters by sampling a texture 
multiple times, and since this approach would need filters with 
kernels up to 8x8, it just won't be usable.
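To put a number on that concern: a driver that implements convolution 
by multi-sampling ends up doing something like the loop below for 
every destination pixel (a C model of the inner loop, not any 
particular driver's code). With an 8x8 kernel that is 64 texture 
fetches per output pixel:

    /* Model of convolution-by-multisampling: a KxK kernel means
     * K*K source samples per destination pixel, so K = 8 costs 64
     * fetches per pixel -- which is why large kernels done this
     * way are impractical. */
    static float
    convolve_pixel(const float *src, int w, int h, int x, int y,
                   const float *kernel, int K)
    {
        float acc = 0.0f;
        for (int j = 0; j < K; j++) {
            for (int i = 0; i < K; i++) {
                int sx = x + i - K / 2;
                int sy = y + j - K / 2;
                if (sx < 0) sx = 0; else if (sx >= w) sx = w - 1;
                if (sy < 0) sy = 0; else if (sy >= h) sy = h - 1;
                /* each term here is one texture sample */
                acc += src[sy * w + sx] * kernel[j * K + i];
            }
        }
        return acc;
    }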

Zack

-- 
All those who believe in telekinesis, raise my hand.


