XShape example?

Matthias Hopf mhopf at suse.de
Wed Nov 2 07:23:54 PST 2005


(sending to the list, because it might interest others as well)

On Nov 01, 05 10:29:55 +0900, Carsten Haitzler wrote:
> > > > opengl is quite "iffy" for 2d - especially if you want to use textures on
> > > > polys for replacements of pixmaps - different ogl implementations will
> > > > round texture coords differently so you may not end up with precisely the
> > > > same output on every driver - and sometimes the output is less than
> > > > desirable.
> > 
> > Not true. There are specific offsets that - together with nearest
> > neighbor interpolation - are pixel exact on every platform.

First, I never said this was easy ;)
I did a lot wrong with respect to this in the past as well...

> let me know what they are! seriously. try nvidia's opengl vs. DRI based opengl
> on i8xx - and use mipmapped textures for 2d gfx and scaling - and now since you
> are stuck with texture coords 0.0-1.0 and your image is 343x246 you have
> been forced to use a 512x256 texture and thus need to adjust your tex coords to
> max out at like 0.669921875 and 0.9609375 at the quad bottom-right, but even if

Of course I verified these numbers only after writing my explanation.
They seem to be right, but be sure to check what I wrote anyway...
BTW - you shouldn't use these literal double values but rather 343/512.0 and
246/256.0 in your code, but I guess you're doing that in a generic way anyway.
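Something like this, i.e. computed instead of hard-coded (a minimal sketch;
"img_w", "img_h", "tex_w", "tex_h" are made-up names for the image size and
the power-of-two texture size, not taken from your code):

    float s_max = (float) img_w / (float) tex_w;   /* 343 / 512.0 */
    float t_max = (float) img_h / (float) tex_h;   /* 246 / 256.0 */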

Additionally, I haven't checked i8xx lately, as they happened to not
have enough functionality for us (back at university). There might be a
driver/hardware bug as well.

> you do this - with double precision math, it blurs when rendering 1:1 (draw a
> quad with integer vertices so it's the right size (0,0, 342,0 342,245 0,245),

As there are tons of wrong descriptions of how to render pixel exact, I'll
explain everything, even if it sounds trivial.
There might be errors here as well, of course; please respond if you find something.

1st) Set up your viewport correctly: glViewport (0, 0, width, height);
     No -1 or so.
2nd) Set up the projection: glOrtho (0, width, 0, height, 0, 1);
     Of course you can use perspective, if you're doing everything right
     and choose the right z coordinate for rendering.
3rd) If you *absolutely* want to make sure, choose nearest neighbor
     interpolation. However, I have seen perfect results with bilinear as
     well; if you don't get them, it is a (major) driver bug.
4th) Choose texture coordinates (0, 0) to (width/texwidth, height/texheight)
     No -1, no 1/(2*width), nothing.
5th) Render a quad (0, 0) to (width, height)
     No -1, nothing.
     Due to coordinate interpretation you should be guaranteed to get a
     quad filling out the pixels between (0, 0) and (width-1, height-1)
     (inclusive), with all pixels hitting texel positions exactly.
     (A short sketch putting these steps together follows below.)
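Put together, it looks roughly like this (a minimal, untested sketch using
the old fixed-function API; "win_w"/"win_h", "img_w"/"img_h", "tex_w"/"tex_h"
and "tex" are made-up names for the window size, the image size, the
power-of-two texture size and the texture object):

    glViewport (0, 0, win_w, win_h);
    glMatrixMode (GL_PROJECTION);
    glLoadIdentity ();
    glOrtho (0, win_w, 0, win_h, 0, 1);
    glMatrixMode (GL_MODELVIEW);
    glLoadIdentity ();

    glEnable (GL_TEXTURE_2D);
    glBindTexture (GL_TEXTURE_2D, tex);
    /* nearest neighbor, as in step 3 */
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* step 4: max texture coordinates, computed, not hard-coded */
    float s_max = (float) img_w / (float) tex_w;
    float t_max = (float) img_h / (float) tex_h;

    /* step 5: quad with integer vertices, no -1 anywhere */
    glBegin (GL_QUADS);
    glTexCoord2f (0.0f,  0.0f ); glVertex2i (0,     0    );
    glTexCoord2f (s_max, 0.0f ); glVertex2i (img_w, 0    );
    glTexCoord2f (s_max, t_max); glVertex2i (img_w, img_h);
    glTexCoord2f (0.0f,  t_max); glVertex2i (0,     img_h);
    glEnd ();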

Explanation:
This is what the texture looks like internally:
* are the texel sample points

  *   *   *   *
|   |   |   |   |
0  .25 .5  .75  1

That's why some guides tell you you have to use 1/(2*width),1/(2*height)
and 1-1/(2*width),1-1/(2*height) as texture coordinates for rendering
the full texture. But in that case you would have to change the
rendering coordinates as well, and you hit a corner case there where it
isn't explicitly stated which pixels are actually rendered.
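In general, texel i of an N-texel texture has its sample point at
(i + 0.5) / N. That is where the 1/(2*width) offset in those guides comes
from: it is simply the coordinate of the very first sample point.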

The screen coordinates look similar (1D only):
* are the pixel sample points

  *   *   *   *
|   |   |   |   |
0   1   2   3   4


So if you now render a (0)-(3) 'quad', with texture coordinates (0)-(3/4),
you end up with:

  '   *   *   *   '   '
|   |   |   |   |   |   |
    0           3           Pixel coords
    0          3/4          Texture coords

     1/8 3/8 5/8            Texture sample coords

So you get a quad that is guaranteed to exactly occupy the three pixels,
and the generated texture sample coordinates happen to sit exactly on
the texture sample point positions. As the numbers can be represented
exactly in binary, there is no round-off error either.
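You can check the arithmetic of the example quickly (a throwaway snippet,
just reproducing the numbers above; pixel centers sit at x + 0.5):

    #include <stdio.h>

    int main (void)
    {
        int x;
        for (x = 0; x < 3; x++) {
            /* interpolate the 0..3/4 texture range across the 3-pixel quad */
            float s = ((x + 0.5f) / 3.0f) * (3.0f / 4.0f);
            /* texel centers of a 4-texel texture sit at (i + 0.5) / 4 */
            printf ("pixel %d: s = %g, texel center = %g\n",
                    x, s, (x + 0.5f) / 4.0f);
        }
        return 0;
    }

This prints 0.125, 0.375, 0.625 in both columns, so the generated sample
coordinates land exactly on the texel centers.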

> if you move to using nv/etc rectangle textures it works fine, but you lose any
> form of downscaling filtering - so you use npot textures and anisotropic
> filtering and you are back to the blurry mipmap case. all the "accurate" 2d

You're using Linear-Mipmap-Linear? NVidia hardware has a not-exactly-trilinear
interpolation scheme, where it does bilinear lookups most of the
time. As your texture coordinates were almost(!) pixel exact, it
defaults to using the same mipmap level alone. The i8xx driver might
choose to do something different.

If you're using Linear-Mipmap-Nearest, the driver is free to choose any
mipmap level it finds appropriate; in your case that could have been the
downsampled version. What chipset have you been using? I think older
Intel hardware wasn't capable of doing trilinear lookups.
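For reference, these are the two minification filter modes in question
(standard GL enums; which mipmap level the driver actually selects with them
is exactly the open question here):

    /* trilinear: interpolate between the two nearest mipmap levels */
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                     GL_LINEAR_MIPMAP_LINEAR);

    /* bilinear within one level, driver picks the nearest mipmap level */
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                     GL_LINEAR_MIPMAP_NEAREST);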

Hope this helps

Matthias

-- 
Matthias Hopf <mhopf at suse.de>       __        __   __
Maxfeldstr. 5 / 90409 Nuernberg    (_   | |  (_   |__         mat at mshopf.de
Phone +49-911-74053-715            __)  |_|  __)  |__  labs   www.mshopf.de


