[Mesa-dev] [Bug 71199] [llvmpipe] piglit gl-1.4-polygon-offset regression

bugzilla-daemon at freedesktop.org
Mon Nov 24 18:20:05 PST 2014


https://bugs.freedesktop.org/show_bug.cgi?id=71199

--- Comment #7 from Roland Scheidegger <sroland at vmware.com> ---
(In reply to Laura Ekstrand from comment #6)
> I recently ported this test from Glean to Piglit, and I took another look at
> it just now. As far as I understand, the logic of the test is correct, but
> perhaps you are right that the implementation is touchy because of numerical
> precision issues.
> 
> I found that the terms "ideal" and "actual" may be confusing for the
> purposes of this discussion. "Actual" is the implementation-specific
> definition of one unit used in the driver's PolygonOffset. This is
> discussed in the OpenGL 4.5 core spec (Oct. 30, 2014) in section 14.6.5.
> It's supposed to be derived from the depth buffer precision, but after
> looking at the spec, I suppose it's not always guaranteed to be.
> 
> "Ideal" is found by experimentation and depends on the whole OpenGL
> experience; driver, hardware, depth buffer, etc.  The original authors of
> this test are doing a complicated set of binary searches to converge on a
> numerical solution.  There may be some numerical instability here.
> 
> The output of the test shown above indicates that the llvmpipe driver is
> providing a near-plane actual MRD that is about half of what its OpenGL
> context can actually resolve (the "ideal" MRD). In other words, one unit of
> offset in a call to glPolygonOffset will not provide enough separation to,
> say, draw a decal on top of a plane wing without some stitching (the wing
> showing through the decal). A user would have to use a value of two.
> 
> I may be missing something here.
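
For reference, the decal case described above is usually coded roughly
like this (a sketch only: draw_wing() and draw_decal() are hypothetical
helpers, and per the numbers above, a units value of 2 rather than 1
would be needed for reliable separation):

    draw_wing();                          /* base geometry */
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(-1.0f, -1.0f);        /* pull the decal toward the viewer */
    draw_decal();                         /* coplanar decal geometry */
    glDisable(GL_POLYGON_OFFSET_FILL);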

I think for some reason the "Ideal MRD" values aren't quite what we'd expect
them to be. The "Actual MRD" value (both near and far) in llvmpipe is 2^-24,
which you'd think is just right for a 24-bit z buffer (and indeed the test
reports it as nominally 1 bit). Nvidia apparently gives 2^-23, so it "loses"
one bit.
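
To put numbers on that, here is a standalone sketch; the r values are
simply the MRDs reported by the test:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* GL 4.5 sec. 14.6.5: offset o = m * factor + r * units, where r
           is the smallest difference guaranteed to be resolvable.  For a
           screen-aligned quad the depth slope m is ~0, so units dominates. */
        double r_llvmpipe = ldexp(1.0, -24);       /* actual MRD: 2^-24 */
        double r_nvidia   = ldexp(1.0, -23);       /* actual MRD: 2^-23 */
        double lsb24      = 1.0 / ((1 << 24) - 1); /* one step of a 24-bit buffer */

        printf("llvmpipe: r = %g (%.3f LSBs)\n", r_llvmpipe, r_llvmpipe / lsb24);
        printf("nvidia:   r = %g (%.3f LSBs)\n", r_nvidia,   r_nvidia / lsb24);
        return 0;
    }
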
But the more important question is why values differing by that much would not
resolve to different z values. Apparently the test figured out that a
difference of 2^-23 is required at the near plane, and I just can't see why
that would be (it is the same for nvidia too). It just seems like somehow the
math isn't accurate enough. Or maybe it is due to OpenGL's clip space (which
differs from d3d10's), so some small inaccuracies creep in.

In any case, since nvidia apparently uses 2 bits as the mrd, this doesn't look
like it's fixable. I guess this would only really be a problem with 24-bit z
buffers, but either way we can't change llvmpipe to use different mrd values
(d3d10 won't like it). Though if this is only a problem with GL clip space, we
could perhaps adjust the value depending on that (clip space is also
controllable in GL with ARB_clip_control); see the sketch below.
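
To illustrate where the clip-space difference can cost a bit, here is a
small standalone sketch (it demonstrates the mechanism, not that this is
what the test is actually hitting):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* One float32 ULP step at the GL near plane (NDC z = -1)... */
        float a = -1.0f;
        float b = nextafterf(a, 0.0f);    /* -1 + 2^-24 */

        /* ...becomes a 2^-25 step after the GL viewport remap
           z_win = z_ndc * 0.5 + 0.5, i.e. below the LSB of a 24-bit
           buffer, so both values can quantize to the same depth. */
        float wa = a * 0.5f + 0.5f;       /* 0.0   */
        float wb = b * 0.5f + 0.5f;       /* 2^-25 */
        printf("window-z delta for one NDC ULP at -1: %g\n", wb - wa);

        /* A d3d10-style [0,1] clip z has no remap; GL can opt into it
           with glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE) from
           GL 4.5 / ARB_clip_control. */
        return 0;
    }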
