[Mesa-dev] [PATCH] gallium/tests/trivial: fix viewport depth transform

Mathias Fröhlich Mathias.Froehlich at gmx.net
Thu Mar 1 06:33:49 UTC 2018


Hi,

On Thursday, 1 March 2018 04:00:15 CET Roland Scheidegger wrote:
> Am 01.03.2018 um 03:28 schrieb Ilia Mirkin:
> > On Wed, Feb 28, 2018 at 8:42 PM, Roland Scheidegger <sroland at vmware.com> wrote:
[...]
> > Is this not the correct behavior? Or is it undefined what happens
> > outside of 0..1?
> I can't really see why clipping to always [0,1] would make sense (since
> you have to clip to near/far anyway too, and the [0,1] range is enforced
> when you call DepthRange already).

As one of you already said, glDepthRangedNV would have allowed values beyond 
[0, 1]. OpenGL 4.2's glDepthRange also allowed arbitrary values, but that was 
taken back again in OpenGL 4.3 or 4.4. So my impression was that 
NV_depth_buffer_float is a dead end that was superseded by 
ARB_depth_buffer_float.
When I tried to make NV_depth_buffer_float available some years ago, we came 
across some funky corner cases in the nvidia extension versus the arb 
extension with respect to when to clamp to which range and when not. The nvidia 
blob traditionally handled those corner cases sensibly. IIRC the nvidia blob 
disabled any clamping in the output path from the fragment shader down to the 
depth buffer iff the depth buffer format was GL_DEPTH_BUFFER_FLOAT_MODE_NV 
rather than the float depth buffer enum from the arb extension; the two are 
distinct GLenum values.
In terms of depth precision, and thus z-fighting, the nvidia approach was a 
great idea. But the main OpenGL standard decided to stick with clamping. Also, 
today ARB_clip_control enables something comparable with only about 2 bits 
less precision across the complete pipeline, including the final float-valued 
depth buffer.

best
Mathias
