[Piglit] [PATCH 2/4] Add test to verify glSampleCoverage with multisample fbo

Anuj Phogat anuj.phogat at gmail.com
Wed Jun 20 14:01:56 PDT 2012


On Sat, Jun 16, 2012 at 7:54 AM, Paul Berry <stereotype441 at gmail.com> wrote:
> On 8 June 2012 14:43, Anuj Phogat <anuj.phogat at gmail.com> wrote:
>>
>> This test verifies that the coverage value set by glSampleCoverage()
>> determines the number of samples in the multisample buffer covered by an
>> incoming fragment, i.e. the samples which will receive the fragment data.
>
>
> This test seems to assume that the coverage specified by glSampleCoverage()
> will be applied exactly, and uniformly, to each pixel.  In other words, if
> the coverage value is 0.25, then exactly 25% of the samples for each pixel
> will be covered.  There are two problems with this assumption.  First,
> there's no guarantee that the coverage value will map to an integer number
> of samples (for example, if the coverage value is 0.25 but the FBO has an
> oversampling factor of 2, that would mean that 0.5 samples per pixel should
> be covered).  Second, the GL 3.0 spec does not require the implementation to
> apply the coverage uniformly.  In fact, it encourages the implementation not
> to.  On p243, the GL 3.0 spec says:
>
> "No specific algorithm is required for converting the sample alpha values to
> a temporary coverage value. It is intended that the number of 1’s in the
> temporary coverage be proportional to the set of alpha values for the
> fragment, with all 1’s corresponding to the maximum of all alpha values, and
> all 0’s corresponding to all alpha values being 0. The alpha values used to
> generate a coverage value are clamped to the range [0, 1]. It is also
> intended that the algorithm be pseudo-random in nature, to avoid image
> artifacts due to regular coverage sample locations. The algorithm can and
> probably should be different at different pixel locations. If it does
> differ, it should be defined relative to window, not screen, coordinates, so
> that rendering results are invariant with respect to window position."
>
> It seems clear that the spec writers intended to allow (but not require) the
> implementation to produce a dithering effect when the coverage value is not
> a strict multiple of 1/num_samples.  Since no one has, to my knowledge,
> implemented glSampleCoverage() in Mesa yet, it would be nice if we could use
> this test to see if other implementations do this kind of dithering or not.
> (FWIW I *think* that my nVidia reference system doesn't do any dithering)
> If the implementation does a dithering effect, then the coverage values you
> use in this test (0, 0.25, 0.75, and 1.0) will only show dithering when
> oversampling by a factor that is not a multiple of 4 (e.g. 2x
> oversampling).  It would be nice if we could observe the presence or absence
> of dithering at all multisampling factors.
>
Since no OpenGL implementation is required to implement dithering, I avoided
coverage values which may result in it. But I agree it would be nice to
observe the presence of dithering; I can see it on my NVIDIA machine.
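To make the discussion concrete, the coverage state under test boils down to
the standard pair of GL calls below (the draw helper is just a placeholder,
not the test's actual code):

        /* Request ~25% of each pixel's samples; passing GL_TRUE as the
         * second argument inverts the generated coverage mask. */
        glEnable(GL_SAMPLE_COVERAGE);
        glSampleCoverage(0.25f, GL_FALSE);
        draw_test_rect();                 /* placeholder helper */
        glSampleCoverage(0.25f, GL_TRUE); /* same value, inverted mask */
        draw_test_rect();
        glDisable(GL_SAMPLE_COVERAGE);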

> Also, your compute_expected() function seems to assume that no dithering
> occurs, and that coverage produces perfect blending for all multisampling
> factors.  This means that it fails with 2x oversampling on my nVidia system.
>
> Here's a possible way that we could modify the test so that we could see
> dithering behaviour if it's present: instead of drawing 4 rectangles, draw
> 2*N+1 of them (where N is the number of samples per pixel in the
> framebuffer).  Let the coverage value in each of these rectangles be i/2N,
> so for example if N=4, then the coverage values would be 0.0, 0.125, 0.25,
> 0.375, 0.5, 0.625, 0.75, 0.875, 1.0.  N+1 of the rectangles have coverage
> values that result in an integer number of samples (0.0, 0.25, 0.5, 0.75,
> and 1.0 in this case)--have the test verify that those rectangles get the
> exact expected color.  The other N rectangles (0.125, 0.375, 0.625, 0.875)
> are just for human inspection so that we can look for dithering.
>
> Note: if you decide to try this suggestion, make sure that you determine the
> value for N by calling
> glGetRenderbufferParameteriv(GL_RENDERBUFFER_SAMPLES), because it's possible
> that the implementation will give you a framebuffer with more samples/pixel
> than you requested.
>
Yes, this sounds good to me.
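Roughly what I have in mind (a sketch only; draw_rect_number() and
verify_rect_color() stand in for whatever helpers the final test ends up
using, and color_rb is the test's multisample color renderbuffer):

        /* Query the sample count actually allocated, as you suggested,
         * rather than trusting the value we requested. */
        GLint N;
        glBindRenderbuffer(GL_RENDERBUFFER, color_rb);
        glGetRenderbufferParameteriv(GL_RENDERBUFFER,
                                     GL_RENDERBUFFER_SAMPLES, &N);

        /* Draw 2*N+1 rectangles with coverage values i / (2*N). */
        glEnable(GL_SAMPLE_COVERAGE);
        for (int i = 0; i <= 2 * N; i++) {
                glSampleCoverage(i / (2.0f * N), GL_FALSE);
                draw_rect_number(i);
        }
        glDisable(GL_SAMPLE_COVERAGE);

        /* After resolving to the single-sample FBO, verify only the even i:
         * there coverage * N = i / 2 is an integer number of samples.  The
         * odd i are left for visual inspection of dithering. */
        for (int i = 0; i <= 2 * N; i += 2)
                verify_rect_color(i, i / (2.0f * N));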
>>
>>
>> V2: Add testing for glSampleCoverage() with the coverage mask invert flag
>>    enabled/disabled. Resolve the multisample FBO to a single-sample FBO
>>    instead of doing it on the window system framebuffer. This fixes the
>>    color buffer failures on NVIDIA due to sRGB.
>>
>> Note: This test fails for the depth buffer on AMD and NVIDIA. The depth
>>      buffer is not modified even if the incoming multisample fragment
>>      passes the depth test and has a coverage of 1.0.
>
>
> The reason this test is failing on nVidia is because you're making an
> assumption about how MSAA depth buffers work that isn't true.  You're
> assuming that when a depth buffer is resolved, depth values from each sample
> get averaged together, just like in a color buffer.  What really happens is
> that a single depth value is chosen (probably the depth value from sample
> #0).
>
I agree that the implementation may pick any one of the samples and use its
depth value when resolving the depth buffer. But I have a doubt about the
following scenario:
If I draw a full screen rectangle into a multisample depth buffer with full
coverage (1.0) and a depth value of 0.4 (an arbitrary value), I assume that
all the samples of each pixel inside the rectangle will have the same depth
value of 0.4. While resolving, the implementation is free to pick any one of
the samples, which all carry the same value anyway. So resolving the depth
buffer by blitting to the default framebuffer should give us a resolved depth
value of 0.4.

Testing this on NVIDIA proves that I'm clearly missing something here. Could
you help me find an answer?

> If you want to test that glSampleCoverage works properly with depth buffers,
> then after drawing into the depth buffer, you'll have to do a drawing
> operation that causes a change to the color buffer based on the contents of
> the depth buffer.  For example, you could do this:
>
> 1. Clear the depth buffer to a far depth value.
> 2. Clear the color buffer to black.
> 3. Enable GL_SAMPLE_COVERAGE.
> 4. Draw a rectangle having a near depth value.  This should cause some of
> each pixel's samples to attain the near depth value, and some to remain at
> the far depth value.
> 5. Disable GL_SAMPLE_COVERAGE.
> 6. Draw a white rectangle at an intermediate depth value.  This will cause
> all samples that have the far depth value to be painted white, and all
> samples that have the near depth value to stay black.
> 7. Blit the color buffer to a single-sampled FBO.  This should cause the
> white and black samples of each pixel to mix, forming gray.
> 8. Verify that the appropriate shade of gray was produced.
>
> However, having said all that, I'm not certain it's necessary.  The point in
> the graphics pipeline where glSampleCoverage() takes effect is the point
> where the pipeline determines which samples are covered by the primitive.
> That's before the point where the output of the fragment shader is split
> into the various buffers.  So I think it's very likely that if
> glSampleCoverage() works properly for color buffers, it will work properly
> for depth buffers too.  I would be content to just remove the depth buffer
> part of this test.
>
> If you would rather be on the safe side and test the depth buffer as well, then
> I would encourage you to think about whether it's necessary to test the
> stencil buffer as well.
>
I really like your suggested technique for depth buffer testing. For now, I'd
like to push the test case with just the color buffer testing. I'll add a
comment describing the reason for skipping the depth and stencil buffers, and
I may add testing for them later.
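If I do get back to the depth buffer later, the sequence you outlined would
look roughly like this (an untested sketch; draw_rect_at_depth(), ms_fbo and
resolve_fbo are placeholders for the test's own helpers and FBOs):

        glEnable(GL_DEPTH_TEST);

        /* Steps 1-2: clear depth to far, color to black. */
        glClearColor(0.0, 0.0, 0.0, 1.0);
        glClearDepth(1.0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        /* Steps 3-5: with partial coverage enabled, draw at a near depth so
         * only some of each pixel's samples take the near value. */
        glEnable(GL_SAMPLE_COVERAGE);
        glSampleCoverage(0.5f, GL_FALSE);
        draw_rect_at_depth(-0.8f);
        glDisable(GL_SAMPLE_COVERAGE);

        /* Step 6: draw a white rectangle at an intermediate depth; only the
         * samples still at the far depth pass the test and turn white. */
        glColor4f(1.0, 1.0, 1.0, 1.0);
        draw_rect_at_depth(0.0f);

        /* Steps 7-8: resolve to a single-sample FBO; the resulting gray
         * should reflect the coverage value (here ~50% white). */
        glBindFramebuffer(GL_READ_FRAMEBUFFER, ms_fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolve_fbo);
        glBlitFramebuffer(0, 0, pattern_width, pattern_height,
                          0, 0, pattern_width, pattern_height,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);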

[snip]

>> +void
>> +piglit_init(int argc, char **argv)
>> +{
>> +       if (argc < 3)
>> +               print_usage_and_exit(argv[0]);
>> +       {
>> +               char *endptr = NULL;
>> +               num_samples = strtol(argv[1], &endptr, 0);
>> +               if (endptr != argv[1] + strlen(argv[1]))
>> +                       print_usage_and_exit(argv[0]);
>> +       }
>> +
>> +       piglit_require_gl_version(30);
>> +       glClear(GL_COLOR_BUFFER_BIT);
>
>
> This call to glClear() is unnecessary, since you also call glClear() at the
> beginning of piglit_display().  It's also confusing, because drawing calls
> shouldn't be in piglit_init() anyhow.
>
>>
>> +       piglit_ortho_projection(pattern_width, pattern_height, GL_TRUE);
>> +
>> +       /* Skip the test if num_samples > GL_MAX_SAMPLES or num_samples =
>> 0 */
>> +       GLint max_samples;
>> +       glGetIntegerv(GL_MAX_SAMPLES, &max_samples);
>> +       if (num_samples > max_samples ||
>> +           num_samples == 0)
>> +               piglit_report_result(PIGLIT_SKIP);
>
>
> Skipping if num_samples > max_samples makes sense, because we only want the
> test to run for supported sample counts.  But skipping if num_samples == 0
> is weird, since num_samples is specified by the user when invoking the
> test.  My recommendation would be to let the test go ahead and execute even
> if num_samples == 0, because it's sometimes handy in debugging to invoke an
> MSAA test with num_samples == 0 just to see what will happen (even if the
> test is expected to fail).
>
I agree. I will follow up with an updated patch for the test case.

Thanks
Anuj

