[Piglit] [PATCH] tests/spec/arb_robustness/draw-vbo-bounds.c: add clipping

Jose Fonseca jfonseca at vmware.com
Mon Nov 5 06:48:02 PST 2012



----- Original Message -----
> Jose Fonseca <jfonseca at vmware.com> writes:
> 
> > ----- Original Message -----
> >> On 10/29/2012 05:34 PM, Roland Scheidegger wrote:
> >> > On 30.10.2012 00:22, Brian Paul wrote:
> >> >> On 10/29/2012 05:05 PM, sroland at vmware.com wrote:
> >> >>> From: Roland Scheidegger <sroland at vmware.com>
> >> >>>
> >> >>> Make sure clipping is needed sometimes, and more often use small
> >> >>> index counts, to expose issues and exercise more paths in mesa's
> >> >>> draw module.
> >> >>> ---
> >> >>>    tests/spec/arb_robustness/draw-vbo-bounds.c |    4 ++--
> >> >>>    1 file changed, 2 insertions(+), 2 deletions(-)
> >> >>>
> >> >>> diff --git a/tests/spec/arb_robustness/draw-vbo-bounds.c b/tests/spec/arb_robustness/draw-vbo-bounds.c
> >> >>> index 4351ac9..c780a3a 100644
> >> >>> --- a/tests/spec/arb_robustness/draw-vbo-bounds.c
> >> >>> +++ b/tests/spec/arb_robustness/draw-vbo-bounds.c
> >> >>> @@ -95,7 +95,7 @@ random_vertices(GLsizei offset, GLsizei stride, GLsizei count)
> >> >>>
> >> >>>      for (i = 0; i < count; ++i) {
> >> >>>          GLfloat *vertex = (GLfloat *)(vertices + offset + i*stride);
> >> >>> -        vertex[0] = (rand() % 1000) * .001;
> >> >>> +        vertex[0] = (rand() % 1000) * ((rand() % 1000) ? 0.001 : 1.0);
> >> >>>          vertex[1] = (rand() % 1000) * .001;
> >> >>>      }
> >> >>>
> >> >>> @@ -145,7 +145,7 @@ static void test(void)
> >> >>>      vertex_count = 1 + rand() % 0xffff;
> >> >>>
> >> >>>      index_offset = (rand() % 0xff) * sizeof(GLushort);
> >> >>> -    index_count = 1 + rand() % 0xffff;
> >> >>> +    index_count = rand() % 10 ? 1 + rand() % 0xffff : 1 + rand() % 0x7ff;
> >> >>>      min_index = rand() % vertex_count;
> >> >>>      max_index = min_index + rand() % (vertex_count - min_index);
> >> >>>
> >> >>
> >> >> Randomness in tests can be OK, but in this case wouldn't you want
> >> >> to explicitly test some specific coordinates and indexes to make
> >> >> sure the corner cases are hit?
> >> >>
> >> >> -Brian
> >> >
> >> > Well, I'm not sure it's worth the trouble. Note the test is run 1000
> >> > times, so the probability is very high that these cases are hit
> >> > anyway.
> >> 
> >> What happens when one of us gets a bug report that this test fails,
> >> but we're unable to reproduce it?
> >
> >> If I had been paying attention, I would have already objected to the
> >> initial use of rand in this test...
> >
> > I'm not sure if the main objection here is rand itself, or random
> > data at all.
> >
> > We could use some predictable pseudo-random number generator instead
> > of rand.
> >
> >
> > But the purpose of this test was precisely to stress the driver by
> > throwing random data at it.
> >
> > Note that it is not part of any test lists, never was, and does not
> > need to be. I do understand the drawbacks of random data.
> >
> > But on the other hand, one cannot honestly claim to have a robust
> > driver without doing random tests. Hand-written test cases will never
> > be enough unless one commands the hands of an army of energetic
> > monkeys.
> >
> > I think the appropriate thing would be to have a separate "stress"
> > test list for this sort of test.
> >
> > If you don't want random tests in the piglit tree at all then I'll
> > happily move them elsewhere. I just thought that, this being an OpenGL
> > test and piglit a GL test suite, it would be the right home for it. But
> > it barely depends on piglit infrastructure, so I really don't care much
> > where the source lives.
> 
> To me, as a general rule, tests with randomized data are of negative
> value.  If they pass, I don't care.  If they fail, then I see it in my
> fail list, and I go look into the test and see that it's failing on
> iteration 98 of some random set of data, and doesn't show me the failure
> that happened or what is different in iteration 98 compared to 1-97, and
> I have to go hack up the test to see what's even happening in iteration
> 98 with that particular seed.  Eventually I give up because it's just
> wasting my time, and go through the process again a month later.
> 
> In this particular instance I'm thinking of triangle-rasterization.  I
> really should have NAKed that one when it came by: randomization,

> C++,

What's wrong with C++? Piglit has always had tests written in C++, and C++ can save a lot of typing when writing tests. It seems silly to me to restrict ourselves to C.

> wrong formatting, and fails to run-and-show-you-the-error.

I'll try to fix this. It's a pity this was not brought up initially though, as it would have been much easier to get it fixed then.
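
Something along these lines should address the reproducibility problem. This is only a rough sketch, not the actual patch; the environment variable and helper names are placeholders. The idea is to seed rand() from a value the user can override, print that seed up front, and dump the parameters of the failing iteration so it can be replayed.

/*
 * Rough sketch (hypothetical helper and variable names): make the random
 * data reproducible by seeding from an overridable value and reporting the
 * failing iteration's parameters.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static unsigned test_seed;

static void
init_seed(void)
{
    /* DRAW_VBO_BOUNDS_SEED is a made-up variable name for this sketch. */
    const char *env = getenv("DRAW_VBO_BOUNDS_SEED");

    test_seed = env ? (unsigned) strtoul(env, NULL, 0) : (unsigned) time(NULL);
    srand(test_seed);
    printf("random seed: 0x%08x (set DRAW_VBO_BOUNDS_SEED to reproduce)\n",
           test_seed);
}

/* Called when an iteration fails: dump everything needed to replay it. */
static void
report_failure(int iteration, int vertex_count, int index_count,
               unsigned min_index, unsigned max_index)
{
    fprintf(stderr,
            "iteration %d failed: vertex_count=%d index_count=%d "
            "min_index=%u max_index=%u seed=0x%08x\n",
            iteration, vertex_count, index_count,
            min_index, max_index, test_seed);
}

With that, a bug report only needs to include the printed seed and iteration number for the failure to be reproducible.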

Jose

