[Pixman] [PATCH] test: Change composite so that it tests randomly generated images

Soeren Sandmann sandmann at daimi.au.dk
Tue Oct 5 10:47:46 PDT 2010

Siarhei Siamashka <siarhei.siamashka at gmail.com> writes:

> On Sunday 07 March 2010, Søren Sandmann wrote:
> > Previously this test would try to exhaustively test all combinations
> > of formats and operators, which meant that it would take years to run.
> >
> > Instead, generate random images and test those. The random seed is
> > based on time(), so we will get different tests every time it is run.
> > Whenever the test fails, it will print what the value of lcg_seed was
> > when the test began.
> >
> > This patch also adds a table of random seeds that have failed at some
> > point. These seeds are run first before the random testing begins.
> > Whenever this tests fails, you are supposed to add the seed that
> > failed to the table, so that we can track that the bug doesn't get
> > reintroduced.
> I don't quite like any nondeterministic random factor in the standard 
> regression tests. Preferably the results of such tests should be
> reproducible from run to run, even if they are not perfect and do not
> provide full coverage.

I just sent a new set of patches that don't have the nondeterministic
behavior. However, considering that several of the tinderboxes here:


are running make check over and over, it would be really useful to make
them run a different set of tests each time.

> This is quite important for having a clearly defined formal patch
> submission process. (Before submitting a patch, one needs to make sure
> that the regression tests pass. If they don't pass, the problem has to
> be investigated and the patch fixed, or the regression tests updated
> if needed.)
> With the randomness in the tests, a patch contributor may end up in
> confusing situations:
> - the regression test fails for him even though his patch is fine (if
> the problem was introduced by somebody else earlier)
> - the regression test passes for him but fails for others later (due
> to a bug in the patch). In this case it would be hard to say whether
> the contributor did a proper job of running the regression tests in
> the first place.

I'm not sure this is a big problem. If the test fails, it prints out
the seed that failed, so the contributor can re-run the test with that
seed without the patch applied. That lets him determine whether the
problem was introduced by his patch or was already there. If it was
already there, hopefully he would report the bug.

It's true that some people, if the test fails intermittently for them,
might submit anyway and hope that nobody notices. However, I don't
think most people would do that, and if the test fails so rarely that
they could get away with it, any fixed subset of the tests would
likely also miss the problem.

