[cairo] What does it take to get a make check to pass with the xcb target, using CAIRO_REF_DIR?

darxus at chaosreigns.com
Fri Jul 8 01:00:53 UTC 2016

On 07/07, Bryce Harrington wrote:
> It would be gratifying to ditch the tests, but it wouldn't be the proper
> thing to do at this stage when the issue hasn't been root-caused (afaik).

I'm replying to this last thing you said first.

When and how does "make check" get run?  My assumption is that nobody ever
runs it, because it always fails, and is therefore useless.  So getting it
into a state where it can possibly pass is worth doing, even if that
requires disabling tests that show current important bugs.  So that people
can be expected to get back into the habit of verifying that it passes
before submitting code.  (At least sometimes, or at least with one of the

Do you disagree?  Is my assumption wrong?

> > bug pointing out what was disabled seems appropriate.
> Yep.

Any suggestions on how these bugs should be reported?  One per failed png
output file (so 14 bugs for the xcb target)?

>  *  XFAIL -- An external failure. We believe the cairo output is perfect,
>  *           but an external renderer is causing gross failure.

Ah, I misunderstood the meaning of XFAIL.  I have no idea where these
problems are occurring.

> So you might see if deleting the reference image would get it to pass.

I'm unenthusiastic about that idea, because it would make it less convenient
to verify that the output is as expected once the problem is eventually fixed.

> > Delete them from test/Makefile.sources ?
> Checking my notes, the record* tests were indeed some of the ones I was
> also seeing intermittently pass and fail.  The test case itself doesn't
> look that terribly exotic, though, so I hesitate to chalk it up to a
> broken test.  If it is indeed down to variations in the rendering for
> transparency and anti-aliasing, then I wonder if the test could be
> tweaked to either not use it or to be less sensitive in checking it.

The test output looks, to me, like something is significantly wrong; the
spots at the four corners of the octagon are not supposed to be there.

> I poked around a bit in the test runner code.  If we do want to disable
> the test, an alternative to actually deleting it might be to have the
> test routine, or the test's preamble, return one of the error codes.
> There is CAIRO_TEST_XFAILURE, although only a couple tests use that
> (path-precision.c and in-fill-empty-trapezoid.c).  CAIRO_TEST_UNTESTED
> and CAIRO_TEST_ERROR are more widely used.  Again though, the proper

I feel like CAIRO_TEST_ERROR may be interpreted as a failure?
CAIRO_TEST_UNTESTED seems like a decent option.

> course of action would be to rule out that there is a legitimate issue
> at stake first.  Then, second, identify whether it is caused by an
> external factor.

Certainly.  But how many years has that not happened?  Until it happens,
wouldn't it be better to have a test suite that is useful?

> > Removing these test from test/Makefile.sources allows make check to pass
> > (with an updated set of references):  record1414x.c record2x.c record90.c
> > recordflip.c record.c text-rotate.c
> Updating the references makes an assumption that all of the changes
> being flagged are fixes rather than regressions.  When I last studied
> the test output, I couldn't justify that to myself, but examining each
> case and manually updating the references looked like a daunting task.

I'm not suggesting updating all of the references in the repo.  I'm just
using updated references as a minimum useful baseline -- which is failing.
