[poppler] Poppler reg tests
jric at chegg.com
Mon Sep 12 12:52:57 PDT 2011
This is fantastic, Carlos! Thank you!
I'll also pass the info along to our developer who is creating a
regression test for pdftohtml. Please announce when your Python version
is ready, and we'll make a pass at integrating.
On 9/12/11 12:39 PM, "Carlos Garcia Campos" <carlosgc at gnome.org> wrote:
>As you all probably know, our current regression test suite consists
>of some custom shell scripts and a lot of PDF files that Albert has
>collected over the years. During the last few days I have been working
>on improving the regression tests, converting Albert's scripts into a
>small Python program so that everybody can run their own tests. We are
>not going to upload PDFs to the repo, so we still depend on Albert to
>make sure a change doesn't actually introduce regressions. We will
>only provide the program so that anybody can use it with their own PDF
>files.
>So, how does it work?
>usage: poppler-regtests [options ...] command [command-options ...] tests
> --utils-dir: The directory where the poppler utils are. The script is
> intended to be run from the git repo, so by default it's ../utils.
> -b, --backends: The backends to test; for now only the cairo, splash,
> text and postscript backends are supported. You can provide a comma
> separated list of backends to test. By default all backends are used.
> create-refs: This is the command to create the reference files.
> run-tests: This command runs the tests, comparing the results with
> references previously created with the create-refs command.
>For every PDF file to test, a new directory is created in the
>references dir with the name of the PDF file, also mirroring any
>subdirectories found in the tests dir.
>The references directory is /tmp/refs by default, but any other dir can
>be used with --refs-dir command line option.
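>The directory mirroring described above can be sketched in a few lines
>of Python (a hypothetical simplification, not the actual script code;
>the helper name is made up):

```python
import os

def ref_dir_for(doc_path, tests_dir, refs_dir):
    # Mirror the document's location under tests_dir into refs_dir,
    # using the PDF file name itself as a directory name.
    rel = os.path.relpath(doc_path, tests_dir)
    ref_dir = os.path.join(refs_dir, rel)
    os.makedirs(ref_dir, exist_ok=True)
    return ref_dir

# A document tests/annots/annots.pdf gets its own reference directory:
print(ref_dir_for("tests/annots/annots.pdf", "tests", "/tmp/refs"))
# -> /tmp/refs/annots/annots.pdf (a directory, not a file)
```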
>Reference files are stored in the references dir of every document,
>prefixed with the backend name. The following reference files can be
>created:
> - Checksum files: A file for each backend containing the md5 sum of
> every backend dependent file.
> - Backend dependent files: A png file for every document page for
> cairo and splash backends, a single ps file for postscript backend
> and a single txt file for text backend. These can be removed to save
> disk space once checksum files are created, using the
> --checksums-only option. If you have enough disk space it's
> recommended to keep the backend dependent files, so that they can be
> used later by the run-tests command to generate diffs for failing
> tests.
> - Crashed files: A file with the extension .crashed indicating the
> test crashed while creating reference images.
> - Failed files: A file with the extension .failed indicating the test
> failed to run. It didn't crash, but the command returned an error
> status. The file contains the error status returned by the command
> used to create the reference files.
> - stderr files: A file with extension .stderr indicating the command
> used to create the reference files wrote something to stderr.
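>The checksum and marker files above could be produced roughly like
>this (an illustrative sketch following the conventions just described,
>not the actual poppler-regtest code; the function name and argument
>shapes are made up):

```python
import hashlib
import os
import subprocess

def create_refs(backend, cmd, out_dir):
    """Run `cmd` to render a document and record the outcome with the
    marker-file conventions described above (illustrative only)."""
    result = subprocess.run(cmd, capture_output=True)
    if result.returncode < 0:  # terminated by a signal: crashed
        open(os.path.join(out_dir, backend + ".crashed"), "w").close()
        return
    if result.returncode != 0:  # ran but exited with an error status
        with open(os.path.join(out_dir, backend + ".failed"), "w") as f:
            f.write(str(result.returncode))
        return
    if result.stderr:  # succeeded, but wrote something to stderr
        with open(os.path.join(out_dir, backend + ".stderr"), "wb") as f:
            f.write(result.stderr)
    # Checksum every backend dependent file into <backend>.md5.
    with open(os.path.join(out_dir, backend + ".md5"), "w") as f:
        for name in sorted(os.listdir(out_dir)):
            if name.startswith(backend) and not name.endswith(".md5"):
                path = os.path.join(out_dir, name)
                digest = hashlib.md5(open(path, "rb").read()).hexdigest()
                f.write("%s %s\n" % (digest, name))
```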
>$ ls -R refs
>cairo.md5 postscript.md5 splash.md5 text.md5
>cairo.crashed cairo.stderr postscript.crashed postscript.ps
>postscript.stderr splash.crashed splash.stderr text.md5
>Reference files are not regenerated unless the --force command line
>option is used. You can update your reference files by just adding a
>new file to the tests dir and running create-refs again.
>The run-tests command creates result files in the output dir and
>compares them with the reference files in the references dir.
>A test can produce the following results:
> - PASS: Test passed as expected.
> - FAIL: Test was expected to pass but it failed, differences were
> found in checksum files.
> - CRASH: Test was expected to pass but it crashed.
> - DOES NOT CRASH: Test was expected to crash, but it didn't. It's not
> possible to know whether the test passed or not.
> - DOES NOT FAIL: Test was expected to fail with an exit error status,
> but it didn't. It's not possible to know whether the test passed or
> not.
> - FAIL (status error <status>): Test was expected to pass but it
> failed, the test program returned with an exit status error.
> - PASS (Expected crash): Test was expected to crash and it
> crashed. Even though the test crashed, it's considered to pass
> since it's not a regression.
> - PASS (Expected fail with status error <status>): Test was expected
> to fail and it failed. Even though the test failed to run, it's
> considered to pass since it's not a regression.
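>A rough sketch of how a single test run might be classified under
>these rules (a hypothetical helper comparing the reference outcome
>with the current run; not the actual implementation):

```python
def classify(expected, actual):
    """Classify one test run following the rules above.

    `expected` and `actual` each describe a run as one of:
      ("pass", md5_text), ("crash", None), ("fail", exit_status)
    where `expected` comes from the reference files and `actual`
    from the current run. Purely illustrative.
    """
    e_kind, e_data = expected
    a_kind, a_data = actual
    if e_kind == "pass":
        if a_kind == "crash":
            return "CRASH"
        if a_kind == "fail":
            return "FAIL (status error %d)" % a_data
        return "PASS" if a_data == e_data else "FAIL"
    if e_kind == "crash":
        if a_kind == "crash":
            return "PASS (Expected crash)"
        return "DOES NOT CRASH"
    # e_kind == "fail": the reference run exited with an error status
    if a_kind == "fail":
        return "PASS (Expected fail with status error %d)" % a_data
    return "DOES NOT FAIL"
```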
>Result files for passing tests are automatically removed from the
>output dir unless the --keep-results command line option is used.
>Result files of tests that failed are not removed. If backend specific
>reference files were not removed from the references dir, the
>--create-diffs command line option can be used to create diff files
>for tests that fail. The format of the diff file depends on the
>backend; it's not currently supported by the postscript backend.
>At the end a summary of the test results is printed, something like:
>Total 16 tests
>15 tests passed (93.75%)
>1 tests failed (6.25%): ./tests/annots/annots_move.PDF (splash)
>4 tests have stderr output (25.00%): ./tests/sample.pdf
>Tests run in 10 seconds
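>The percentages in that summary are just per-category counts over the
>total, which could be computed like so (a trivial sketch; it omits the
>per-test file list the real summary prints):

```python
def summary(total, failed, stderr_tests, seconds):
    # Build the summary lines shown above from raw counts.
    lines = ["Total %d tests" % total]
    passed = total - failed
    lines.append("%d tests passed (%.2f%%)" % (passed, 100.0 * passed / total))
    if failed:
        lines.append("%d tests failed (%.2f%%)" % (failed, 100.0 * failed / total))
    if stderr_tests:
        lines.append("%d tests have stderr output (%.2f%%)"
                     % (stderr_tests, 100.0 * stderr_tests / total))
    lines.append("Tests run in %d seconds" % seconds)
    return "\n".join(lines)

print(summary(16, 1, 4, 10))
```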
>The equivalent to what Albert did with his scripts would be something
>like:
>./poppler-regtest --backends=splash,cairo create-refs --refs-dir=./refs
>./poppler-regtest --backends=splash,cairo run-tests --refs-dir=./refs
>I think that's all the important stuff. Sorry for the long mail, but I
>think it'll make it easier for you to test and review the patch. For
>more information use poppler-regtest --help. I'll add all this info to
>the README file once the patch is reviewed.
>There are still things to do, like adding an option to provide documents
>to skip, html backend support, and I'm sure there are a lot of bugs
>too. So, feel free to comment, or even better provide patches :-)
>Carlos Garcia Campos
>PGP key: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x523E6462