Some thoughts about our tests and the build time
Michael Stahl
mstahl at redhat.com
Tue May 24 13:38:42 UTC 2016
On 17.05.2016 02:37, Markus Mohrhard wrote:
> The first solution that comes to mind is to move a few of the tests
> into a test target that is not executed as part of make. We can still
> make sure that gerrit/CI are executing these tests on all platforms but
> would save quite some time executing them on developer machines.
> Developers that have touched code related to a feature that is part of
> such a test are of course encouraged to execute the tests locally. We
> already did something similar at some point with make vs make slowcheck.
> A good set of tests would be all our export tests, as they are by far
> the tests that take the most time. The big disadvantage of this approach
> is that our tests are no longer executed on as many different
> configurations. At least we used to find a few problems in the past when
> tests failed on some strange platforms. However, this may have become
> less of an issue with all the improvements around crash testing, fuzzing
> and static analyzers.
yes, i think that is definitely the way to go. running unit tests
during "make" only makes sense for *actual unit tests*, which are a tiny
fraction of our CppunitTests (by runtime; a somewhat larger fraction by
number of CppunitTest_*.mks).
most of our CppunitTests are actually system-level or integration tests,
with a long list of linked libraries and (worse) required UNO component
files, or even the whole services.rdb. a real unit test tests one unit,
so it should be happy with one component file :)
e.g. take a look at sw/ooxmlexport_setup.mk: it needs chart2, sc,
starmath, dbaccess, ... why do we even waste time pointlessly listing
all these components there? just use services.rdb.
so the goal should be that the not-actually-unit-tests are run by
"make check", not by "make".
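as a concrete sketch, moving such a test out of the default build could
look like this in the module's Module_*.mk (module and test names here
are only examples; gbuild's slowcheck registration, as used for "make
slowcheck" above, already works this way):

```make
# sketch: register the test so it runs on "make slowcheck"/"make check"
# instead of on every plain "make"
$(eval $(call gb_Module_add_slowcheck_targets,sw,\
    CppunitTest_sw_ooxmlexport \
))
```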
the only problem with that is that currently most tinderboxes and
jenkins builders don't run "make check", so we need to change that
before we move the tests, to avoid losing important coverage.
> Another solution, or at least a way to treat the symptoms a bit, would
> be to look at the existing slow tests and figure out why they are so
> slow. I did that for the chart2export test already, which took about 2
> minutes of CPU time, and discovered that it needlessly imported all the
> files again (fixing which saved 30 seconds) and that we have some really
> inefficient xls/xlsx export code (responsible for another 30 seconds). I
> believe that just running VALGRIND=callgrind make and then analysing the
> results would help quite a bit. Again, the import and export tests are
> good targets for these attempts and will most likely help with the
> general import and export performance.
that would be great too, although i doubt there's more than a handful of
low-hanging fruit there.
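for reference, the callgrind run mentioned above might look like this
(a build-tree recipe, not portable: it assumes a configured LibreOffice
tree with valgrind installed, and the test target name is just an
example):

```shell
# run one slow test under callgrind; this produces callgrind.out.<pid>
# files in the working directory
VALGRIND=callgrind make CppunitTest_chart2_export

# summarize the hottest functions from the collected profile
callgrind_annotate callgrind.out.* | head -n 30
```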
> The last idea is to use more of the XPath assert stuff instead of the
> full import->export->import cycle, thereby making the export tests
> less expensive.
well, but then you don't actually test the import side any more (that it
can round-trip what was exported), especially in those cases where the
original file is in a different format than the exported file.
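to make that trade-off concrete, here is a language-agnostic sketch of
the XPath-assert style (Python for brevity; the real tests are C++
CppunitTests, and the element and namespace names below are invented):

```python
# Sketch of the XPath-assert approach: check the exported XML stream
# directly instead of re-importing the whole document.
import xml.etree.ElementTree as ET

def assert_xpath(xml_bytes, path, expected_count, namespaces=None):
    """Fail unless `path` matches exactly `expected_count` elements."""
    root = ET.fromstring(xml_bytes)
    matches = root.findall(path, namespaces or {})
    assert len(matches) == expected_count, (
        f"{path}: expected {expected_count} matches, got {len(matches)}")
    return matches

# stand-in for an exported chart XML stream:
exported = b'<chart xmlns:c="urn:example"><c:ser/><c:ser/></chart>'

# one cheap parse and assertion, no import->export->import round trip:
assert_xpath(exported, "c:ser", 2, {"c": "urn:example"})
```

this is cheap because it only parses the exported stream once, but as
noted above it proves nothing about whether the importer can read the
result back.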
More information about the LibreOffice mailing list