'make check' fails when building with gcov code coverage

Stephan Bergmann sbergman at redhat.com
Wed Jun 1 15:22:02 UTC 2022


On 01/06/2022 16:57, Maarten Hoes wrote:
> Ok, so let me see if I understood that correctly:
> 
> *if* lcov reports were to be generated automatically on a regular basis 
> again, then the desired behaviour would be: just let the build 
> fail/stop (whatever the reason: gcov-related, a spurious failure, a 
> developer mistake), do not generate an lcov report when it does fail, 
> and if it fails often enough within a certain timeframe, people will 
> investigate and determine whether the failing check should be fixed or 
> disabled. A new gcov/lcov report should only be generated if both the 
> build and 'make check' succeed.

Yes, that reflects my thoughts.
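
As a rough sketch, that gating policy could look like the following (a hypothetical driver function, not anything that exists in the LibreOffice tree; the actual build, check, and report commands would be whatever the Jenkins job uses):

```python
import subprocess

def run_with_coverage(build_cmd, check_cmd, report_cmd):
    """Run report_cmd only when build_cmd and check_cmd both succeed.

    A failing build or a failing 'make check' aborts the run without
    producing a coverage report, so a report only ever reflects a
    fully green run.
    """
    for cmd in (build_cmd, check_cmd):
        if subprocess.run(cmd).returncode != 0:
            return False  # stop here: no report is generated
    subprocess.run(report_cmd, check=True)
    return True
```

For example, a job might call it as `run_with_coverage(["make"], ["make", "check"], ["lcov", "--capture", ...])` (command names illustrative only).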

> Although I realize - as you explained - that there are occasional test 
> failures for all Jenkins builds, I am not sure that is what is going 
> on here specifically. To better determine whether we are dealing with 
> occasionally or structurally failing tests in the case of the 3 tests 
> mentioned (UITest_solver, CppunitTest_sccomp_solver, 
> CppunitTest_sccomp_swarmsolvertest), I did some more testing (fresh 
> git pull and build). When I do a gcov build and 'make check', these 3 
> fail for me reliably (and they succeed in a non-gcov 'regular' build). 
> All the other tests succeed, but these 3 keep failing; I ran them 
> multiple times in a row, and they fail on every run. So while I 
> realise that this is not 100% proof, I get the strong impression that 
> for these 3 specifically we are not talking about 'occasionally' 
> failing tests, but about structurally failing tests when run on a gcov 
> build, and that this needs to be looked into. Of course, the fact that 
> they fail reliably on every run I tried says nothing about the cause: 
> for example, it could still be timeout related, as I can imagine (but 
> did not test) that a gcov-enabled test takes longer to complete than 
> the same test on a 'regular' build. The reason I keep going on about 
> this is not that I am purposefully trying to be annoying, but that I 
> strongly think/feel (though I may be mistaken) that for people to be 
> interested in reviving automated lcov reports at all, the automation 
> first needs to work 'most of the time' (or at least as often as the 
> other Jenkins builds), and not fail on every single run right at the 
> start.

I agree with you that the status quo would not be useful (and I don't 
think you are annoying in any way).

I still strongly (but naively, without having dug into the code in any 
way) assume that those tests are not failing for you every time because 
they deterministically fail on gcov builds in general, but "just" 
because your gcov builds are sufficiently slow.

Ideally, the authors of those tests could rework them so that they don't 
depend on performance characteristics that certain builds cannot meet.
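
For illustration, one common way to remove such a dependency is to replace a fixed short timeout with polling against a generous deadline, so that a slow (e.g. gcov-instrumented) build simply takes more polls instead of tripping the limit. This is a generic sketch, not code from the tests in question:

```python
import time

def wait_for(predicate, deadline_s=30.0, poll_s=0.1):
    """Poll predicate() until it returns True or the deadline expires.

    On a slow instrumented build the condition just takes more polls
    to become true; only a genuinely stuck condition hits the deadline.
    """
    end = time.monotonic() + deadline_s
    while time.monotonic() < end:
        if predicate():
            return True
        time.sleep(poll_s)
    return predicate()  # one final check at the deadline
```

A test written this way asserts on the eventual state ("the solver dialog appeared") rather than on how quickly it appeared.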


