<div dir="ltr"><div>Hi,<br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, May 31, 2022 at 2:07 PM Stephan Bergmann <<a href="mailto:sbergman@redhat.com">sbergman@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">So the general advice would be to ignore occasional failed builds (which <br>
might not only fail due to spurious test failures, but also because e.g. <br>
a build breaker got submitted by accident). If some specific tests <br>
cause enough builds to fail to make that approach impractical, those <br>
tests should get fixed. Or, as a last resort, get disabled for <br>
known-failing build scenarios.<br>
<br>
I don't think -k would be a good solution, as it would make it harder to <br>
meaningfully interpret the generated data.<br></blockquote><div><br></div><div>Ok, so let me see if I understood that correctly:<br><br>*if* lcov reports were to be generated automatically on a regular basis again, the desired behaviour would be to simply let the build fail/stop (whatever the reason: something gcov-related, a spurious test failure, a developer mistake) and not generate an lcov report when it does fail. If builds fail often enough within a certain timeframe, people will investigate and determine whether the failing check should be fixed or disabled. A new gcov/lcov report should only be generated when the build and 'make check' both succeed.</div><div><br></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Occasionally failing tests are a well-known problem for LO (e.g., <br>
witness the "Jenkins / CI update: tests that failed more than twice in <br>
last seven days" section in the weekly ESC minutes---many of those are <br>
apparently spuriously failing tests), and there is no reason to assume <br>
that your gcov builds would not also occasionally be affected by that.<br></blockquote><div><br></div><div>Although I realize - as you explained - that there are occasional test failures for all Jenkins builds, I am not sure that is what is going on here specifically. To better determine whether we are dealing with occasionally or structurally failing tests for the three tests mentioned (UITest_solver, CppunitTest_sccomp_solver, CppunitTest_sccomp_swarmsolvertest), I did some more testing (fresh git pull and build). When I do a gcov build and 'make check', these three fail for me reliably, while they succeed on a non-gcov 'regular' build. All the other tests succeed, but these three keep failing on every run, multiple runs in a row.<br><br>While I realise this is not 100% proof, I get the strong impression that for these three specifically we are not talking about 'occasionally' failing tests, but about tests that fail structurally when run on a gcov build, and that this needs to be looked into. Of course, the fact that they fail reliably for me says nothing about the cause: it could still be timeout related, for example, as I can imagine (but did not test) that a gcov-enabled test takes longer to complete than one run on a 'regular' build.<br><br>The reason I keep going on about this is not that I am purposefully trying to be annoying, but that I strongly think (though I may be mistaken) that for people to be interested in reviving automated lcov reports at all, the automation first needs to work 'most of the time' (or at least as often as the other Jenkins builds do), and not fail on every single run right at the start.<br></div><div><br></div><div><br></div><div><br></div><div>- Maarten<br></div><div> </div></div></div>
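P.S. For what it's worth, the gating behaviour described above (only generate a report when the build and 'make check' both succeed) could be sketched roughly like this. The function names are stand-ins I made up for illustration, not the actual LO/Jenkins build commands:

```shell
#!/bin/sh
# Hypothetical sketch of the report gating (stand-in functions, not the
# real LibreOffice build infrastructure): an lcov report is generated only
# when both the build and 'make check' succeed; a failed run produces no
# report at all, so the generated data stays meaningful.

run_build() { true; }    # stand-in for the gcov-instrumented build
run_check() { false; }   # stand-in for 'make check' (simulated failure here)

if run_build && run_check; then
  report_generated=yes
  echo "generating lcov report"   # e.g. lcov --capture ...; genhtml ...
else
  report_generated=no
  echo "skipping lcov report: build or 'make check' failed"
fi
```

With the simulated 'make check' failure above, the run skips report generation entirely, which as I understand it is the behaviour you are describing (rather than using -k to push through failures).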