[Mesa-dev] Mesa CI is too slow
daniel at fooishbar.org
Mon Feb 18 21:36:05 UTC 2019
On Mon, 18 Feb 2019 at 18:58, Eric Engestrom <eric.engestrom at intel.com> wrote:
> On Monday, 2019-02-18 17:31:41 +0000, Daniel Stone wrote:
> > Two hours of end-to-end pipeline time is also obviously far too long.
> > Amongst other things, it practically precludes pre-merge CI: by the
> > time your build has finished, someone will have pushed to the tree, so
> > you need to start again. Even if we serialised it through a bot, that
> > would limit us to pushing 12 changesets per day, which seems too low.
> > I'm currently talking to two different hosts to try to get more
> > sponsored time for CI runners. Those are both on hold this week due to
> > travel / personal circumstances, but I'll hopefully find out more next
> > week. Eric E filed an issue
> > (https://gitlab.freedesktop.org/freedesktop/freedesktop/issues/120) to
> > enable ccache cache but I don't see myself having the time to do it
> > before next month.
> Just to chime in to this point, I also have an MR to enable ccache per
> runner, which with our static runners setup is not much worse than the
> shared cache:
> From my cursory testing, this should already cut the compilations by
> 80-90% :)
That's great! Is there any reason not to merge it?
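For anyone following along, the MR itself isn't quoted here, but a per-runner ccache setup in .gitlab-ci.yml looks roughly like the sketch below (job name, cache path, and size limit are illustrative, not taken from the actual MR):

```yaml
# Rough sketch of per-runner ccache in .gitlab-ci.yml.
# Assumes a static runner with a persistent /ccache directory
# mounted into the build container.
build-meson:
  variables:
    CCACHE_DIR: /ccache       # persistent across jobs on this runner
    CCACHE_MAXSIZE: 2G        # cap the cache size
  before_script:
    # Put the ccache compiler wrappers ahead of the real compilers.
    - export PATH="/usr/lib/ccache:$PATH"
  script:
    - meson build/
    - ninja -C build/
```

With the wrappers first in PATH, gcc/g++ invocations go through ccache transparently, so warm-cache rebuilds skip most compilation.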
> > Doing the above would reduce the run time fairly substantially, for
> > what I can tell is no loss in functional coverage, and bring the
> > parallelism to a mere 1.5x oversubscription of the whole
> > organisation's available job slots, from the current 2x.
> > Any thoughts?
> Your suggestions all sound good, although I can't speak for #1 and #2.
> #3 sounds good, I guess we can keep meson builds with the "oldest supported
> llvm" and the "current llvm version", and only the "oldest supported"
> for autotools?
We could have Meson building all the LLVM versions autotools does for
not much overhead at all. At the moment, though, Meson builds 3 and
autotools builds 6, which doesn't bring us increased code coverage.
> You've suggested reducing the amount that's built (ccache,
> dropping/merging jobs) and making it more parallel (fewer jobs), but
> there's another avenue to look at: run the CI less often.
> In my opinion, the CI should run on every single commit. Since that's
> not realistic, we need to decide what's essential.
> From most to least important:
> - master: everything that hits master needs to be build- and smoke-tested
> - stable branches: we obviously don't want to break stable branches
> - merge requests: the reason I wrote the CI was to automatically test MRs
> - personal work on forks: it would be really useful to test things
> before sending out an MR, especially with the less-used build systems
> that we often forget to update, but this should be opt-in, not opt-out
> as it is right now.
> Ideally, this means we add this to the .gitlab-ci.yml:
> - master
> - merge_requests
> - ci/*
> Until this morning, I thought `merge_requests` was an Enterprise Edition
> only feature, which is why I didn't put it in, but it appears I was wrong.
> (Thanks Caio for reading through the docs more carefully than I did! :)
> I'll send an MR in a bit with the above. This will mean that master and
> MRs get automatic CI, and pushes on forks don't (except the fork's
> master), but one can push a `ci/*` branch to their own fork to run the
> CI on it.
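For concreteness, the trigger policy Eric describes would look something like this in .gitlab-ci.yml (the job name is illustrative; the actual MR may structure it differently):

```yaml
# Hypothetical sketch: run pipelines only for master, merge requests,
# and opt-in ci/* branches pushed to forks.
build:
  only:
    - master
    - merge_requests     # pipelines for MRs (available in GitLab CE too)
    - /^ci\/.*$/         # opt-in: push a ci/* branch to your own fork
  script:
    - meson build/
    - ninja -C build/
```

Plain branch pushes on forks would then skip CI unless the branch name matches the `ci/*` pattern.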
> I think this should massively drop the use of the CI, but mostly remove
> unwanted uses :)
It depends on the definition of 'unwanted', of course ... I personally
like the idea of having a very early canary in the coalmine, and
building it into peoples' workflows as quickly as possible. If a more
sensible job split could reduce compilation time by 30-40%, and using
ccache could drop the compilation overhead by a huge amount as well,
that sounds like more than enough to not need to stop CI on personal
forks. Why don't we pursue those avenues first, rather than
restricting the audience?