[Mesa-dev] [Intel-gfx] gitlab.fd.o financial situation and impact on services
Jason Ekstrand
jason at jlekstrand.net
Mon Mar 2 04:53:25 UTC 2020
On Sun, Mar 1, 2020 at 2:49 PM Nicolas Dufresne <nicolas at ndufresne.ca> wrote:
>
> Hi Jason,
>
> I personally think the suggestions are still relatively good
> brainstorming data for those involved. Of course, for those not
> involved in the CI scripting itself, I'd say just keep in mind that
> nothing is black and white and every change ends up being time
> consuming.
Sorry. I didn't intend to stop a useful brainstorming session. I'm
just trying to say that CI is useful and we shouldn't hurt our
development flows just to save a little money unless we're truly
desperate. From what I understand, I don't think we're that desperate
yet. So I was mostly trying to re-focus the discussion towards
straightforward things we can do to get rid of pointless waste (there
probably is some pretty low-hanging fruit) and away from "OMG, X.org is
running out of money; do as little CI as possible". I don't think you're
saying those things, but I've sensed a good bit of fear in this
thread. (I could just be totally misreading people, but I don't think
so.)
One of the things that someone pointed out on this thread is that we
need data. Some has been provided here, but it's still a bit unclear
exactly what the breakdown is, so it's hard for people to come up with
good solutions beyond "just do less CI". We do know that the biggest
cost is egress web traffic and that's something we didn't know before.
My understanding is that people on the X.org board and/or Daniel are
working to get better data. I'm fairly hopeful that, once we
understand better what the costs are (or even with just the new data
we have), we can bring it down to a reasonable level and/or come up with
the money to pay for it in fairly short order.
Again, sorry I was so terse. I was just trying to slow the panic.
> On Sunday, March 1, 2020 at 14:18 -0600, Jason Ekstrand wrote:
> > I've seen a number of suggestions that will do one or both of those things, including:
> >
> > - Batching merge requests
>
> Agreed. Or at least I foresee quite complicated code to handle the case
> of one batched merge failing the tests, or worse, with flaky tests.
>
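Right, the usual shape of that code is a bisect over the batch: test
everything together, and on failure split the batch to isolate the
offending MR, with a bounded retry to absorb flakes. Here is a rough,
hypothetical Python sketch, not based on marge-bot or any existing
GitLab code; run_ci and merge are stand-ins:

    # Hypothetical sketch only -- not marge-bot code.  run_ci(mrs) is
    # assumed to run one pipeline with all of `mrs` applied on top of
    # the current target branch and return True on success; merge(mrs)
    # lands them.
    def land_batch(mrs, run_ci, merge, retries=1):
        """Land a batch of MRs, bisecting on failure to find the culprit."""
        if not mrs:
            return []
        for _ in range(retries + 1):      # retry once to absorb a flake
            if run_ci(mrs):
                merge(mrs)
                return []                 # whole batch landed
        if len(mrs) == 1:
            return mrs                    # genuinely broken (or too flaky)
        mid = len(mrs) // 2
        rejected = land_batch(mrs[:mid], run_ci, merge, retries)
        rejected += land_batch(mrs[mid:], run_ci, merge, retries)
        return rejected                   # MRs that could not be landed

Note that the failure path burns more pipeline time than testing the
MRs one by one would have, so batching only saves money if batches
usually pass, and flaky tests push you straight into the expensive
path.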
> > - Not running CI on the master branch
>
> A small clarification: this depends on the chosen workflow. In
> GStreamer, we use a rebase flow, so the "merge" button isn't really
> merging. It means that to merge, your branch needs to be rebased on
> top of the latest. As it is multi-repo, there is always a tiny chance
> of breakage due to a mid-air collision with changes in other repos.
> What we see is that the post-"merge" CI cannot even catch them all (as
> we have already observed once). In fact, it usually does not catch
> anything. And each time it did catch something, we only noticed on the
> next MR. So we are really considering doing this, as for this specific
> workflow/project we have found very little gain in having it.
>
> With a real merge, the code being tested before and after the merge is
> different, and for that case I agree with you.
Even with a rebase model, it's still potentially different, though
Marge re-runs CI before merging. I agree the risk is low, however,
and if you have GitLab set up to block MRs that don't pass CI, then
you may be able to drop the master branch to a daily run or something
like that. Again, this should be a project-by-project decision.
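Both pieces of that (requiring a green pipeline to merge, and replacing
the per-merge master pipeline with a daily scheduled one) are plain
GitLab settings. A rough sketch using the python-gitlab library, where
the token, project path and cron line are placeholders rather than a
recommendation for any particular project:

    # Sketch using python-gitlab; token, project path and schedule are
    # placeholders.
    import gitlab

    gl = gitlab.Gitlab("https://gitlab.freedesktop.org",
                       private_token="REDACTED")
    project = gl.projects.get("example/example-project")

    # Only allow MRs to merge once their pipeline has passed.
    project.only_allow_merge_if_pipeline_succeeds = True
    project.save()

    # Add a scheduled daily pipeline on master instead of one per merge.
    project.pipelineschedules.create({
        "description": "daily master pipeline",
        "ref": "master",
        "cron": "0 3 * * *",   # every day at 03:00
    })

The project's .gitlab-ci.yml would still need a rule to stop creating
pipelines on pushes to master, which is where it becomes
project-specific.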
> > - Shutting off CI
>
> Of course :-). Especially since we had CI in GStreamer before GitLab
> (just not pre-commit), we don't want to regress that far into the past.
>
> > - Preventing CI on other non-MR branches
>
> Another small nuance: Mesa does not prevent CI, it only makes it manual
> on non-MR branches. Users can go click "run" to get CI results. We
> could also have an option to trigger CI (the opposite of ci.skip) from
> the git command line.
Hence my use of "prevent". :-) It's very useful but, IMO, it should
be opt-in and not opt-out. I think we agree here. :-)
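For completeness, the "go click run" step doesn't have to be the web
UI; here's a hypothetical python-gitlab snippet that plays the manual
jobs of the latest pipeline on a branch (project path and branch name
are made up):

    # Hypothetical example of starting the manual-only CI jobs on a
    # non-MR branch from a script instead of the web UI.
    import gitlab

    gl = gitlab.Gitlab("https://gitlab.freedesktop.org",
                       private_token="REDACTED")
    project = gl.projects.get("example/example-project")

    # Latest pipeline on the branch; its jobs exist but sit in "manual".
    pipeline = project.pipelines.list(ref="my-wip-branch", per_page=1)[0]
    for job in pipeline.jobs.list(scope="manual"):
        # Pipeline job objects are read-only, so act through project.jobs.
        project.jobs.get(job.id, lazy=True).play()

Wrapping something like that in a git alias would get pretty close to
the "opposite of ci.skip" from the command line.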
--Jason