[Mesa-dev] Gitlab migration
daniel at fooishbar.org
Thu May 24 10:46:53 UTC 2018
I'm going to attempt to interleave a bunch of replies here.
On 23 May 2018 at 20:34, Jason Ekstrand <jason at jlekstrand.net> wrote:
> The freedesktop.org admins are trying to move as many projects and services
> as possible over to gitlab and somehow I got hoodwinked into spear-heading
> it for mesa. There are a number of reasons for this change. Some of those
> reasons have to do with the maintenance cost of our sprawling and aging
> infrastructure. Some of those reasons provide significant benefit to the
> project being migrated:
Thanks for starting the discussion! I appreciate the help.
To be clear, we _are_ migrating the hosting for all projects, as in,
the remote you push to will change. We've slowly staged this with a
few projects of various shapes and sizes, and are confident that it
more than holds up to the load. This is something we can pull the
trigger on roughly any time, and I'm happy to do it whenever. When
that happens, trying to push to ssh://git.fd.o will give you an error
message explaining how to update your SSH keys and how to change your
remotes.
cgit and anongit will not be orphaned: they remain as push mirrors, so
they are updated simultaneously with GitLab pushes, as are the GitHub
mirrors. Realistically, we can't deprecate anongit for a (very) long
time due to the millions of Yocto forks which have that URL embedded
in their build recipes. Running cgit alongside that is fairly
low-intervention. And hey, if we look at the logs in five years' time
and see 90% of people still using cgit to browse and not GitLab,
that's a pretty strong hint that we should put effort into keeping it.
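For anyone with an existing checkout, the switch itself is just a remote URL change. The sketch below uses a throwaway repository and a hypothetical mesa/mesa project path -- check the actual project page for the real URL once the move happens:

```shell
# Sketch: repointing an existing clone at GitLab once hosting moves.
# The mesa/mesa project path here is an assumption, not the confirmed URL.
cd /tmp && rm -rf remote-demo && git init -q remote-demo && cd remote-demo
git remote add origin git://anongit.freedesktop.org/mesa/mesa
git remote set-url origin ssh://git@gitlab.freedesktop.org/mesa/mesa.git
git remote get-url origin   # now reports the gitlab.freedesktop.org URL
```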
> * Project-led user/rights management. If we're on gitlab, project
> maintainers can give people access and we no longer have to wait for the
> freedesktop admins to add a new person. And the freedesktop admins don't
> have to take the time.
Hopefully this alone is worth the effort. ;)
On 23 May 2018 at 20:58, Kenneth Graunke <kenneth at whitecape.org> wrote:
> On Wednesday, May 23, 2018 12:34:11 PM PDT Jason Ekstrand wrote:
>> 1. Go to gitlab.freedesktop.org
>> 2. Click "Sign In / Register" in the upper left-hand corner
>> 3. You already have an account. Click "Forgot your password?", type
>> in your fd.o-associated e-mail, and click "Reset Password". Follow
>> the directions in the e-mail.
> To clarify...this is your actual email, not your alias. I tried
> kwg at freedesktop.org, and that didn't work, but kenneth at whitecape.org
> (the address my fdo alias forwards to) did work.
Correct. Once you've done this, you're also able to associate your
account with external identity providers (e.g. Google account), and
switch on 2FA (strongly recommended). Everyone with a git.fd.o account
has a GitLab account; if the password-reset mail never arrives, it's
likely because your email in the fd.o LDAP system is quite old. Some
people still have Yahoo addresses in there (!). If it isn't working
for you, please ping me via personal email or IRC and I can fix it up,
but note that I'm away Fri-Mon.
This process is only for grandfathered/sunset users: once the
repository is moved, there is no need for any new committers to ever
create a fd.o account: they can sign up on GitLab with no admin
intervention, and maintainers can then find them in the UI and grant
them permissions directly.
On 24 May 2018 at 01:49, Jordan Justen <jordan.l.justen at intel.com> wrote:
> On 2018-05-23 15:16:58, Rob Clark wrote:
>> And I'm not strongly tied to bugzilla vs gitlab's issue tracking.
> Another project I'm involved with had a contingent that swore
> github's "issues" were completely inadequate compared to bugzilla,
> which is to say that I don't think there is consensus on this point.
> For that (smaller) project, I thought github's issues were fine
> compared to bugzilla. For (the larger) Mesa project, I'm not so sure.
Hm, do you mean GitHub or GitLab? GitHub's issue tracking is
atrocious. A couple of years ago (around 8.x), GitLab's issues were
basically just a copy of GitHub's. They're now much more powerful,
expressive, and scalable. When I was involved in the GNOME
discussions, I started off advocating for Phabricator purely because
of more powerful issue tracking, but the work they've done on it since
completely changed my view. It's certainly not perfect, but it should
be on par with Bugzilla functionality-wise, and certainly less hostile
to use. If there are any feature gaps you can think of, I'd definitely be
interested to hear what they are. The main one I'm aware of at the
moment is being able to create mailing-list accounts, so we can
continue to get mails out to the list and allow people to reply via
mail, in a way everyone's happy with.
I should stress that we do _not_ have a plan right now to kill
Bugzilla or to force-migrate people off it. That being said, it is one
of the services I would very much like to kill. It's a pile of
CVE-ridden Perl which is painful to extend. It doesn't have inbuilt
spam detection/prevention, and when spam does inevitably happen,
resolving that is a lot of clicking and a bit of manual SQL bashing:
just dealing with that is horrendously time-consuming. It also doesn't
integrate with external services (yet another account to create, yet
another set of permissions for admins). Again, it won't be killed
unless and until everyone has decided that GitLab issues provide a
better workflow for their project and has happily migrated. But if
there's something we can do to help with that, I'd like to hear of it,
so I never have to write Perl or manually scrub MySQL of spam ever
again.
On 23 May 2018 at 20:58, Kenneth Graunke <kenneth at whitecape.org> wrote:
>> * [Optional] Built-in CI. With gitlab, we can provide a docker image and
>> CI tasks to run in it which can do things such as build the website, run
>> build-tests, etc. I'm not sure if build-testing Android is feasible but we
>> could at least build-test autotools, meson, scons, and maybe even run some
>> LLVMpipe tests.
> Why can this only be done with gitlab?
Good question. I think this falls into the category of 'folk knowledge
passed on through oral tradition and not really written down'. :\
We had a go at using Jenkins for some of this: Intel's been really
quite successful at doing it internally, but our community efforts
have been a miserable failure. After a few years I've concluded that
it's not going to change - even with Jenkins 2.0.
Firstly, Jenkins configuration is an absolute dumpster fire. Working
out how to configure it and create the right kind of jobs (and debug
it!) is surprisingly difficult, and involves a lot of clicking through
the web UI, or using external tools like jenkins-job-builder which
seem to be in varying levels of disrepair. If you have dedicated 'QA
people' whose job is driving Jenkins for you, then great! Jenkins will
probably work well for you. This doesn't scale to a community model,
though, especially when people have different use cases and need to
install different plugins.
Jenkins security is also a tyre fire. Plugins are again in varying
levels of disrepair, and seem remarkably prone to CVEs. There's no
real good model for updating plugins (and doing so is super fragile).
Worse still, Jenkins 2.0 really pushes you to be writing scripts in
Groovy, which can affect Jenkins in totally arbitrary ways, and
subvert the security model entirely. The way upstream deals with this
is to enforce a 'sandbox' model preventing most scripts from doing
anything useful unless manually audited and approved by an admin.
Again, this is fine for companies or small teams where you trust
people to not screw up, but doesn't scale to something like fd.o.
Adding to these is the permission model, which again requires painful
configuration and a lot of admin clicking. It doesn't integrate well
with external services, and granularity is mostly at an instance
rather than a project level: again, not suitable for something like
fd.o.
From the UI and workflow perspective, something I've never liked is
that the first-order view is of very specific pipelines, e.g. 'Mesa
master build', 'daily Piglit run', etc etc. If all you care about is
master, then this is fine. You _can_ make those pipelines run against
arbitrary branches and commits you pick up from MRs or similar, but
you really are trying to jam it sideways into the UI it wants to
present. Again this is so deeply baked into how Jenkins works that I
don't see it as really being fixable.
I have a pile of other gripes, like how difficult their remote API is
to use, and the horrible race conditions it has. For instance, when
you schedule a run of a particular job, it doesn't report the run ID
back to you: you have to poll the last job number before you submit,
then poll again for a few seconds to find the next run ID. Good luck
to you if two runs of the same job (e.g. 'build specific Mesa commit')
get scheduled at the same time.
GitLab CI fixes all of these things. Pipelines are strongly and
directly correlated with commits in repositories, though you can also
trigger them manually or on a schedule. Permissions are that of the
repository, and just like Travis, people can fork and work on CI
improvements in their own sandbox without impacting anything else. The
job configuration is in relatively clean YAML, and it strongly
suggests idiomatic form rather than a forest of thousands of
checkboxes.
Jobs get run in clean containers, rather than special unicorn workers
pre-configured just so, meaning that the builds are totally
reproducible locally and you can use whatever build dependencies you
want without having to bug the admins to install LLVM in some
particular chroot. Those containers can be stored in a registry
attached to the project, with their own lifetime/ownership/etc
tracking. Jenkins can use Docker if you have an external registry, but
again this requires setting up external authentication and
permissions, not to mention that there's no lifetime/ownership/expiry
tracking, so you have to write more special admin cronjob scripts to
clean up old images in the registry.
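As an illustration of the shape of that YAML (the job name, image, and package list below are made up for the example, not Mesa's actual configuration):

```yaml
# Hypothetical .gitlab-ci.yml fragment -- image and packages are
# illustrative assumptions, not Mesa's real CI setup.
meson-build:
  image: debian:stable
  before_script:
    - apt-get update -qq
    - apt-get install -y -qq meson ninja-build gcc pkg-config
  script:
    - meson setup build/
    - ninja -C build/
```

Because the file lives in the repository, anyone's fork can tweak it and see the result on their own pushes, with no admin involvement.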
It _is_ possible to bend Jenkins to your will - Mark's excellent and
super-helpful work with Intel's CI is testament to that - and in some
environments it's fine, but after a few years of trying, I just don't
think it's suitable to run on fd.o, and I also don't think it's a good
fit for what Mesa wants to be doing with CI as a community. (That was
much longer than expected, sorry: the wound is still raw, I guess.)
On 24 May 2018 at 07:18, Kenneth Graunke <kenneth at whitecape.org> wrote:
> On Wednesday, May 23, 2018 1:58:14 PM PDT Nicolai Hähnle wrote:
>> On 23.05.2018 21:34, Jason Ekstrand wrote:
>> > * [Optional] Merge-request workflow. With the rise of github, there
>> > are many developers out there who are used to the merge-request workflow
>> > and switching to that may lower the barrier to entry for new contributors.
>> I admit that it's been a while since I checked, but the web-based merge
>> workflows of GitHub and GitLab were (and probably still are) atrocious,
>> so please don't.
>> The tl;dr is that they nudge people towards not cleaning up their commit
>> history and/or squashing everything on the final commit, and that's just
>> a fundamentally bad idea.
>> The one web-based review interface I know of which gets this right is
>> Gerrit, since it emphasizes commits over merges and has pretty good
>> support for commit series.
> One really nice thing is that it actually has a view of outstanding
> patch series, that's properly tied to things getting committed, unlike
> patchwork which is only useful if people bother to curate their series'
> status. I'm struggling to keep up with mesa-dev these days, despite it
> being my day job. Having a page with the series outstanding might make
> life easier for reviewers, and also make it easier for series not to get
> lost and fall through the cracks...
> Mechanically, it also had pretty reasonable support for multiple patch
> series, updating a previous one automatically (IIRC).
> One thing I hated about using Gitlab for the CTS is that every series
> created merges, littering the history...and worse, people got in the
> habit of only explaining their work in the pull request, which became
> the merge commit message. People didn't bother giving individual
> commits meaningful explanations. That made 'git blame' harder, as
> you had to blame then look for the merge...makes bisects messier too...
Oh, for sure. Personally though, I think it's the same: people can and
do use git send-email to send patch series which don't compile until
the final patch, which have fixups embedded in random other patches
down the line, etc. When those come in, the submitter gets asked to
revise, and it should absolutely be the same regardless of whether
it's a MR or a mail patch series. As Tapani mentioned, you don't have
to use the web UI to land either: it's still a Git repository! If you
prefer, you can pull the branch directly to review (IMO this is
already miles better than git-pw/git-am), make whatever changes or
rebase however you want, then just use 'git push'.
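Mechanically, that works because GitLab publishes each merge request's head as refs/merge-requests/&lt;iid&gt;/head, which you can fetch like any other ref. The sketch below simulates the workflow end-to-end with a local bare repo standing in for the GitLab remote; the MR number 1 is made up:

```shell
# Self-contained sketch of pulling an MR branch for local review.
# A local bare repo stands in for gitlab.freedesktop.org; GitLab really
# does expose MR heads as refs/merge-requests/<iid>/head.
set -e
rm -rf /tmp/mr-demo && mkdir /tmp/mr-demo && cd /tmp/mr-demo
git init -q --bare origin.git
git init -q work && cd work
git config user.email reviewer@example.com && git config user.name Reviewer
git remote add origin /tmp/mr-demo/origin.git
git commit -q --allow-empty -m "initial commit"
git push -q origin HEAD:refs/heads/master
git commit -q --allow-empty -m "contributor change"
git push -q origin HEAD:refs/merge-requests/1/head  # what GitLab does for MR !1
git reset -q --hard HEAD~1                          # back to pre-MR state
# Reviewer side: fetch the MR head into a local branch, then review,
# rebase, and push it wherever you like -- it's still just Git.
git fetch -q origin refs/merge-requests/1/head:review/mr-1
git log --oneline -1 review/mr-1
```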
One benefit you get from using MRs is that you can use CI (as above)
to do pre-commit tests. Those tests are what we make of them - it's
trivial to set up various build tests, though doing actual run tests
is much more difficult - but having it run automatically is nice. The
Intel kernel team have snowpatch and Jenkins set up to do this, which
is impressive, but again I don't think it's something we can really
run generally on fd.o. OTOH, GitLab CI will run the full battery of
tests on MRs, show you the logs, let you download any generated
artifacts, etc. It's pretty slick, and in fact not even limited to
MRs: it will just run it on whatever branch you push. So you can
replicate what currently happens with Intel CI by pushing a branch
before you send out patches and checking the CI pipeline status for
that branch: in fact slightly easier since you can actually directly
access the instance rather than only getting what's mailed to you.
> One of the motivations for doing this now is that there has been some desire
> to move the mesa website away from raw HTML and over to a platform such as
> sphinx. If we're going to do that, we need a system for building the
> website whenever someone pushes to mesa. The solution that the fd.o admins
> would like us to use for that is the docker-based gitlab CI. Laura has been
> working on this the last couple of weeks and the results are pretty nice
> looking so far. You can check out a preview here:
> https://mesa-test.freedesktop.org/intro.html Using sphinx gives us all
> sorts of neat things like nice text formatting, syntax highlighting, and
> autogenerated searchable navigation. Right now, it's still using one of the
> standard sphinx themes so it looks a bit "default" but that's something we
> can change.
Laura was able to work on having the Mesa website generated in Sphinx
without having to ask me to set up a Python virtual environment, or
configure Jenkins jobs, or whatever. We'd discussed doing this for
Mesa a long time ago, but never did until now, largely for the CI
reasons rambled about above. That it just works with no intervention
is a pretty ringing endorsement if you ask me.
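For reference, publishing a Sphinx-built site from CI is a small job on its own; GitLab Pages specifically picks up a job named 'pages' that produces a public/ artifact. The docs/ path and image below are assumptions, not the actual Mesa setup:

```yaml
# Hypothetical Pages job for a Sphinx site; paths are assumptions.
pages:
  image: python:3
  script:
    - pip install sphinx
    - sphinx-build -b html docs/ public/
  artifacts:
    paths:
      - public
```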
Anyway, any changes to the Mesa workflow are a matter for the whole
Mesa community. We're not going to kill the list and force you off the
services you use today, apart from changing the push remote. Doing
that does open a whole lot of possibilities that you _can_ take
advantage of, if it's a good fit for the project. Some of those are
theoretically possible with other tools, but the above is why, with so
little fd.o admin time and such a wide slate of hosted projects, we
haven't been able to offer them reliably with those tools.
I'm more than happy to discuss this or anything related, either here
or elsewhere.