Flathub system setup

Alexander Larsson alexl at redhat.com
Wed May 10 18:47:56 UTC 2017


A couple of weeks ago I set up the basic flathub system, and I thought
I would write down some details of how it works, both to let people
know how it is put together, and to start a discussion on how we can
take it from here to a more maintained system.

So, for the basics, Scaleway donated some credits towards flathub,
which let me set up a few machines. These are all Intel machines,
based on the Scaleway CentOS 7 image, with flatpak installed from the
EPEL repo[1].

Based on this I have set up the following machines:

flathub-master: VC1S
  Runs an instance of buildbot, installed via virtualenv.
  Configuration is at: https://github.com/flathub/buildbot-config

flathub-builder-1 & flathub-builder-2: VC1L
  Each runs an instance of buildbot-worker, installed
  via virtualenv. These are the primary build
  machines for Intel builds, one for i386 and
  one for x86-64.

flathub-repo: C2S with 450 GB disk
  This is the primary storage for the flathub repo.
  Runs an nginx server to host the repo, although this
  is not the primary way the repo is accessed.
  Also runs a buildbot-worker, installed via
  virtualenv.

flathub-front: VC1L machine with 200G disk.
  Main server of flathub.org.
  Runs nginx configured as a caching proxy for both the repo and
  the buildbot web service, as well as serving some static pages
  for flathub.org.

Additionally we have two aarch64 machines at Codethink that
Tristan set up, which run the buildbot-worker for the ARM 32-bit
and 64-bit builds.

Each application is stored in a repository in the flathub organization
on github.com, which is set up to send a JSON webhook request to the
buildbot on flathub-master whenever there is a new commit.
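
For reference, the buildbot side of that webhook is just the standard
GitHub change hook. A minimal sketch of what that looks like in
master.cfg (the real configuration is in the buildbot-config repository
linked above; the port and secret here are made up):

  # master.cfg sketch: accept GitHub webhook POSTs on /change_hook/github
  c = BuildmasterConfig = {}
  c['www'] = dict(
      port=8010,
      change_hook_dialects={
          'github': {
              'secret': 'not-the-real-secret',  # must match the GitHub webhook
              'strict': True,
          },
      },
  )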

When buildbot sees a new commit on the master branch it queues a new
build of the app. The build master then queues build jobs for workers
on all arches, which check out and build the app, run various checks
and then tar up the resulting ostree repo and upload it to the master.
When the master has results from all arches it queues a new job for the
worker on the flathub-repo machine.
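
Roughly speaking, that fan-out can be expressed as a single scheduler
feeding one builder per arch, plus a dependent scheduler for the
publish step. The sketch below is just an illustration (the actual
setup lives in buildbot-config; the builder and scheduler names here
are invented):

  # continuing the master.cfg sketch from above
  from buildbot.plugins import schedulers, util

  arches = ['x86_64', 'i386', 'arm', 'aarch64']

  # Fan a push to master out to one builder per arch
  build_sched = schedulers.SingleBranchScheduler(
      name='app-build',
      change_filter=util.ChangeFilter(branch='master'),
      builderNames=['build-%s' % arch for arch in arches])

  # Run the publish job on flathub-repo once all arch builds are done
  publish_sched = schedulers.Dependent(
      name='publish',
      upstream=build_sched,
      builderNames=['publish'])

  c['schedulers'] = [build_sched, publish_sched]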

The publish job downloads all the build results, untars them into a
single repo, signs the commits, rsyncs the new apps into the real
repository and runs build-update-repo on that, which will generate
static delta files, as well as update and sign the appstream and
summary files.
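
In buildbot terms the publish job is just another build factory running
shell steps. A rough sketch of the commands involved might look like
this (the paths, key id, app id and the assemble script are
placeholders, not the real setup):

  from buildbot.plugins import steps, util

  publish = util.BuildFactory()
  # Fetch and unpack the per-arch build results (placeholder script name)
  publish.addStep(steps.ShellCommand(
      name='assemble',
      command=['./assemble-results.sh', '/srv/build-repo']))
  # Sign the commits in the assembled repo
  publish.addStep(steps.ShellCommand(
      name='sign',
      command=['flatpak', 'build-sign', '--gpg-sign=KEYID',
               '/srv/build-repo', 'org.example.App']))
  # Copy the new objects into the real repository
  publish.addStep(steps.ShellCommand(
      name='rsync',
      command=['rsync', '-a', '/srv/build-repo/', '/srv/repo/']))
  # Regenerate static deltas, re-sign the appstream and summary files
  publish.addStep(steps.ShellCommand(
      name='update-repo',
      command=['flatpak', 'build-update-repo', '--generate-static-deltas',
               '--gpg-sign=KEYID', '/srv/repo']))

  # Tie the factory to the worker running on the flathub-repo machine
  c.setdefault('builders', []).append(util.BuilderConfig(
      name='publish', workernames=['flathub-repo'], factory=publish))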

The GPG key for the signatures is stored encrypted on the disk of the
flathub-repo machine, but I have configured a gpg-agent instance with
an essentially infinite cache lifetime, so the key sits unencrypted in
RAM and we can auto-sign the repo. This is not ideal, but it's the best
I could do without manual signing or dedicated hardware.
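
For the curious, "essentially infinite" just means cranking the agent's
cache TTLs way up, something along these lines in gpg-agent.conf (the
exact values used on flathub-repo may differ):

  # ~/.gnupg/gpg-agent.conf (values are illustrative, roughly one year)
  default-cache-ttl 31536000
  max-cache-ttl 31536000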

All the machines except the ARM builders are inside the same network,
so ideally I would like them to not have public IPs. However, the way
Scaleway works means that if I do this they can't talk to the outside
network, which we need. I think this is solvable by setting up NAT on
the flathub-front machine, but I have not really looked into it.

Other than that, this whole thing seems to be working pretty nicely,
but it is really not anything more than a bunch of manually set-up
machines, which seems quaint in this day and age. I am no sysadmin
though, so it would be nice if someone who knows something about this
could take it over from here...

[1] https://copr.fedorainfracloud.org/coprs/amigadave/flatpak-epel7/



