[Mesa-dev] [PATCH v2 0/5] Use GitLab CI to build Mesa
Clayton Craft
clayton.a.craft at intel.com
Tue Sep 4 21:43:18 UTC 2018
Quoting Juan A. Suarez Romero (2018-08-29 03:12:33)
> Hello.
>
> This is a first part, version 2, of a more complete proposal to use GitLab CI to
> build and test Mesa. This first part just adds the required pieces to build
> Mesa, using the different supported tools (meson, autotools, and scons).
>
> A second part, to be sent in the future, will use the results of the former to
> run different tests and verify everything works fine.
>
> An example of the pipeline that will result from this patchset can be seen at
> https://gitlab.freedesktop.org/jasuarez/mesa/pipelines/3070.
>
> I hope I can explain here all the rationale behind this proposal. Any questions
> are welcome.
>
>
> In this V2 series, we have:
>
> - Addressed all comments in V1, specifically:
>
> - Reviewed the dependencies installed in the base images
> - libdrm is now installed in the base image using Meson
> - Made some minor changes to the base image system to speed up installation
> - Fixed a small error in the inspect-images script
>
> - Added support for scheduled builds, so base images can be rebuilt every now
> and then to pick up any fixes/updates in the dependencies.
>
> - Use different base/llvm images per branch. So far we only had a single
> base/llvm image, which was used for all Mesa builds. But this can create
> conflicts if, for instance, staging/XX.Y has different dependencies than master:
> the base images should be different. To avoid this, each branch uses its own
> base/llvm images (see the sketch below). This is not a big problem from a space
> point of view in the registry: if the images are the same for two branches, they
> share the same Docker layers.
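To make the last two points concrete, GitLab's predefined variables and pipeline
schedules can express both. A rough, illustrative fragment (the variable and job
names below are mine, not necessarily what the patches use):

```yaml
# .gitlab-ci.yml fragment -- illustrative only; names are made up
variables:
  # One base image per branch/tag, e.g. .../base:master or .../base:staging-18-2
  BASE_IMAGE: "$CI_REGISTRY_IMAGE/base:$CI_COMMIT_REF_SLUG"

rebuild-base:
  script:
    - echo "rebuild $BASE_IMAGE here to pick up updated dependencies"
  only:
    - schedules   # only run from a scheduled pipeline (e.g. nightly/weekly)
```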
>
>
> ## Brief introduction
>
> This proposal is based on a setup we are using at Igalia for the different
> releases we were in charge of. The idea is to build different Docker images
> containing Mesa with different setups; thus we build an image using Meson,
> another using Autotools, and so on. Moreover, as the Gallium drivers depend on
> the LLVM version, we also use different setups with different LLVM versions.
>
> Some of those images are later used to run different test suites on different
> hardware, so we save those Docker images to be pulled in a later step of the
> pipeline. The ones we are not using for testing are simply discarded; at least
> they helped to verify that the setup builds correctly.
>
> A very complete example of this setup pipeline can be seen at
> https://gitlab.com/igalia/graphics/mesa/pipelines/26539728
>
> As said before, right now we are just focusing on building the images, rather
> than testing them. A second part, to be sent later, will focus on the testing.
>
>
> ## Why Docker
>
> When we started the initial setup, one thing was clear: build everything once,
> and test on as many devices as possible. The reasons are varied: we have
> powerful machines for building, but for testing we have different hosts with
> different capabilities; some low-end NUCs, some with little memory, and so on.
> We also wanted to ensure that discrepancies during testing were not due to
> different software configurations (different distros, different versions,
> different installed packages, ...): use the same software as much as possible.
> And we have another requirement: an easy way to distribute the result to the
> testing machines.
>
> Containers really help to achieve this, and among container tools Docker is
> the de-facto standard, so we started to use Docker. With Docker, we can build
> self-contained images with everything we need to run Mesa, and distribute them
> very easily. And the fact that images are built from layers makes it quite easy
> to reuse previous results in future builds.
>
>
> ## Why Rocker
>
> As said before, Docker is a great tool, but it has a couple of drawbacks when
> building the images.
>
> On one side, the Dockerfile syntax is quite rigid, and does not allow for
> complex configuration. For instance, it's quite difficult to write a Dockerfile
> where the choice of Meson or Autotools to build Mesa can be specified as a
> parameter when building the image. Either you create an unreadable Dockerfile,
> or you end up creating different Dockerfiles, even when most of them share a
> lot of common code. And if you start to add more options, like the LLVM
> version, it becomes unmanageable.
>
> One way to solve this problem is to create a Dockerfile template, using Jinja
> or any similar tool, and then generate the final Dockerfile with the proper
> options.
Using docker-compose, you could create a Dockerfile that uses an env variable
to specify the build system, then one (or more) 'services' in the compose file
that run the same Dockerfile with the env var set to your desired build system.
You'd need to add some logic to handle the env var; this could be a script
installed into the container that calls meson or autotools.
Something like this might allow you to reuse the Dockerfile without any further
modifications to support one or more build systems.
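For illustration, a compose file along these lines could work; everything here
(service names, the MESA_BUILD_SYSTEM variable, the build_mesa.sh dispatch
script) is hypothetical:

```yaml
# docker-compose.yml -- hypothetical sketch, not taken from this series
version: "3.4"
services:
  mesa-meson:
    build:
      context: .
      dockerfile: Dockerfile         # one shared Dockerfile for every variant
      args:
        MESA_BUILD_SYSTEM: meson     # exposed as ARG (and copied to ENV) in the Dockerfile
  mesa-autotools:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        MESA_BUILD_SYSTEM: autotools
```

Inside the image, a small build_mesa.sh could then branch on $MESA_BUILD_SYSTEM
and invoke meson or autotools accordingly.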
>
> The other problem with Docker, harder to solve, is caching. We are using
> ccache to speed up the build time, but the problem is that Docker does not
> allow mounting external directories when building an image. And
Docker-compose allows bind mounts in containers:
https://docs.docker.com/compose/compose-file/#volumes
> thus, we can't reuse the cache generated in previous builds. There are some
> workarounds, like creating different images, or building the image using `docker
> run` (which allows mounting those directories) plus `docker commit` to generate
> the final image. But all of them lead to Dockerfiles that are quite difficult to
> maintain, so the usual solution is to just not use a cache, with the impact this
> has on build times.
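Building on the bind-mount point above: if the build runs inside a container
(i.e. the `docker run` + `docker commit` route) rather than inside `docker
build`, compose can bind-mount the host ccache directory. A hypothetical
fragment, with example paths:

```yaml
# docker-compose.yml fragment -- hypothetical; paths are examples only
services:
  mesa-build:
    build: .
    environment:
      CCACHE_DIR: /ccache      # tell ccache where to keep its cache inside the container
    volumes:
      - ./ccache:/ccache       # persist the cache on the host between runs
```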
>
> In our case, we solved both problems using Rocker instead of Docker. Rocker is
> like `docker build` on steroids, and fixes the above problems. It is a tool
> that creates Docker images from Rockerfiles, which are like Dockerfiles but
> with an enriched syntax that provides the same power as a template. And, more
> importantly, it allows mounting external directories when building the image;
> we use this feature to mount the ccache results directory.
>
> Unfortunately, Rocker is not maintained anymore
> (https://github.com/grammarly/rocker). We still use it because it provides
> everything we need, and fortunately we don't need any additional features.
>
> Maybe there are other alternatives out there, but we were happy with this one,
> hence our proposal. If we want to use Docker rather than Rocker, then we could
> use a template tool like Jinja and forgo caching during the build, which will
> impact the build time.
>
>
> ## Involved stages
>
> The dependencies required to build Mesa don't change very frequently, so
> building them every time is a waste of time. As Docker allows creating images
> based on the content of other images, we have split the setup into several
> stages.
>
> In the first stage a "base" image is built. This image contains almost all the
> dependencies required to build Mesa. It is worth mentioning that libdrm is
> excluded here, as it is a dependency that changes quite frequently, so we
> postpone its installation to a later stage.
>
> Once we have the "base" image, we create different images with the different
> LLVM compilers. This ensures that when using a specific image we only have that
> LLVM version, and no other.
>
> An important point here is that, although these builds appear in the pipeline,
> the images are not actually rebuilt unless required. That is, in the case of
> the base image, if the Rockerfile used to create the image has changed with
> respect to the one used to create the image already in the registry, then the
> image is rebuilt (as this means something changed, very likely some
> dependency). But if the Rockerfile didn't change, there is no need to rebuild
> the image, and we just keep using the one already in the registry. The same is
> done for the LLVM images. This helps to improve speed, as most of the time the
> images don't need to be built again.
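For illustration, one way such a "rebuild only if the Rockerfile changed" check
can be wired up (the job, image and label names below are made up; the series
has its own inspect-remote-image.sh for this) is to record the Rockerfile
checksum as an image label and compare it against the copy in the registry:

```yaml
# .gitlab-ci.yml fragment -- illustrative only
base-image:
  script:
    - |
      NEW_SHA=$(sha256sum gitlab-ci/Rockerfile.base | cut -d' ' -f1)
      # Fetch the image currently in the registry, if any, and read the
      # checksum it was labelled with when it was last built.
      docker pull "$CI_REGISTRY_IMAGE/base:latest" || true
      OLD_SHA=$(docker inspect \
          --format '{{ index .Config.Labels "rockerfile.sha256" }}' \
          "$CI_REGISTRY_IMAGE/base:latest" 2>/dev/null || true)
      if [ "$NEW_SHA" != "$OLD_SHA" ]; then
        echo "Rockerfile changed, rebuilding base image"
        # (re)build the image here (rocker/docker), label it with
        # rockerfile.sha256=$NEW_SHA, and push it to the registry.
      fi
```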
>
> The third stage is the one that builds Mesa itself. Here we just define which
> tool to use and which LLVM version, by passing the right parameters to the
> `rocker build` tool. It will pick the right base image, install the missing
> dependencies (mainly libdrm), select which drivers should be built (based on
> the LLVM version and on parsing the configure.ac file), and create the image.
>
> As the LLVM version only affects the Gallium drivers, and in order to reduce
> build time, we added a fake tool "gallium" that builds only the Gallium
> drivers for the given LLVM version.
>
> Another point worth mentioning is the `distcheck` tool. As its name suggests,
> it runs `make distcheck` to build Mesa, which basically creates a dist tarball,
> uncompresses it, and builds it with autotools. The downside is that it only
> builds the dist tarball with autotools; it does not check it with the other
> tools (scons and meson). To fix this, the `distcheck` job exports the dist
> tarball as an artifact, which the "Tarball" stage in the pipeline (4th stage)
> then uncompresses and builds with those tools.
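Passing the tarball between stages maps naturally onto GitLab CI artifacts. A
rough sketch (the job names, stage names and paths here are illustrative, not
necessarily what the patches use):

```yaml
# .gitlab-ci.yml fragment -- illustrative only
stages:
  - build
  - tarball

distcheck:
  stage: build
  script:
    - ./autogen.sh
    - make distcheck
  artifacts:
    paths:
      - mesa-*.tar.xz        # dist tarball produced by `make distcheck`
    expire_in: 1 week

tarball-meson:
  stage: tarball
  # Artifacts from jobs in earlier stages are downloaded automatically,
  # so the tarball is available in the working directory.
  script:
    - tar xf mesa-*.tar.xz
    - cd mesa-*/ && meson build/ && ninja -C build/
```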
>
> Finally, at this moment we only save one of the images created, the one
> corresponding to the `autotools` jobs, as it has all the drivers built and can
> be downloaded and used for testing. The other images are just used to verify
> that everything builds correctly, and are discarded afterwards.
>
>
> CC: Andres Gomez <agomez at igalia.com>
> CC: Daniel Stone <daniels at collabora.com>
> CC: Dylan Baker <dylan.c.baker at intel.com>
> CC: Emil Velikov <emil.velikov at collabora.com>
> CC: Eric Anholt <eric at anholt.net>
> CC: Eric Engestrom <eric.engestrom at intel.com>
>
>
> Juan A. Suarez Romero (5):
> gitlab-ci: build Mesa using GitLab CI
> gitlab-ci: build base images if dependencies changed
> gitlab-ci: Build from the released tarballs
> gitlab-ci: update base + llvm images with scheduled pipelines
> gitlab-ci: use branch/tag in base image names
>
> .gitlab-ci.yml | 210 ++++++++++++++++++++++++++++++
> gitlab-ci/Rockerfile.base | 189 +++++++++++++++++++++++++++
> gitlab-ci/Rockerfile.llvm | 62 +++++++++
> gitlab-ci/Rockerfile.mesa | 133 +++++++++++++++++++
> gitlab-ci/inspect-remote-image.sh | 21 +++
> 5 files changed, 615 insertions(+)
> create mode 100644 .gitlab-ci.yml
> create mode 100644 gitlab-ci/Rockerfile.base
> create mode 100644 gitlab-ci/Rockerfile.llvm
> create mode 100644 gitlab-ci/Rockerfile.mesa
> create mode 100755 gitlab-ci/inspect-remote-image.sh
>
> --
> 2.17.1
>
> _______________________________________________
> mesa-dev mailing list
> mesa-dev at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/mesa-dev