[Mesa-dev] [PATCH v2 1/5] gitlab-ci: build Mesa using GitLab CI
Emil Velikov
emil.l.velikov at gmail.com
Thu Aug 30 13:27:57 UTC 2018
Hi Juan,
Something I should have mentioned earlier:
Pardon if my questions are a bit rough - my rocker/docker experience is limited.
On 30 August 2018 at 11:22, Juan A. Suarez Romero <jasuarez at igalia.com> wrote:
> On Wed, 2018-08-29 at 14:11 +0100, Emil Velikov wrote:
>> Hi Juan,
>>
>> I've shared a number of suggestions. I'll leave that to you if they
>> will be in v3 or patches on top.
>>
>> On 29 August 2018 at 11:12, Juan A. Suarez Romero <jasuarez at igalia.com> wrote:
>>
>> > In order to build the images, Rocker is used. This is a tool that
>> > extends the Dockerfiles with new features that are quite interesting
>> > here. The main features we use are the support for templating, and the
>> > support for mounting external directories during the image build.
>> > This helps with using tools like ccache to improve the build speed.
>> >
>>
>> I think that gitlab-ci supports templating - not sure about mounting
>> external directories.
>
>
> We need the templating in the Dockerfile, not in the gitlab-ci itself. Same for
> mounting external directories: we need the mounting (not a hard requirement, but
> it speeds up the build step) to happen inside the docker build process.
>
Simply put, one could see .gitlab-ci.yml as a form of dockerfile,
since gitlab-ci uses Docker.
The former is capable of storing artefacts, although I'd imagine the extra
docker/rocker layer here is used solely to share and reuse the artefacts and
dependencies.
Am I in the right ballpark?
I'm asking all these questions since the double-docker setup does seem a bit strange.
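For my own education, I take it the Rocker extensions in question boil
down to something like this - a rough sketch on my side, the base image
and package names are made up:

    FROM debian:stretch
    # Go-style template variable, filled in when invoking rocker build
    RUN apt-get update && apt-get -y install llvm-{{ .LLVM }}-dev
    # host directory available only while building, not stored in the image
    MOUNT .:/context
    RUN cd /context && ./autogen.sh && make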
>> > +before_script:
>> > + - mkdir -p ccache
>> > + - rm -fr ../ccache
>> > + - mv ccache ../
>>
>> nit: how does this look
>> rm -rf ../ccache
>> mkdir -p ../ccache
>>
>
> It wouldn't work.
>
Hmm, something sounds badly broken.
I would imagine you (or perhaps I) misread some of the .. in the commands.
>> Although why do we .. in the first place? Mind adding a comment?
>
>
> Yup. First of all, we cache the ccache directory in GitLab, so we save time when
> re-building a Docker image.
>
> GitLab only caches directories that live under the path where the git clone/fetch
> is done. That is, it restores/saves directories that are in the same place as
> the full Mesa code.
>
> But we don't want to have the ccache there, because when building the image we
> add all the cloned content, and that would put the full ccache inside the image.
> We don't want that; we just want to mount it externally.
>
AKA keeping the docker images small, while sharing the ccache. Makes sense.
> Thus, we move the restored ccache to a different place at the beginning (the "mv
> ccache ../" in the before_script) and move it back at the end so GitLab can store
> the new cache (the "mv ../ccache ./" in the after_script).
>
> As the very first time there is no ccache to restore, we ensure there's always a
> ccache directory (hence the mkdir call). The "rm -fr ../ccache" is to ensure
> there's nothing there before doing the move. I think this is a safe measure, as
> usually there shouldn't be anything there.
>
I think that a cancelled job may leave a ../ccache around.
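So if I follow correctly, the relevant .gitlab-ci.yml bits amount to
roughly the following - just a sketch of my understanding, not the
actual patch:

    cache:
      paths:
        - ccache          # GitLab only caches paths under the clone directory

    before_script:
      # park the restored cache outside the clone, so the docker build
      # context (and thus the image) does not include the whole ccache
      - mkdir -p ccache
      - rm -fr ../ccache  # e.g. left over from a cancelled job
      - mv ccache ../

    after_script:
      # move it back so GitLab can save it as the new cache
      - mv ../ccache ./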
>>
>> I'm fairly sure we can drop anything older than 3.9 through the series.
>> Jose was pretty clear that they're aiming/moved to llvm 5.0
>>
>
> I'm fine with this. We test so many versions because those are the minimum
> required versions according to configure.ac.
>
> If we think it is not needed to check the older versions, we can remove them. On
> the other hand, we could keep them when testing release branches, and remove them
> when testing master.
>
Good call. Thinking about it - keeping the LLVM bits in your patches
as-is and tweaking the version (in configure/travis/gitlab-ci/etc)
would be the better approach.
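Though if we do end up keeping the old versions only on the release
branches, gitlab-ci can gate jobs per ref - a rough sketch, job name and
branch pattern made up, other keys omitted:

    gallium-llvm-3.9:
      variables:
        LLVM: "3.9"
      only:
        - /^18\..*$/    # hypothetical pattern for the release branches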
>
>>
>> > +RUN apt-get update \
>> > + && apt-get --no-install-recommends -y install autoconf automake gcc g++ libtool-bin \
>> > + pkg-config gettext ccache make scons bison flex sudo git wget bzip2 xz-utils \
>> > + libclc-dev libelf-dev libexpat1-dev libffi-dev libomxil-bellagio-dev \
>> > + libpciaccess-dev libx11-xcb-dev libxdamage-dev libxml2-dev libxrender-dev \
>> > + libxvmc-dev libunwind-dev zlib1g-dev python-pip python-setuptools python-wheel \
>> > + python3-pip python3-setuptools python3-wheel \
>> > + && rm -fr /var/lib/apt/lists/*
>>
>> Why do we need the rm after apt-get?
>>
>>
>
> This is suggested by Docker as a good pattern for installing packages in Docker
> images. We update, install the packages, and remove the files generated by the
> update all in a single command, so we keep the Docker image size reduced to the
> minimum.
>
Fair enough. I was thinking about the duplicated "sync the local package DB",
which is admittedly nothing compared to keeping the images small.
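For the archive: each RUN produces a layer, so the cleanup only helps if
it happens in the same RUN - a quick illustration of my own, not from
the patch:

    # the package lists are baked into the first layer; the later rm
    # cannot shrink the image
    RUN apt-get update
    RUN apt-get -y install gcc && rm -fr /var/lib/apt/lists/*

    # update + install + cleanup in a single layer keeps the image small
    RUN apt-get update \
        && apt-get -y install gcc \
        && rm -fr /var/lib/apt/lists/*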
>> > +USER local
>> > +
>>
>> A user with name "local" sounds a bit strange. Is that the recommendation?
>
> We just picked "local" as username, but we're fine to use a different name (like
> "mesa" or "user").
>
Personally I would use base-builder/llvm-builder/mesa-builder for the
respective files.
It tends to make things a bit more obvious ... but that's just me ;-)
> Usually in Docker everything is done as "root". But in order to reduce risks,
> and to be more realistic (developers never build Mesa as root, or at least
> shouldn't!), we build everything as non-root.
>
Agreed, thank you for that.
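Presumably the setup amounts to something like the following - a sketch,
username and uid made up:

    RUN useradd -m -u 1000 mesa-builder \
        && echo "mesa-builder ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/mesa-builder
    USER mesa-builder
    WORKDIR /home/mesa-builder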
>> Thinking out loud:
>> There should be a way to make this a trivial function and feed it the
>> static data.
>> ...
>> wget foo
>> tar filename(foo)
>> cd basename(foo)
>> ...
>>
>
> We couldn't figure out how to do it. Maybe there's a way to do loops inside the
> Rockerfiles. At this moment, this is very similar to what we do in .travis.
>
Agreed it's not pretty, and we already do it. Perhaps we could factor
some of this and the travis bits out into a shell script or two and reuse them?
Something to consider after this work has landed. Please do not bother
with this now.
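E.g. an untested sketch of the kind of helper I have in mind - URL and
configure options are made up:

    fetch_and_build() {
        local url=$1; shift
        local tarball=$(basename "$url")
        wget "$url"
        tar xf "$tarball"
        # strip the .tar.* suffix to get the source directory
        (cd "${tarball%.tar.*}" && ./configure "$@" && make && sudo make install)
    }

    # fetch_and_build https://example.org/foo-1.0.tar.xz --prefix=/usr/local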
>>
>> > --- /dev/null
>> > +++ b/gitlab-ci/Rockerfile.llvm
>> > +{{ if eq .LLVM "3.3" }}
>> > +RUN wget https://people.igalia.com/jasuarez/packages/llvm-3.3_3.3-+checkinstall1_amd64.deb \
>> > + && dpkg -i llvm-3.3_3.3-+checkinstall1_amd64.deb \
>> > + && rm llvm-3.3_3.3-+checkinstall1_amd64.deb
>> > +ENV LD_LIBRARY_PATH=/usr/lib/llvm-{{ .LLVM }}/lib:$LD_LIBRARY_PATH
>> > +
>>
>> With 3.3 gone, there's no need to build our own LLVM package ;-)
>
> According to configure.ac, the minimum required LLVM version for Gallium drivers
> is 3.3. This is why we also use LLVM 3.6 and 3.8: to test the Gallium drivers.
>
I seriously doubt anyone is using 3.3 - it was released in Jun 2013.
Would be great to just drop it for all branches, but we could start
with master, as you suggested earlier.
>>
>> > +{{ else }}
>> > +MOUNT .:/context
>> > +RUN ./autogen.sh \
>> > + && make distcheck \
>> > + && __version=`cat VERSION` \
>> > + && mkdir -p /context/release-output \
>> > + && mv /home/local/mesa-head.txt /context/release-output \
>> > + && mv mesa-$__version.tar.xz /context/release-output \
>> > + && sudo rm -fr /home/local/mesa
>> > +
>> > +{{ else if eq .BUILD "autotools" "gallium" }}
>> > +
>> > +RUN export LLVM={{ $llvm_version }}.0\
>> > + && eval `cat configure.ac | egrep ^LLVM_REQUIRED` \
>> > + && if dpkg --compare-versions $LLVM ge $LLVM_REQUIRED_GALLIUM ; then GALLIUM_DRIVERS=i915,etnaviv,freedreno,imx,nouveau,pl111,r300,svga,swrast,tegra,v3d,vc4,virgl ; fi \
>> > + && if dpkg --compare-versions $LLVM ge $LLVM_REQUIRED_R600 ; then GALLIUM_DRIVERS=$GALLIUM_DRIVERS,r600 ; fi \
>> > + && if dpkg --compare-versions $LLVM ge $LLVM_REQUIRED_RADEONSI ; then GALLIUM_DRIVERS=$GALLIUM_DRIVERS,radeonsi ; fi \
>> > + && if dpkg --compare-versions $LLVM ge $LLVM_REQUIRED_SWR ; then GALLIUM_DRIVERS=$GALLIUM_DRIVERS,swr ; fi \
>>
>> /me dreams of a day where only a single LLVM_REQUIRED will be available
>>
>> > + && DRI_DRIVERS=i915,i965,nouveau,r200,radeon,swrast \
>> > + && VULKAN_DRIVERS=intel \
>> > + && if dpkg --compare-versions $LLVM ge $LLVM_REQUIRED_RADV ; then VULKAN_DRIVERS=$VULKAN_DRIVERS,radeon ; fi \
>> > + && ./autogen.sh \
>> > + --with-gallium-drivers=$GALLIUM_DRIVERS \
>> > + {{ if eq .BUILD "gallium" }} --with-dri-drivers="" \
>> > + {{ else }} --with-dri-drivers=$DRI_DRIVERS \
>> > + --with-vulkan-drivers=$VULKAN_DRIVERS \
>> > + --with-platforms=x11,drm,wayland {{ end }} \
>>
>> Omitting a vulkan/dri/gallium drivers list will default to building
>> some of them :-\
>
> Right. We need to add '--with-vulkan-drivers=""' when BUILD is gallium, as we
> are not interested in the Vulkan driver.
>
>> Perhaps we could set the *DRIVERS variables conditionally, defaulting to "".
>
> We are using three variables, GALLIUM_DRIVERS, DRI_DRIVERS and VULKAN_DRIVERS
> that are updated depending on BUILD and also the LLVM version.
>
It seems to me that the following happens:

    if .BUILD gallium
        dri-drivers     = empty list
        gallium-drivers = some list
        vulkan-drivers  = some list
    else
        dri-drivers     = some list
        gallium-drivers = some list
        vulkan-drivers  = some list
AKA both the gallium and vulkan drivers get built twice ... to some extent.
The current code assumes that LLVM_REQUIRED_GALLIUM implies some
gallium drivers.
That's wrong - none of the ones listed need LLVM. If we want LLVM 3.3
testing we could stick with something short, like Travis.
I'd suggest:

    if .BUILD gallium - aka LLVM version fuzz; from no LLVM to latest
        dri-drivers     = empty list
        gallium-drivers = some list -> details vary with the LLVM version available
        vulkan-drivers  = empty list
    else - with an LLVM version new enough for radv
        dri-drivers     = full list
        gallium-drivers = empty list
        vulkan-drivers  = full list
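In shell terms, roughly the following - driver lists abbreviated, the
names are taken from the hunk above:

    if test "$BUILD" = gallium; then
        # LLVM version fuzz: from no LLVM to latest
        DRI_DRIVERS=""
        VULKAN_DRIVERS=""
        GALLIUM_DRIVERS=i915,nouveau,r300,svga,swrast
        if dpkg --compare-versions "$LLVM" ge "$LLVM_REQUIRED_RADEONSI"; then
            GALLIUM_DRIVERS=$GALLIUM_DRIVERS,radeonsi
        fi
    else
        # requires an LLVM new enough for radv
        DRI_DRIVERS=i915,i965,nouveau,r200,radeon,swrast
        GALLIUM_DRIVERS=""
        VULKAN_DRIVERS=intel,radeon
    fi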
>>
>>
>> > + {{ if ne $llvm_version "0.0" }} --enable-llvm --enable-llvm-shared-libs {{ end }} \
>> > + {{ if ne $debug_build "false" }} --enable-debug {{ end }} \
>> > + --enable-glx-tls --enable-gbm --enable-egl \
>> > + && make \
>> > + && make check \
>> > + && sudo make install \
>> > + && sudo ldconfig \
>> > + && sudo rm -fr /home/local/mesa
>> > +
>> > +{{ else if eq .BUILD "meson" }}
>> > +
>> > +RUN meson _build \
>>
>> Worth adding the gallium/vulkan/dri drivers list here?
>>
>
> Didn't add the list because it depends on the LLVM version we are using, so we
> trust meson to correctly pick all the drivers.
>
> If we want to be explicit, maybe we can reuse the list of
> DRI_/VULKAN_/GALLIUM_DRIVERS here.
>
Agreed. The same reasoning as f9d0e7d3bcd831d52af6a6c46aac4ed4590a8615
applies here.
If you agree feel free to keep that as a follow-up patch.
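Something along these lines, presumably - assuming the meson options
mirror the autotools driver lists above:

    meson _build \
        -Ddri-drivers=$DRI_DRIVERS \
        -Dgallium-drivers=$GALLIUM_DRIVERS \
        -Dvulkan-drivers=$VULKAN_DRIVERS \
        -Dplatforms=x11,drm,wayland
    ninja -C _build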
Thanks again for answering my noobish questions :-)
HTH
Emil