[systemd-devel] Possible bug when a dummy service declares After= and/or Conflicts= a .mount unit?
didrocks at ubuntu.com
Wed Mar 4 06:25:05 PST 2015
On 04/03/2015 13:40, Lennart Poettering wrote:
> On Wed, 04.03.15 13:19, Didier Roche (didrocks at ubuntu.com) wrote:
>> On 04/03/2015 12:49, Lennart Poettering wrote:
>>> On Wed, 04.03.15 09:21, Didier Roche (didrocks at ubuntu.com) wrote:
>>>> It seems that we discovered an issue if a service declares some relationship
>>>> with a .mount unit.
>>>> For instance, having tmp.mount disable (and nothing mounting /tmp as tmpfs
>>>> in fstab):
>>> tmp.mount is enabled statically via the
>>> /usr/lib/systemd/system/local-fs.target.wants/tmp.mount symlink.
>> We carry a distro patch in Debian that removes this symlink. Note that
>> with the symlink removed, it shouldn't be started at all, yet it starts on
>> some boots and not on others, which shows that there is erratic behavior.
> Well, it's affected by jobs queued in later. You are using Conflicts=
> against the unit, apparently. Now, Conflicts= has different effects on
> later queued jobs: depending on the job "mode" setting, the conflicting
> job is either removed, the unit is stopped, or the job fails.
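For illustration, the setup under discussion amounts to a unit along these lines (the unit name here is hypothetical, not the actual Ubuntu unit):

```ini
# overflow-tmp.service (hypothetical name)
[Unit]
Description=Mount /tmp as an "overflow" tmpfs when / is nearly full
Conflicts=tmp.mount
After=tmp.mount
```

Because of the Conflicts=, a later queued start job for tmp.mount interacts with this unit's jobs according to the job mode (e.g. `systemctl start --job-mode=fail tmp.mount` would make the start fail rather than stop the conflicting unit), which is one way the same configuration can behave differently from boot to boot.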
>>> Use "systemctl show tmp.mount" to see by which dependencies it was
>>> pulled in.
>> Nice hint! So, on boots where tmp.mount is enabled, here is what systemctl
>> show on the unit gives:
> What is this ConflictedBy actually about? Why?
> I have the suspicion you assume Conflicts= has different behaviour than
> it actually has.
Ok, giving a little bit more context. We don't enable /tmp on tmpfs in
Debian/Ubuntu (the symlink is removed as a distro patch). With sysvinit
and upstart we had a failsafe mechanism for when / is nearly full:
in that case /tmp is mounted as a tmpfs named "overflow" (some scripts
in Ubuntu look for that name and warn the user), trying to get the
system to boot as far as we can. The idea was to recreate this
functionality under systemd (bug:
* I first proposed a generator for that, which would unconditionally
enable the tmp.mount unit (even if it was already manually enabled by
the sysadmin) and add the "overflow" tag.
* Martin (see bug comments) thinks that the check happens too early (on
the read-only filesystem, as our root filesystem is mounted read-only at
that point), and maybe there is an fstab mount for /tmp with more space.
There is also the issue that we may be on permanently read-only images
(like with snappy) and so have no free space; /tmp would then be mounted
as tmpfs via enabling tmp.mount, but we don't want to mark it as "overflow".
* For those reasons, we instead tried a service, started later at boot,
doing that check. I tried a quick one:
http://paste.ubuntu.com/10523961/, and that's where we started to see
tmp.mount being activated on some boots and not on others (which
triggered this email).
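The core of such a check service could look roughly like this (a minimal sketch only: the 1 MiB threshold, the tmpfs size, and the script itself are illustrative assumptions, not the actual Debian/Ubuntu logic):

```shell
#!/bin/sh
# Failsafe check run late at boot: if / is nearly full, mount /tmp as a
# tmpfs named "overflow" so the existing Ubuntu scripts that look for
# that filesystem name can warn the user.
# Assumption: 1 MiB free-space threshold (illustrative).
free_kib=$(df -k --output=avail / | tail -n 1 | tr -d ' ')
if [ "$free_kib" -lt 1024 ]; then
    mount -t tmpfs -o size=1048576,mode=1777 overflow /tmp
fi
```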
Any suggestion on how we should tackle this? (I don't really like the
additional-service approach and would much prefer the first, declarative
one.)
>> Before=systemd-timesyncd.service foo.service local-fs.target umount.target
>> systemd-timesyncd.service's start condition failed, though:
>> Condition: start condition failed at Wed 2015-03-04 13:09:09 CET; 3min 10s
>> ConditionVirtualization=no was not met
>> So, even if the condition for a unit failed, the Requires= are still
>> pulled in.
> Yes. ConditionXYZ= only shortcuts the execution of the service, all its
> deps are pulled in. The condition is checked at the time the unit is
> about to be started, which means that at that time the dependencies
> usually are fulfilled anyway already.
> Also see docs about this in the man page.
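A minimal sketch of that behaviour (hypothetical unit name; the condition only gates execution, the dependencies are still queued):

```ini
# cond-demo.service (hypothetical)
[Unit]
Description=Condition gates execution only, not dependency pull-in
ConditionVirtualization=no
Requires=tmp.mount
After=tmp.mount

[Service]
Type=oneshot
ExecStart=/bin/true
```

Starting this unit in a VM would report the condition as not met and leave it inactive, but tmp.mount would be started regardless.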
>> I noted that on boots where the tmpfs isn't mounted, systemd-timesyncd.service
>> stays inactive:
>> Active: inactive (dead)
>> and if I try to start it (and the condition fails), the Requires=
>> (tmp.mount in that case) is started.
>> It seems there are 2 issues:
>> - systemd-timesyncd.service is only sometimes activated on boot on those
>> machines (I'll dive into why).
>> - if a unit has a failing Condition, the Requires= of that unit are
>> still activated.
> Yes, absolutely, see man pages.
Ok, that makes sense (I still need to look into why systemd-timesyncd is
started at boot in those qemu instances on some boots but not on others).