[systemd-devel] Antw: Re: Unexplainable unit restart ("Start request repeated too quickly")
Michael Chapman
mike at very.puzzling.org
Mon Jun 3 11:14:21 UTC 2019
On Mon, 3 Jun 2019, Ulrich Windl wrote:
> >>> Michael Chapman <mike at very.puzzling.org> wrote on 03.06.2019 at 11:39
> in message <alpine.LFD.2.21.1906031935080.3180 at beren.home>:
> > On Mon, 3 Jun 2019, Ulrich Windl wrote:
> > [...]
> >> Hi!
> >>
> >> The generator unit is:
> >> [Unit]
> >> Description=I/O performance monitor instance generator
> >> Documentation=man:iotwatch-generator(8) man:iotwatch@.service(8)
> >> Wants=nss-user-lookup.target time-sync.target paths.target
> >> After=nss-user-lookup.target time-sync.target paths.target
> >> ConditionPathExists=/etc/iotwatch.conf
> >> Conflicts=shutdown.target
> >>
> >> [Service]
> >> Type=oneshot
> >> ExecStart=/usr/lib/iotwatch/iotwatch-generator /run/systemd/system
> >> TimeoutStartSec=10
> >> RestartPreventExitStatus=2 3 4 5
> >> StartLimitBurst=1
> >>
> >> [Install]
> >> WantedBy=default.target iotwatch.target
> >
> > That looks fine, though it _might_ make sense for it to have
> > RemainAfterExit= turned on. After all, if default.target or
> > iotwatch.target get restarted for any reason, then this unit will be
> > started again.
>
> That's a valuable hint: I thought systemd would remember that once started
> successfully, the service is considered started until stopped manually.
> So does RemainAfterExit= create a kind of dummy process that just remembers
> the state? The manual is not clear on when you would need it:
>
> RemainAfterExit=
> Takes a boolean value that specifies whether the service shall be
> considered active even when all its processes exited. Defaults to
> no.
A oneshot service normally becomes inactive as soon as the command
terminates. Once inactive it is available to be activated again.
You would use RemainAfterExit=true if you wanted the service to remain in
an active state even once the command has terminated. While in this
"active" state it can't be "activated" -- after all, it's _already_
active.
Some kind of "stop" action would be needed to transition the service back
to its inactive state. This could be explicit (e.g. from systemctl) or
implicit (via dependencies).
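As a minimal sketch (the unit name and command here are only illustrative,
not taken from your setup), a oneshot that stays active would look like:

[Unit]
Description=Example oneshot that stays active after its command exits

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/true

Starting it once leaves it in the "active (exited)" state; a second
"systemctl start" is then a no-op, and only a stop (explicit or via a
conflicting dependency) returns it to inactive, where it can be started
again.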
> >
> > It's very weird to have what appears to be a generator done as a service
> > though. Any idea why that might be the case?
>
> Yes: systemd generators are called so early that they are not useful in my
> case (I cannot have dependencies for the generator). So I have a (somewhat
> clever) generator service that creates or updates service files "when needed".
>
> >
> >> The iotwatch.target is:
> >> [Unit]
> >> Description=iotwatch I/O performance monitor
> >> Documentation=man:iotwatch@.service(8) man:iotwatch-generator(8)
> >> After=nss-lookup.target time-sync.target paths.target
> >> Wants=iotwatch@NFS1.service iotwatch@NFS2.service iotwatch@LOC1.service
> >>
> >> [Install]
> >> WantedBy=default.target
> >>
> >> and the instance services look like:
> >> # automatically generated by /usr/lib/iotwatch/iotwatch-generator
> >>
> >> [Unit]
> >> Description=iotwatch I/O performance monitor instance "LOC1"
> >> Documentation=man:iotwatch(1) man:iotwatch@.service(8)
> >> SourcePath=/etc/iotwatch.conf
> >> PartOf=iotwatch.target
> >
> > That also seems to imply that starting and stopping iotwatch.target would
> > be something that happens with some regularity.
>
> After a configuration change (when the generator actually updated the service
> files).
OK, so since iotwatch-generator.service is WantedBy=iotwatch.target, that
means if iotwatch.target is started or restarted, for any reason,
then iotwatch-generator.service will also be activated.
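You can confirm those reverse dependencies with something like (these are
just the usual inspection commands, nothing specific to your setup):

systemctl show -p WantedBy,RequiredBy,TriggeredBy,RequisiteOf,PartOf iotwatch-generator.service
systemctl list-dependencies --reverse iotwatch-generator.service

The first prints the reverse-dependency properties directly; the second
lists every unit whose activation would pull the generator in.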
>
> >
> >> Requires=iotwatch-generator.service
> >> Wants=nss-user-lookup.target time-sync.target paths.target
> >> After=iotwatch-generator.service
> >> After=nss-user-lookup.target time-sync.target paths.target
> >> ConditionPathExists=/etc/passwd
> >> Conflicts=shutdown.target
> >>
> >> [Service]
> >> Type=forking
> >> RuntimeDirectory=iotwatch-LOC1
> >> WorkingDirectory=/var/run/iotwatch-LOC1
> >> ExecStartPre=/bin/sh -c '...'
> >> ExecStart=@/var/run/iotwatch-LOC1/iotwatch-LOC1 iotwatch-LOC1 -l ... /etc/passwd
> >> ExecStartPost=/usr/bin/sleep 0.2
> >> TimeoutStartSec=10
> >> ExecStop=/var/run/iotwatch-LOC1/iotwatch-LOC1 -l ...
> >> #SyslogIdentifier=%p-LOC1
> >> TimeoutStopSec=30
> >> PIDFile=/var/run/iotwatch-LOC1/iotwatch-LOC1.pid
> >> Restart=always
> >> RestartSec=10s
> >> RestartPreventExitStatus=1
> >> StartLimitBurst=1
> >>
> >> [Install]
> >> WantedBy=iotwatch.target
> >>
> >> >
> >> > It might also be good to know the RequiredBy, WantedBy, TriggeredBy,
> >> > RequisiteOf and PartOf properties of this iotwatch-generator.service (see
> >> > `systemctl show iotwatch-generator.service`), since they're possible ways
> >> > in which the service may be implicitly started or restarted.
> >>
> >> Yes, but I'm missing a log message that explains what happened.
> >
> > Sure, there isn't one. That's why I'm asking about the properties.
>
> Thanks for your insights!
>
> Regards,
> Ulrich Windl
Um, OK. I don't think we're any closer to solving your problem though. :-)
My hunch is that you've got something restarting iotwatch.target a lot. If
iotwatch-generator.service finishes quickly, then every time iotwatch.target
is restarted, iotwatch-generator.service will run again. You could hit its
start limit that way.
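If that's the case, the journal should show the target cycling. Untested,
but something along these lines ought to make it visible, and show the
relevant start-limit settings:

journalctl -b -u iotwatch.target -u iotwatch-generator.service
systemctl show -p NRestarts,StartLimitBurst,StartLimitIntervalUSec iotwatch-generator.service

With StartLimitBurst=1, a second start attempt within the start limit
interval (10 s by default) is already enough to trigger "Start request
repeated too quickly", so either find what keeps restarting the target or
loosen StartLimitBurst=/StartLimitIntervalSec= on the generator service.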