[systemd-devel] systemctl status not showing still running processes in inactive .mount unit cgroups (NFS specifically)

Steve Dickson SteveD at redhat.com
Mon Jan 12 12:54:05 PST 2015


On 01/12/2015 05:34 AM, Colin Guthrie wrote:
> Hi,
> Looking into a thoroughly broken nfs-utils package here I noticed a
> quirk in systemctl status and in umount behaviour.
> In latest nfs-utils there is a helper binary shipped upstream called
> /usr/sbin/start-statd (I'll send a separate mail talking about this
> infrastructure with subject: "Running system services required for
> certain filesystems")
> It sets the PATH to "/sbin:/usr/sbin" then tries to run systemctl
> (something that is already broken here as systemctl is in bin, not sbin)
> to start "statd.service" (again this seems to be broken as the unit
> appears to be called nfs-statd.service upstream... go figure).
The PATH problem has been fixed in the latest nfs-utils.  

> Either way we call the service nfs-lock.service here (for legacy reasons).
With the latest nfs-utils, rpc-statd.service is now started from start-statd.
But yes, I did symbolically link nfs-lock.service to rpc-statd.service when
I moved to the upstream systemd scripts.

> If this command fails (which it does for us for two reasons) it runs
> "rpc.statd --no-notify" directly. This binary then runs in the context of
> the .mount unit and thus in the .mount cgroup.
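For reference, the fallback logic described above amounts to roughly the
following. This is a sketch, not the actual upstream start-statd script; the
unit name and the PATH fix reflect what is discussed elsewhere in this thread:

```shell
#!/bin/sh
# Sketch of the start-statd fallback behaviour described above (not the
# upstream script verbatim). Prefer systemctl -- with a PATH that also
# covers /usr/bin, where systemctl actually lives -- and fall back to
# running rpc.statd directly if systemctl is unavailable.
start_statd() {
    PATH=/sbin:/usr/sbin:/bin:/usr/bin
    if command -v systemctl >/dev/null 2>&1; then
        systemctl start rpc-statd.service
    else
        rpc.statd --no-notify
    fi
}
```

Note that in the direct-execution branch, rpc.statd inherits whatever cgroup
the caller is in, which is exactly how it ends up inside the .mount unit's
cgroup here.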
What are the two reasons rpc.statd --no-notify fails?

> That seems to work OK (from a practical perspective things worked OK and
> I got my mount) but it is obviously sub-optimal, especially when the mount
> point is unmounted.
> In my case, I called umount but the rpc.statd process was still running:
What is the expectation? That the umount should bring down rpc.statd?

> [root@jimmy nfs-utils]$ pscg | grep 3256
>  3256 rpcuser
> 4:devices:/system.slice/mnt-media-scratch.mount,1:name=systemd:/system.slice/mnt-media-scratch.mount
> rpc.statd --no-notify
> [root@jimmy nfs-utils]$ systemctl status mnt-media-scratch.mount
> ● mnt-media-scratch.mount - /mnt/media/scratch
>    Loaded: loaded (/etc/fstab)
>    Active: inactive (dead) since Mon 2015-01-12 09:58:52 GMT; 1min 12s ago
>     Where: /mnt/media/scratch
>      What: marley.rasta.guthr.ie:/mnt/media/scratch
>      Docs: man:fstab(5)
>            man:systemd-fstab-generator(8)
> Jan 07 14:55:13 jimmy mount[3216]: /usr/sbin/start-statd: line 8:
> systemctl: command not found
> Jan 07 14:55:14 jimmy rpc.statd[3256]: Version 1.3.0 starting
> Jan 07 14:55:14 jimmy rpc.statd[3256]: Flags: TI-RPC
> [root@jimmy nfs-utils]$
Again this is fixed with the latest nfs-utils...

Question: why are you using v3 mounts? With v4 all this goes away.
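To illustrate the v4 point: with an explicit NFSv4 mount, lock recovery is
part of the protocol itself, so rpc.statd (and hence start-statd) is never
involved. A hypothetical fstab entry -- server and paths are placeholders,
not taken from this thread:

```
# /etc/fstab (hypothetical entry): explicit NFSv4 mount; v4 handles
# lock recovery in-protocol, so no rpc.statd helper is started for it
server:/export   /mnt/point   nfs4   defaults   0 0
```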

> As you can see the mount is dead but the process is still running and
> the systemctl status output does not correctly show the status of
> binaries running in the cgroup. When the mount is active the process
> does actually exist in this unit's context (provided systemd is used to
> do the mount - if you call the "mount /path" command separately, the
> rpc.statd process can end up in weird cgroups - such as your user session!)
> Anyway, assuming the process is in the .mount unit cgroup, should
> systemd detect the umount and kill the processes accordingly, and if
> not, should calling "systemctl status" on .mount units show processes
> even if it's in an inactive state?
> This is with 217 with a few cherry picks on top so might have been
> addressed by now.
> Cheers
> Col
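On the status question above: even while "systemctl status" hides the
processes of an inactive unit, any leftover PIDs can still be read straight
from the unit's cgroup in the name=systemd hierarchy (cgroup-v1 layout,
matching the pscg output earlier). A small sketch; the example path is the
mount unit from this thread:

```shell
#!/bin/sh
# List the PIDs still attached to a cgroup by reading its "tasks" file
# (cgroup-v1 name=systemd hierarchy). Prints nothing if the cgroup is
# empty or the path does not exist.
cgroup_pids() {
    cat "$1" 2>/dev/null
}

# e.g. (path as seen in the pscg output above):
# cgroup_pids /sys/fs/cgroup/systemd/system.slice/mnt-media-scratch.mount/tasks
```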
