[systemd-devel] magically disappearing filesystems
grawity at gmail.com
Thu Jun 15 15:49:18 UTC 2017
This reminds me of a bug (misfeature) that *I think* was fixed in recent versions.
Mount points are bound to the corresponding .device so that they'd get
cleaned up when the device disappears, but the way it was implemented meant
they'd also get immediately unmounted if the device wasn't active in the
first place. E.g. in emergency mode, when udev wasn't running, so systemd thought all
devices were inactive. (State-based when it ought to have been event-based.)
I haven't tested specifically, but according to commit log, that shouldn't
be a problem in 23x versions.
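For context on why systemd talks about units like xx.mount and dev-mapper-xx.device at all: the fstab generator derives a unit name from each mount point path. The real tool for this is `systemd-escape -p --suffix=mount`; the function below is only a rough approximation that handles plain paths and ignores the special-character escaping the real tool performs (the function name is mine, not systemd's):

```shell
# Approximate the unit name systemd's fstab generator derives from a
# mount point path. Handles plain paths only; the real systemd-escape(1)
# also escapes dashes, dots, and non-ASCII characters.
path_to_mount_unit() {
    path="$1"
    if [ "$path" = "/" ]; then
        # The root filesystem maps to the special unit name "-.mount".
        echo "-.mount"
        return
    fi
    # Drop the leading and trailing slash, turn remaining slashes into
    # dashes, and append the .mount suffix.
    echo "$path" | sed 's|^/||; s|/$||; s|/|-|g; s|$|.mount|'
}

path_to_mount_unit /mountpoint   # mountpoint.mount
path_to_mount_unit /srv/data     # srv-data.mount
```

This is why the log below complains about xx.mount being bound to dev-mapper-xx.device: both names are mechanically derived from fstab and the device path, and the binding is recreated only when the generator reruns.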
On Wed, Jun 14, 2017, 16:59 Andre Maasikas <amaasikas at gmail.com> wrote:
> Having done this numerous times over the years, I proceeded to move our
> data to a new storage array, for the first time on a new OS version though.
> It always goes like this:
> * attach the array, create LUNs, create multipath conf, etc.
> * unmount the old filesystems, mount the new filesystems, mount the old
> filesystems in a temporary location, copy the data over
> * update fstab, unmount the temporary/old filesystems
> A day later ... Now to cleanly remove the old array/devices I did
> # multipath -f olddev
> # echo 1 > /sys/block/sdx/device/delete
> # echo 1 > /sys/block/sdy/device/delete
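In hindsight, a check against the mounts table before deleting a device would have caught the problem. A minimal sketch of such a guard, where the function name, the sample file, and the device names are all illustrative rather than anything the original commands provide:

```shell
# Refuse to delete a block device while anything in a mounts table still
# points at it. The mounts file is a parameter so the check can be
# exercised against sample data instead of the live /proc/mounts.
device_in_use() {
    dev="$1"
    mounts_file="${2:-/proc/mounts}"
    grep -q "^/dev/$dev " "$mounts_file"
}

# Sample mounts table standing in for /proc/mounts:
cat > /tmp/mounts.sample <<'EOF'
/dev/mapper/newdev /mountpoint xfs rw,relatime 0 0
/dev/sda1 /boot xfs rw,relatime 0 0
EOF

if device_in_use "mapper/newdev" /tmp/mounts.sample; then
    echo "still mounted; not deleting"
fi
```

Of course, in the incident described here the new filesystems *were* mounted at deletion time, so this check would have passed; it guards against the simpler mistake of deleting a device that is still in use.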
> After double-checking I see that none of the new filesystems are mounted!
> OOPS moment. I estimate I have about 10 minutes to figure this out
> before the transaction logs fill up and lots of revenue goes down. It
> probably doesn't look good for me, as I discovered the issue only after I
> had executed the removal on most production systems and all clustered nodes.
> OK, let's mount manually,
> mount /dev/mapper/newdev /mountpoint
> no errors, seems OK
> still df, mount and /proc/mounts show nothing...
> WTF moment
> mount -v tells me something like:
> /dev/newdev does not contain SELinux labels.
> You just mounted an file system that supports labels which does not
> contain labels, onto an SELinux box. It is likely that confined
> applications will generate AVC messages and not be allowed access to
> this file system. For more details see restorecon(8) and mount(8).
> Three minutes of googling whether I might be confined to a space where I
> am no longer allowed to see the filesystem proved futile.
> dmesg is filled with messages about the filesystem being busy for umount,
> and for the mount attempt shows:
> kernel: XFS (dm-22): Mounting V4 Filesystem
> kernel: XFS (dm-22): Ending clean mount
> systemd: Unit xx.mount is bound to inactive unit dev-mapper-xx.device.
> Stopping, too.
> systemd: Unmounting /mountpoint...
> kernel: XFS (dm-22): Unmounting Filesystem
> systemd: Unmounted /mountpoint.
> systemd: Unit xxx.mount entered failed state.
> (dev-mapper-xx being the old/removed device-mapper device)
> Finally, a second set of search keywords reveals that I'm supposed to run
> # systemctl daemon-reload
> whenever I edit fstab.
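As a quick sanity check after editing fstab (and after the daemon-reload), one can diff the mount points fstab declares against what is actually mounted. A sketch of that idea, with the function name and sample files being mine, standing in for the real /etc/fstab and /proc/mounts:

```shell
# List mount points that appear in an fstab but not in a mounts table,
# i.e. entries that were declared but are not actually mounted. The file
# paths are parameters so the check can run against sample data.
missing_mounts() {
    fstab="$1"
    mounts="$2"
    # Field 2 of fstab is the mount point; skip comments and blank lines.
    awk '/^[[:space:]]*#/ || /^[[:space:]]*$/ { next } { print $2 }' "$fstab" |
    while read -r mp; do
        grep -q " $mp " "$mounts" || echo "$mp"
    done
}

# Sample fstab and mounts table, with one entry deliberately unmounted:
cat > /tmp/fstab.sample <<'EOF'
# /etc/fstab
/dev/mapper/newdev /mountpoint xfs defaults 0 0
/dev/sda1 /boot xfs defaults 0 0
EOF

cat > /tmp/mounts.sample2 <<'EOF'
/dev/sda1 /boot xfs rw,relatime 0 0
EOF

missing_mounts /tmp/fstab.sample /tmp/mounts.sample2   # /mountpoint
```

Note this only detects the symptom; the actual fix remains telling systemd to regenerate its units with `systemctl daemon-reload` so its view of fstab matches the file on disk.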
> Seems like the fstab file is not always authoritative anymore: the
> authoritative configuration is kept who-knows-where, the two might not be
> in sync, and that depends on who-knows-what. If you don't know this,
> systemd is now allowed to automatically unmount a perfectly good, working
> filesystem out from under you without any warning. A quick review of the
> fstab header, man fstab, man mount etc. does not reveal any information
> about this newish behavior. Also, none of the commands that got me to
> this point gave any errors or any indication that this would happen.
> It might be something else I did incorrectly, or distribution-specific
> (RHEL 7.2), or a bug already fixed. Most certainly I have not learned
> enough of the new ways of systemd (and SELinux).
> systemd-devel mailing list
> systemd-devel at lists.freedesktop.org
Mantas Mikulėnas <grawity at gmail.com>
Sent from my phone