[systemd-devel] magically disappearing filesystems

Andre Maasikas amaasikas at gmail.com
Wed Jun 14 13:58:24 UTC 2017


Hi,

Having done this numerous times over the years, I proceeded to move our data
to a new storage array, though for the first time on a new OS version.
It always goes like this:
* attach the array, create LUNs, create multipath conf, etc.
* unmount the old filesystems, mount the new filesystems, mount the old
  filesystems at a temporary location, copy the data over
* update fstab, unmount the temporary/old filesystems (rough sketch below)
DONE
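
For the record, the commands were roughly along these lines (device names,
mount points and rsync flags here are from memory / made up for
illustration, not a paste):

# umount /mountpoint                      # old filesystem out
# mount /dev/mapper/newdev /mountpoint    # new filesystem in its place
# mount /dev/mapper/olddev /mnt/oldtmp    # old filesystem at a temp location
# rsync -aHAX /mnt/oldtmp/ /mountpoint/   # copy data, keeping hardlinks/ACLs/xattrs
# vi /etc/fstab                           # point /mountpoint at newdev
# umount /mnt/oldtmp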
A day later... Now, to cleanly remove the old array/devices, I did:
# multipath -f olddev
# echo 1 > /sys/block/sdx/device/delete
# echo 1 > /sys/block/sdy/device/delete
etc..
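
Side note for anyone reconstructing this: the sdX paths belonging to a
multipath map can be listed before deleting them, e.g.:

# multipath -ll olddev          # shows the sdX path devices behind the map
# ls /sys/block/dm-N/slaves     # same info from sysfs; N is made up here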

After double-checking I see that none of the new filesystems are mounted!
OOPS moment. I estimate I have about 10 minutes to figure this out before
the transaction logs fill up and lots of revenue goes down. It probably
doesn't look good for me either, as I discovered the issue only after I had
executed the removal on most production systems and all clustered nodes.

OK, let's mount manually:

mount /dev/mapper/newdev /mountpoint
> no errors, seems OK
Still, df, mount and /proc/mounts show nothing...
WTF moment.
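
What I should have done at this point, I know now, is ask systemd what it
thinks of the mount. Something like this (unit name derived from my made-up
/mountpoint above):

# systemd-escape -p --suffix=mount /mountpoint   # prints the unit name: mountpoint.mount
# systemctl status mountpoint.mount
# systemctl list-units --type=mount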

mount -v tells me something like:
 /dev/newdev does not contain SELinux labels.
       You just mounted an file system that supports labels which does not
       contain labels, onto an SELinux box. It is likely that confined
       applications will generate AVC messages and not be allowed access to
       this file system.  For more details see restorecon(8) and mount(8).

Three minutes of googling whether I might be confined to a space where I am
no longer allowed to see the filesystem proved futile.

dmesg is filled with messages about the filesystem being busy for umount,
and for the mount attempt the log shows:
kernel: XFS (dm-22): Mounting V4 Filesystem
kernel: XFS (dm-22): Ending clean mount
systemd: Unit xx.mount is bound to inactive unit dev-mapper-xx.device.
Stopping, too.
systemd: Unmounting /mountpoint...
kernel: XFS (dm-22): Unmounting Filesystem
systemd: Unmounted /mountpoint.
systemd: Unit xxx.mount entered failed state.

(dev-mapper-xx being the old/removed device-mapper device)
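
As far as I can tell, the "bound to" in that message means the mount unit
carries a BindsTo= dependency on the backing .device unit, so when systemd
considers the device gone, the mount goes with it. The dependency can be
seen with something like (unit name made up again):

# systemctl show -p BindsTo -p After mountpoint.mount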

Finally, a second set of search keywords reveals that I'm supposed to run
# systemctl daemon-reload
whenever I edit fstab.
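
So the actual recovery on each box, for the record, was just (mount point
made up as before):

# systemctl daemon-reload    # re-read fstab, regenerate the mount units
# mount /mountpoint          # and this time it stays mounted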

It seems the fstab file is not always authoritative anymore; the
authoritative configuration is kept god-knows-where else, the two might not
be in sync and depend on god-knows-what, and if you don't know that, systemd
now feels free to automatically unmount a perfectly good, working filesystem
from under you without any warning. A quick review of the fstab header, man
fstab, man mount etc. does not reveal any information about this newish
behavior. Also, none of the commands that got me to this point gave any
error or indication that this would happen.
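
For anyone else who hits this: the "elsewhere" appears to be the mount units
that systemd-fstab-generator writes from fstab at boot and on every
daemon-reload. You can at least inspect them, e.g.:

# ls /run/systemd/generator/*.mount     # where the generated units live
# systemctl cat mountpoint.mount        # shows the generated unit file (name made up)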

It might be something else I did incorrectly, or distribution-specific
(RHEL 7.2), or a bug that is already fixed. Most certainly I have not
learned enough of the new ways of systemd (and SELinux).

Andre