<div dir="ltr"><div><div><div><div><div><div><div><div><div><div><div><div><div>Hi,<br><br>Having done this numerous times over the years, I proceeded to move our data to a new storage array — for the first time on a new OS version, though.<br></div>It always goes like this:<br></div>* attach the array, create LUNs, create the multipath config, etc.<br></div>*
umount the old filesystems, mount the new filesystems, mount the old filesystems at a temporary location, copy the data over<br>* update fstab, unmount the temporary/old
filesystem.<br>DONE<br></div>A day later... Now, to cleanly remove the old array/devices, I did:<br># multipath -f olddev<br># echo 1 > /sys/block/sdx/device/delete<br># echo 1 > /sys/block/sdy/device/delete<br></div>etc.<br><br></div>After double-checking, I see that none of the new filesystems are mounted!<br></div>OOPS moment. I estimate I have about 10 minutes to figure this out before the transaction logs fill up and a lot of revenue goes down. It probably doesn't look good for me either, as I discovered the issue only after I had executed the removal on most production systems and all clustered nodes.</div></div><br>OK, let's mount manually:<br></div><br>mount /dev/mapper/newdev /mountpoint<br></div>> no errors, seems OK<br></div>Still, df, mount and /proc/mounts show nothing...<br><div>WTF moment<br><br></div><div>mount -v tells me something like:<br> /dev/newdev does not contain SELinux labels.<br> You just mounted an file system that supports labels which does not<br> contain labels, onto an SELinux box. It is likely that confined<br> applications will generate AVC messages and not be allowed access to<br> this file system. For more details see restorecon(8) and mount(8).<br><br></div><div>Searching
Google for three minutes, wondering whether I might now be confined to a context where I am not
allowed to see the filesystem anymore, proved futile.<br><br></div><div>dmesg is filled with messages from the unmount and mount attempts:<br>kernel: XFS (dm-22): Mounting V4 Filesystem<br>kernel: XFS (dm-22): Ending clean mount<br>systemd: Unit xx.mount is bound to inactive unit dev-mapper-xx.device. Stopping, too.<br>systemd: Unmounting /mountpoint...<br>kernel: XFS (dm-22): Unmounting Filesystem<br>systemd: Unmounted /mountpoint.<br>systemd: Unit xxx.mount entered failed state.<br><br>(dev-mapper-xx being the old/removed device-mapper device)<br><br></div><div>Finally, a second set of search keywords reveals that I'm supposed to run<br># systemctl daemon-reload<br></div><div>whenever I edit fstab.<br><br></div><div>It seems the fstab file is not always authoritative anymore, and the authoritative configuration<br></div>is
kept god-knows-where else; the two might not be in sync, depending on god-knows-what, and if you don't know that, systemd now feels free to automatically unmount a perfectly good, working filesystem from under you without any warning. A quick review of the fstab header, man fstab, man mount, etc. does not reveal any information about this newish behavior. Nor did any of the commands that got me to this point give any error or indication that this would happen.<br><br>It might be something else I did incorrectly, or
distribution-specific (RHEL 7.2), or a bug that has already been fixed. Most certainly I have not learned enough of the new ways of systemd (and SELinux).<br><br></div>Andre<br></div>
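P.S. For anyone who hits the same thing, here is the lesson compressed into a sketch. The rule comes straight from the story above: systemd generates .mount units from /etc/fstab, so every fstab edit needs a daemon-reload. The little helper function is only my illustration of how systemd names the generated unit for plain paths (the full escaping rules are in systemd-escape(1)); /mountpoint and /srv/data are placeholder paths, not anything from a real box.

```shell
# What I was missing: systemd-fstab-generator translates /etc/fstab into
# .mount units, so after every fstab edit you must run
#   systemctl daemon-reload
# or systemd keeps enforcing the stale copy (and will happily unmount
# your new filesystem for you, as above).
#
# systemd names the generated unit after the mount point: strip the
# leading "/" and turn every remaining "/" into "-". This helper mimics
# that for plain ASCII paths (full rules: systemd-escape(1)):
unit_for_mountpoint() {
  printf '%s.mount\n' "$(printf '%s' "${1#/}" | tr '/' '-')"
}

unit_for_mountpoint /mountpoint   # -> mountpoint.mount
unit_for_mountpoint /srv/data     # -> srv-data.mount

# With the unit name in hand you can ask what systemd actually thinks:
#   systemctl status mountpoint.mount
```

Had I checked the mount unit's status before flushing the old multipath maps, the "bound to inactive unit" state would likely have shown up in advance.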