This reminds me of a bug (misfeature) that *I think* was fixed in recent releases...<div><span style="font-size:13px"><br></span></div><div><span style="font-size:13px">Mount points are bound to the corresponding .device unit so that they'd get cleaned up when the device disappears, but the way it was implemented meant they'd also get immediately unmounted if the device wasn't there in the first place.</span><div dir="auto"><div dir="auto"><br></div><div dir="auto">E.g. in emergency mode, when udev wasn't running, systemd thought all devices were inactive. (State-based when it ought to have been event-based.)<div dir="auto"><br></div><div dir="auto">I haven't tested this specifically, but according to the commit log, it shouldn't be a problem in the 23x versions.<br><div dir="auto"><br><div class="gmail_quote"><div dir="ltr">On Wed, Jun 14, 2017, 16:59 Andre Maasikas <<a href="mailto:amaasikas@gmail.com">amaasikas@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div><div><div><div><div><div><div>Hi,<br><br>Having done this numerous times over the years, I proceeded to move our data to a new storage array, though for the first time on a new OS version.<br></div>It always goes like this:<br></div>* attach the array, create LUNs, create the multipath config, etc.<br></div>* unmount the old filesystems, mount the new filesystems, mount the old filesystem at a temporary location, copy the data over,<br>* update fstab, unmount the temporary/old filesystem.<br>DONE<br></div>A day later... Now, to cleanly remove the old array/devices, I did<br># multipath -f olddev<br># echo 1 > /sys/block/sdx/device/delete<br># echo 1 > /sys/block/sdy/device/delete<br></div>etc.<br><br></div>After double-checking, I see that none of the new filesystems are mounted!<br></div>OOPS moment. I estimate I have about 10 minutes to figure this out before the transaction logs fill up and lots of revenue goes down. It probably doesn't look good for me either, as I discovered the issue only after I had executed the removal on most production systems and all clustered nodes.</div></div><br>OK, let's mount manually:<br></div><br>mount /dev/mapper/newdev /mountpoint<br></div>> no errors, seems OK<br></div>Still, df, mount and /proc/mounts show nothing...<br><div>WTF moment.<br><br></div><div>mount -v tells me something like:<br> /dev/newdev does not contain SELinux labels.<br> You just mounted an file system that supports labels which does not<br> contain labels, onto an SELinux box. It is likely that confined<br> applications will generate AVC messages and not be allowed access to<br> this file system. For more details see restorecon(8) and mount(8).<br><br></div><div>Searching
Google for three minutes to find out whether I might now be confined to a space where I'm no longer allowed to see the filesystem proved futile.<br><br></div><div>dmesg is filled with each mount attempt immediately being undone by an unmount:<br>kernel: XFS (dm-22): Mounting V4 Filesystem<br>kernel: XFS (dm-22): Ending clean mount<br>systemd: Unit xx.mount is bound to inactive unit dev-mapper-xx.device. Stopping, too.<br>systemd: Unmounting /mountpoint...<br>kernel: XFS (dm-22): Unmounting Filesystem<br>systemd: Unmounted /mountpoint.<br>systemd: Unit xxx.mount entered failed state.<br><br>(dev-mapper-xx being the old/removed device-mapper device)<br><br></div><div>Finally, a second set of search keywords reveals that I'm supposed to run<br># systemctl daemon-reload<br></div><div>whenever I edit fstab.<br><br></div><div>It seems the fstab file is not always authoritative anymore; the authoritative configuration<br></div>is
kept god-knows-where else. The two might not be in sync, depending on god-knows-what, and if you don't know that, systemd is now allowed to automatically unmount a perfectly good, working filesystem out from under you without any warning. A quick review of the fstab header, man fstab, man mount, etc. does not reveal any information about this newish behavior. Also, none of the commands that got me to this point gave any error or indication that this would happen. <br><br>It might be something else I did incorrectly, or something distribution-specific (RHEL 7.2), or a bug already fixed. Most certainly I have not learned enough of the new ways of systemd (and SELinux).<br><br></div>Andre<br></div>
_______________________________________________<br>
systemd-devel mailing list<br>
<a href="mailto:systemd-devel@lists.freedesktop.org" target="_blank">systemd-devel@lists.freedesktop.org</a><br>
<a href="https://lists.freedesktop.org/mailman/listinfo/systemd-devel" rel="noreferrer" target="_blank">https://lists.freedesktop.org/mailman/listinfo/systemd-devel</a><br>
</blockquote></div></div></div></div></div></div><div dir="ltr">-- <br></div><div data-smartmail="gmail_signature"><p dir="ltr">Mantas Mikulėnas <<a href="mailto:grawity@gmail.com">grawity@gmail.com</a>><br>
Sent from my phone</p>
</div>
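<div>[Editor's footnote, not part of the original thread: the unit names in the log above (xx.mount, dev-mapper-xx.device) come from systemd's path-to-unit-name escaping, normally done by <code>systemd-escape</code>. A minimal sketch of that mapping, assuming simple paths with no characters that need \xNN escaping:</div>

```shell
# Sketch of systemd's path-to-unit-name mapping; the real tool is
# `systemd-escape --path --suffix=...`, which also \xNN-escapes special
# characters. This simplified version handles plain paths only.
path_to_unit() {
    p=${1#/}    # drop the leading '/'
    # replace the remaining '/' with '-', then append the unit suffix
    printf '%s.%s\n' "$(printf '%s' "$p" | tr '/' '-')" "$2"
}

path_to_unit /mountpoint mount        # → mountpoint.mount
path_to_unit /dev/mapper/xx device    # → dev-mapper-xx.device
```

<div>This is why the mount unit for /mountpoint and the device unit for the old mapper device show up under those names in the journal; after editing fstab, <code>systemctl daemon-reload</code> is what regenerates the mount units so they bind to the new devices.</div>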