[systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

jon jon at jonshouse.co.uk
Mon Jun 29 09:50:36 PDT 2015


On Mon, 2015-06-29 at 17:54 +0200, Reindl Harald wrote:
> Am 29.06.2015 um 17:01 schrieb jon:
> > On Mon, 2015-06-29 at 14:21 +0000, Jóhann B. Guðmundsson wrote:
> >>
> >> On 06/29/2015 02:08 PM, jon wrote:
> >>> https://www.debian.org/releases/stable/amd64/release-notes/ch-information.en.html#systemd-upgrade-default-init-system
> >>>
> >>> I just installed Debian 8.1. On the whole my reaction is mixed; one
> >>> thing, however, really pisses me off more than any other:
> >>>
> >>> "5.6.1. Stricter handling of failing mounts during boot under systemd"
> >>>
> >>> This is not "stricter", it is a change in default behaviour.
> >>>
> >>> This change is a shit idea; who do I shout at to get the behaviour
> >>> changed back to something sensible?
> >>>
> >>
> >> The systemd community only recommends what downstream consumers of it
> >> should do; it does not dictate or otherwise decide how those
> >> consumers eventually implement systemd. So if you don't like how
> >> systemd is implemented in Debian, you should voice your concerns with
> >> the Debian community.
> > Ok
> >
> > Who writes/maintains the code that parses "nofail" in /etc/fstab?
> > Who writes/maintains the typical system boot code (whatever has replaced
> > rc.sysinit)?
> >
> > I suspect the answer to both is the systemd maintainers, in which case
> > is this not the correct place to bitch about it?
> 
> i don't get what there is to "bitch about" at all
> 
> you have a mountpoint in /etc/fstab and don't care if it doesn't get
> mounted at boot, so that data gets written into the folder instead of
> the correct filesystem?
You are making assumptions!

Not everyone uses Linux in the same way. I have a number of servers that
use RAID, but others I maintain use backup disks instead.

The logic with backup disks is that the volumes are formatted and
installed, then a backup is taken. The backup disk is then removed from
the server and only put back when a backup is due.
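
The cycle is roughly this (a rough sketch only; the labels match the
fstab below, and the rsync options are just what I would typically reach
for, nothing prescriptive):

  # put the backup disk back in its bay, boot the box, then:
  mount /disks/volprbak                       # mounted by LABEL via the fstab entry
  rsync -a --delete /disks/volpr/ /disks/volprbak/
  umount /disks/volprbak
  # power down, pull the disk, back in the drawer until next time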

This machine, for example, is a Gen8 HP MicroServer; it has 4 removable
(non-hot-swap) disks.
/etc/fstab
LABEL=volpr     /disks/volpr            ext4            defaults        0       0
LABEL=volprbak  /disks/volprbak         ext4            defaults        0       0
LABEL=volpo     /disks/volpo            ext4            defaults        0       0
LABEL=volpobak  /disks/volpobak         ext4            defaults        0       0

At install time it looks like this, but after the machine is populated the
two "bak" volumes are removed. I want (and expect) them to be mounted
again when they are put back, but they spend most of the month in a desk
drawer.

It is a perfectly valid way of working, it does not cause disruption like
making and breaking mirror pairs, and most importantly it has been the way
I have worked for 10-plus years!

I have also built numerous embedded devices that have speculative fstab
entries for volumes that are only sometimes present, in the factory for
example.
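
For completeness, I understand the workaround under the new default is to
tag exactly the entries that may be absent, something along these lines (a
sketch only; whether x-systemd.device-timeout= is also worth setting is a
separate judgement call):

LABEL=volpr     /disks/volpr            ext4            defaults        0       0
LABEL=volprbak  /disks/volprbak         ext4            defaults,nofail 0       0
LABEL=volpo     /disks/volpo            ext4            defaults        0       0
LABEL=volpobak  /disks/volpobak         ext4            defaults,nofail 0       0

As I understand it, with "nofail" the boot no longer waits for, or fails
on, a missing "bak" disk, and the entry still mounts as normal at the next
boot when the disk is present. But that is exactly my complaint: every
existing fstab now has to be edited to get back what used to just work.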


> normally that is not what somebody expects, and if that is the desired
> behavior for you, just say so to your operating system and add "nofail"
This was, and most importantly IS, the behaviour I expect.

To put this in perspective: I am not complaining about the idea of
dropping into admin (emergency) mode if an FS fails to mount as such; it
just should not BREAK the previous behaviour by being the default.
The default for the past N years has been to continue booting if the FS
is missing, and that should STILL be the default.

The flag "nofail" logic is the wrong way up.  An option should have been
added to indicate that the presence of a given FS is expected and
failure to mount should crash out to admin, the install tools should
then have been modified to add this option to fstab rather than changing
the default behaviour.
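
Purely to illustrate what I mean, and to be clear that the option name
below is made up and does not exist in mount or systemd:

# "required": a hypothetical option, shown only to illustrate the proposal;
# boot would stop and drop to admin mode if this volume is missing
LABEL=volpr     /disks/volpr            ext4            defaults,required       0       0
# no option: the old behaviour, log the failure and carry on booting
LABEL=volprbak  /disks/volprbak         ext4            defaults                0       0

The installer would write the "required" option for the root and other
essential filesystems it creates, and everything else would keep the
long-standing behaviour of booting on regardless.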

This would then not break machines as they are updated to include (grits
teeth) systemd.

Thanks,
Jon



