[systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

jon jon at jonshouse.co.uk
Mon Jun 29 10:50:45 PDT 2015


> >> you have a mountpoint in /etc/fstab and don't care if it doesn't get
> >> mounted at boot, so data gets written into the folder instead of the
> >> correct filesystem?
> > You are making assumptions !
> 
> no
> 
> > Not everyone uses linux in the same way. I have a number of servers that
> > are RAID, but others I maintain have backup disks instead.
> 
> and?
> 
> > The logic with backup disks is that the volumes are formatted and
> > installed, then a backup is taken. The backup disk is then removed from
> > the server and replaced only for backups.
> >
> > This machine for example is a gen8 HP microserver, it has 4 removable
> > (non-hotswap) disks.
> > /etc/fstab
> > LABEL=volpr     /disks/volpr     ext4  defaults  0  0
> > LABEL=volprbak  /disks/volprbak  ext4  defaults  0  0
> > LABEL=volpo     /disks/volpo     ext4  defaults  0  0
> > LABEL=volpobak  /disks/volpobak  ext4  defaults  0  0
> >
> > At install it looks like this, but after the machine is populated the
> > two "bak" volumes are removed. I want (and expect) them to be mounted
> > again when replaced, but they spend most of the month in a desk drawer.
> >
> > It is a perfectly valid way of working, does not cause disruption like
> > making and breaking mirror pairs - and most importantly has been the way
> > I have worked for 10 plus years !
> >
> > I have also built numerous embedded devices that have speculative fstab
> > entries for volumes that are only present sometimes, in the factory for
> > example.
> 
> and why don't you just add "nofail"
> it's way shorter than writing a ton of emails
Yes, this I have in fact done.  I have the right to complain though ! 
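
For the record, with "nofail" added, the two backup entries in the fstab
above now look something like this (same labels as before; the spacing is
just for readability):

LABEL=volprbak  /disks/volprbak  ext4  defaults,nofail  0  0
LABEL=volpobak  /disks/volpobak  ext4  defaults,nofail  0  0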

Also I get pretty fed up with changes to Linux.  It gets harder and
harder to maintain from the command line as defaults are constantly
changed and new (sometimes ill-thought-out) user space tools are added.
PCs may keep growing in memory, but as I age the reverse is true for me.

By the time I next install a machine I will have forgotten that I need
to add an extra fstab option; I will reboot, and it will bite me all
over again... just because someone thought it was not important to
preserve a more useful behaviour for me....

> >> normally that is not what somebody expects and if that is the desired
> >> behavior for you just say that to your operating system and add "nofail"
> > This was, and most importantly IS, the behaviour I expect.
> 
> what about changing your expectations?
Why ! Two can play that game.........

Unless you are going to have sshd come up before the admin shell, then
this simple change is going to annoy many people, I suspect....

None of my low-end machines have true remote admin, so if I add an entry
to fstab that is wrong, or even worse an fstab entry that seems to work
now but is for a volume that is offline when the machine reboots, I now
need to:

1) Walk down some stairs
2) Open the rack
3) Plug in a monitor and keyboard
4) Reboot the machine, as some clever hardware designer decided the VGA
display would no longer come up without reading an EDID from the monitor
first.
5) Go into the admin shell (a rough sketch of this part is below)
# <find issue>
# <fix issue>
# reboot

6) wait .....
7) See if it is now multi-user.

Go back to my office, use machine.
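
For what it is worth, the "find issue / fix issue" part from the admin
shell usually goes something like this for me (a rough sketch only; the
unit name is just an example for one of the mount points above):

# systemctl --failed               (see which mount unit failed, e.g. disks-volprbak.mount)
# journalctl -xb | grep -i mount   (read why it failed)
# vi /etc/fstab                    (add "nofail" or correct the entry)
# systemctl daemon-reload          (pick up the changed fstab)
# reboot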

I am not writing this to take the p***; this really is the type of thing
that people maintaining a few servers on a small scale have to consider.

> there are way bigger changes than the need to be specific in configs
Yes, but most of those are not configs that most USERS have written,
whereas fstab entries are....


> 
> > To get this in perspective I am not complaining about the idea of not
> > going into admin if an FS is missing, it just should not BREAK the
> > previous behaviour by being the default
> 
> why?
1) Because it needlessly breaks/disrupts the way some people work.
2) Because a machine that currently works will break (fail to boot) if
the OS is updated and the fstab is left unmodified.
3) Because by failing to go multi-user, the issue must now be fixed
locally, adding yet more needless pain that did not exist before.


> 
> > The default for the past N years has been to continue to boot if the FS
> > is missing, that should STILL be the default.
> 
> why?
See above.


> > The flag "nofail" logic is the wrong way up.  An option should have been
> > added to indicate that the presence of a given FS is expected and
> > failure to mount should crash out to admin; the install tools should
> > then have been modified to add this option to fstab rather than changing
> > the default behaviour.
> 
> nonsense, the "nofail" was *not* invented by systemd, it existed in fact 
> long before systemd
I (and I expect many others) go by the observed behaviour.

If "nofail" pre-dates systend then that is interesting, but unimportant,
as it did not, best I can tell, do anything !.  
I have never seen it in the field, so I suspect that is did nothing at
all until now?
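
Best I can now tell (and someone correct me if I have this wrong),
systemd turns every fstab line into a generated .mount unit; without
"nofail" that unit becomes a hard requirement of local-fs.target, so one
missing disk drops the whole boot into the admin shell, while with
"nofail" the mount is only wanted and boot carries on.  You can see
which way a given entry went with something like (unit name derived from
the mount point):

# systemctl show -p Requires,Wants local-fs.target
# systemctl cat disks-volprbak.mount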


Thanks,
Jon



