[systemd-devel] mount failed during system start but after "systemctl daemon-reload" everything works
Oliver
oliver at business-security.de
Fri Apr 18 01:32:16 PDT 2014
Hello.
Could anyone tell me why a mount (whether configured via fstab or a
"mountpoint.mount" unit file) fails during system boot with a device
timeout, yet succeeds after I run "systemctl daemon-reload"?
Detailed information:
My system is Linux From Scratch 7.5 (so not a "real" distribution -
everything is self-compiled) and it runs as a paravirtualized Xen DomU.
The block devices are therefore /dev/xvda1 and /dev/xvdb1.
The first holds the root filesystem, and mounting and remounting it is
okay. The second block device should then be mounted, but it times out
with "Dependency failed" and "dev-xvdb1.device/start timed out".
When I run "udevadm info /dev/xvdb1" everything seems to be okay, but
any try of mount this via systemd failes. When I mount manually via
"mount /dev/xvdb1 /mountpoint" it's fine. Then "systemctl status
mountpoint.mount" says "active".
Manually unmount is okay and after this a mount via systemd failes again.
Only if I run "systemctl daemon-reload" and then "systemctl start
mountpoint.mount" does the mount succeed.
I'm a beginner with systemd-based systems and do not know much about
the internals. What could lead to this behaviour? Is it possible that I
am doing something wrong?
Please help, I'm very frustrated. If you need more input, please tell me.
Best regards
Oliver