[systemd-devel] more verbose debug info than systemd.log_level=debug?

Chris Murphy lists at colorremedies.com
Fri Mar 17 04:19:22 UTC 2017

I've got a bug on Fedora 22, 23, 24, and 25 where a systemd offline
update of the kernel results in an unbootable system, but only on XFS
(/boot is a directory); the system boots to a grub menu. The details
are in this bug's comment:


The gist is that the file system is dirty following the offline
update, and grub.cfg is 0 length. If the file system is mounted from a
rescue system, the XFS journal is replayed and cleans things up: there
is then a valid grub.cfg, and at the next reboot there is a grub menu,
as expected, with the newly installed kernel.
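From a rescue environment the recovery amounts to a mount (which
replays the XFS journal) followed by an unmount. A rough sketch,
assuming the affected root volume is /dev/vda2 (an assumption; adjust
to the real device) and Fedora's default /boot/grub2/grub.cfg path:

```shell
# Mounting a dirty XFS volume replays its journal automatically.
DEV=/dev/vda2          # assumption: the affected root device
MNT=$(mktemp -d)
if [ -b "$DEV" ]; then
  mount "$DEV" "$MNT"                  # XFS log replay happens here
  ls -l "$MNT/boot/grub2/grub.cfg"     # should now be non-zero length
  umount "$MNT"                        # leaves the filesystem clean
fi
```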

That bug was reported on bare metal by another user, but I've
reproduced it in qemu-kvm using the boot parameters
systemd.log_level=debug systemd.log_target=console
console=ttyS0,38400, with virsh console to capture what goes on during
the offline update that leaves the file system dirty.
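For anyone reproducing this: on Fedora those parameters can be made
persistent with grubby instead of being typed at the grub prompt each
boot. A sketch (needs root; --update-kernel=ALL applies the arguments
to every installed kernel):

```shell
# Append the debug parameters to every installed kernel's command line.
ARGS="systemd.log_level=debug systemd.log_target=console console=ttyS0,38400"
if command -v grubby >/dev/null 2>&1; then
  grubby --update-kernel=ALL --args="$ARGS"
fi
```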

What I get is more confusing than helpful:

Sending SIGTERM to remaining processes...
Sending SIGKILL to remaining processes...
Process 304 (plymouthd) has been marked to be excluded from killing.
It is running from the root file system, and thus likely to block
re-mounting of the root file system to read-only. Please consider
moving it into an initrd file system instead.
Unmounting file systems.
Remounting '/tmp' read-only with options 'seclabel'.
Unmounting /tmp.
Remounting '/' read-only with options 'seclabel,attr2,inode64,noquota'.
Remounting '/' read-only with options 'seclabel,attr2,inode64,noquota'.
Remounting '/' read-only with options 'seclabel,attr2,inode64,noquota'.
All filesystems unmounted.
Deactivating swaps.
All swaps deactivated.
Detaching loop devices.
device-enumerator: scan all dirs
  device-enumerator: scanning /sys/bus
  device-enumerator: scanning /sys/class
All loop devices detached.
Detaching DM devices.
device-enumerator: scan all dirs
  device-enumerator: scanning /sys/bus
  device-enumerator: scanning /sys/class
All DM devices detached.
Spawned /usr/lib/systemd/system-shutdown/mdadm.shutdown as 8408.
/usr/lib/systemd/system-shutdown/mdadm.shutdown succeeded.
system-shutdown succeeded.
Failed to read reboot parameter file: No such file or directory
[   52.963598] Unregister pv shared memory for cpu 0
[   52.965736] Unregister pv shared memory for cpu 1
[   52.966795] sd 1:0:0:0: [sda] Synchronizing SCSI cache
[   52.991220] reboot: Restarting system
[   52.993119] reboot: machine restart
<no further entries, VM shuts down>

1. Why are there three remount-read-only entries? Are these failing?
The same three entries appear when the file system is Btrfs, so it's
not an XFS-specific anomaly.

2. "All filesystems unmounted." What condition is required to generate
this message? I'm asking whether it's reliable, or whether, after
three failed read-only remounts, systemd gives up, claims the file
systems are unmounted, and then reboots anyway.

There is an XFS-specific problem here, since the dirty-fs problem only
happens on XFS; the file system is clean when it's ext4 or Btrfs.
Nevertheless it looks like something is holding up the remount, and no
return value from umount is logged.

Is there a way to get more information during shutdown than this? The
question at this point is why the XFS volume is dirty at reboot time,
but there's not much to go on: I get exactly the same console messages
with ext4 and Btrfs, which don't have a dirty file system at reboot
following an offline update.
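One way to squeeze out a bit more shutdown-time state is the same hook
directory that ran mdadm.shutdown in the log above: systemd-shutdown
executes every executable in /usr/lib/systemd/system-shutdown/, with a
single argument (halt, poweroff, reboot, or kexec), after its unmount
attempts and just before the final reboot. A sketch of a hypothetical
debug hook (written to the current directory here; installing it into
that directory, executable, is the root-only step). Since the root
file system is read-only by then, it logs to /dev/kmsg rather than to
a file:

```shell
# Generate a hypothetical shutdown debug hook named debug.shutdown.
# Install target would be /usr/lib/systemd/system-shutdown/debug.shutdown.
HOOK=./debug.shutdown
cat > "$HOOK" <<'EOF'
#!/bin/sh
# Root is read-only at this point, so log to the kernel ring buffer.
{
  echo "debug.shutdown: called with arg=$1"   # halt/poweroff/reboot/kexec
  cat /proc/self/mountinfo                    # what is still mounted now
} > /dev/kmsg 2>&1
EOF
chmod +x "$HOOK"
```

Whatever the hook writes to /dev/kmsg should then appear on the serial
console alongside the kernel's own shutdown messages.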


Chris Murphy
