[systemd-devel] How to factory reset?

Chris Murphy lists at colorremedies.com
Wed Mar 11 11:45:50 PDT 2015

On Wed, Mar 11, 2015 at 11:50 AM, Kay Sievers <kay at vrfy.org> wrote:
> On Wed, Mar 11, 2015 at 6:32 PM, Chris Murphy <lists at colorremedies.com> wrote:
>> The bootloader configuration files aren't signed. Maybe they should be.
> With systemd-boot, there will be no config to sign:
>   https://harald.hoyer.xyz/2015/02/25/single-uefi-executable-for-kernelinitrdcmdline/

That's definitely an improvement.

>> And maybe done away with in favor of dynamic discovery and "hot" keys
>> for indicating common boot options.
> The "all included" kernels are found at /boot/EFI/Linux/*.efi

Yeah, until the distros stop persistently mounting the ESP, I'm not a
fan of anything but the most minimalist approach to the ESP. The FAT
kernel maintainer says it's a bad idea: pretty much any crash or panic
while the ESP is mounted, even read-only, can cause FAT corruption,
and there's nothing to be done about it (well, fsck'ing it at every
boot might help some, and some distros never do even that).
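One way to narrow that corruption window, rather than keeping the ESP
persistently mounted, is an on-demand automount that unmounts when idle.
A sketch, assuming systemd's fstab automount options; the device,
mount point, and timeout below are illustrative, not prescriptive:

```shell
# Sketch: generate an fstab entry that mounts the ESP on first access
# (x-systemd.automount) and unmounts it again when idle
# (x-systemd.idle-timeout), so it is not persistently mounted.
# Device and mount point arguments are illustrative assumptions.
esp_fstab_line() {
    printf '%s\n' \
        "$1 $2 vfat noauto,x-systemd.automount,x-systemd.idle-timeout=2min 0 2"
}

# Usage (commented out; an admin would append this to /etc/fstab):
# esp_fstab_line /dev/sda1 /boot/efi >> /etc/fstab
```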

>> Any general purpose solution
>> should account for degraded bootable raid, which means each ESP needs
>> to be identical. Either each ESP bootloader looks to a single location
>> on raid for configuration, or uses dynamic discovery, or some system
>> of sequentially updating each ESP needs to be devised.
> We get that transparently from firmwares with "bios raid" support.

a.) such support lacks widespread availability
b.) Intel IMSM requires mdadm to manage it once the kernel is running,
and I'm not aware of any support for AMD's equivalent on Linux
c.) Intel IMSM takes an all-or-nothing approach to whole devices: they
first go into a container, making both LVM and Btrfs raid impossible
on those devices.

> We
> will not care about any sort of conventional "software raid", because
> the firmware itself will not handle it, and it makes not much sense to
> use over-complicated options in the later boot steps when it cannot
> recover itself anyway.

OK, except this has worked just fine on BIOS systems for years, and
they recover OK.

GRUB2's md/RAID support allows degraded booting of RAID 1, 10, 4, 5,
and 6. The identical core.img is embedded in the MBR gap or BIOSBoot
partition of each drive. Each core.img looks to the same location on
the array, e.g. /boot/grub, and can even find the kernel and initramfs
on a degraded raid6. I don't especially care for that use case, but it
does work, completely unattended. And at least a two-disk raid1
degraded boot ought to work.
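The redundancy described above comes from installing the identical
core.img to every member drive. A minimal sketch; the installer name is
passed in because it varies by distro (grub-install vs. grub2-install),
and the drive names in the usage line are examples only:

```shell
# Sketch: embed GRUB's core.img in the MBR gap / BIOSBoot partition of
# each member drive, so any surviving drive can boot the degraded array.
# $1 is the installer command (distro-dependent); remaining arguments
# are the member drives.
install_grub_all() {
    installer=$1; shift
    for dev in "$@"; do
        "$installer" "$dev" || return 1
    done
}

# Usage (requires root; drive names are examples):
# install_grub_all grub2-install /dev/sda /dev/sdb
```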

> For a single-system disk, the entire /boot, ESP content should rather be
> seen as throw-way content which can be re-constructed from a running
> system, from the content in /usr, at any given time.

I agree with this. But it should be a very simple additional step to
apply this to all ESPs on the system, to make sure every one of them
enables the system to find a kernel and boot.
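That additional step could be as simple as replicating the primary
ESP's contents to every other ESP. A sketch only; the mount points in
the usage line are illustrative, and a real tool would also handle
mounting, unmounting, and error reporting:

```shell
# Sketch: copy the contents of the primary ESP ($1) to every other ESP
# mount point given, so each drive can independently find a kernel and
# boot the system.
sync_esps() {
    src=$1; shift
    for esp in "$@"; do
        cp -a "$src/." "$esp/" || return 1
    done
}

# Usage (mount points are examples):
# sync_esps /boot/efi /boot/efi2
```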

> There is no
> point in handling raid without native firmware support; manual
> intervention is needed anyway on these systems if things go wrong, and
> that step can just re-create the ESP content if needed.

I don't agree with this. OS X has supported this for 10+ years on EFI
with software raid, and no such thing as firmware raid. No manual
intervention is required in the degraded boot case. I'm not exactly
sure how Windows works, but it has bootable mirrored software raid
that doesn't depend on firmware raid.

Having to depend on proprietary firmware RAID to get resilient UEFI
boot is not a general-purpose solution. It wasn't necessary on BIOS
systems, and there's no good reason to require it now.

Chris Murphy

More information about the systemd-devel mailing list