[systemd-devel] PredictableInterfaceNames and Debian

Xen list at xenhideout.nl
Sat Apr 16 16:46:38 UTC 2016


Reindl was trying to give me links on Debian.

Andrei alluded to the fact that the infrastructure has been removed.

Indeed, in Debian's systemd 215 package the file
/lib/udev/rules.d/75-persistent-net-generator.rules still exists, but in
220 it was removed.
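For reference, that generator used to write a file like
/etc/udev/rules.d/70-persistent-net.rules, pinning each MAC address to a
name (the MAC and device here are made-up examples):

```
# PCI device 0x8086:0x10d3 (e1000e)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:aa:bb:cc", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```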

The original solution was a persistent file written to /etc.

Every other operating system probably uses a persistent state.

Three reasons were named:

- race conditions in renaming to the same namespace
- an unwritable /etc
- doesn't work in VMs

If writing persistent configuration information to any filesystem is an
impossibility anyway, you have a very limited system to begin with.

This is a kind of design choice that steers the directions you can go,
and if the design choice is inelegant, so too will be your solutions.

Personally I think it is a bad -- a terribly bad -- choice to begin with.

Persistence is the obvious solution, but now you're not allowed to use
it, and then people say that the (kernel) devs know what they are doing.

A system that cannot save autoconfiguration data is just a flawed system.

The model arises in which only an administrator can configure something
by willful choice, but the system cannot aid you in doing so.

The system therefore cannot remember what devices existed on the
previous boot.

At the same time, that is the strength of Linux: you can run it on
basically any hardware and it will just run.

And it is the horror that plagues Windows, as a Windows installation can
only be migrated to different hardware with specialist tools.

The problem is not really inherent in the design choice to save
persistent data, though, but in the inability to cope with changing
conditions.

Most of what those migration tools I mentioned do is fix missing
drivers...

As far as I can tell anyway.

Basically, from my perspective the solution that existed was perfect,
although the udev rule syntax is not that easy for a user to understand
or change.

But now we have a solution that revolves around not being able or
allowed to write persistent data.

And in order to be persistent, the new scheme uses attributes that
normally don't change -- except when you add or remove hardware, at
which point they do change -- and the resulting system does not alias
anything but just encodes the addresses into the names.

All to solve a problem without being able to use the real solution.

All because we don't want or sometimes cannot have writable filesystems
at boot.

So the stuff I wrote in ": Solution for a desktop OS" was merely meant
to simulate something that would resemble real persistence by increasing
the chances that the boot process would end up with the same solution
every time. Simply by renaming stuff after all onboard/pci non-pluggable
devices had (hopefully) been found. Not a real solution, but the closest
I can think of given that a real solution is not possible (or has been
removed).

And why networking has to be online before the root fsck has been
completed: I don't know.

And if you did need to write stuff: why not queue the changes and then
apply them after the remount rw, if that is necessary? Create an overlay
over /etc ;-), write stuff to the overlay, then remount, freeze access,
and write the changed files to the real /etc ;-).
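A rough sketch of that overlay idea (needs root and an overlayfs-capable
kernel; the paths and the final copy-back step are illustrative, not an
existing boot mechanism):

```shell
# Early boot: root is still read-only. Keep a bind mount of the real
# /etc reachable, then stack a tmpfs-backed overlay on top of /etc.
mkdir -p /run/etc-real /run/etc-ovl
mount --bind /etc /run/etc-real
mount -t tmpfs tmpfs /run/etc-ovl
mkdir -p /run/etc-ovl/upper /run/etc-ovl/work
mount -t overlay overlay \
    -o lowerdir=/etc,upperdir=/run/etc-ovl/upper,workdir=/run/etc-ovl/work \
    /etc

# Later, after fsck and "mount -o remount,rw /":
# replay the queued changes onto the real /etc and drop the overlay.
cp -a /run/etc-ovl/upper/. /run/etc-real/
umount /etc            # remove the overlay; /etc is the real tree again
umount /run/etc-real
```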

"Changes are being held until a moderator has approved it".

I'm just saying here it's rather obvious you'd end up with something
ugly if you could not use the solution that was intended for such things.

Now you require users to go out of their way manually to write changes
to /etc using ill-understood formats or hard-to-find man pages.

Without ANY of the advantages of a system (such as udev, I guess) that
can dynamically update the device list, remove devices that are no
longer there, add new ones, or move them from one hardware position to
another.

And then you say you have created the best solution. But it is so
dreadfully lacking. And then some of you say: well, if you want it, you
can do it on your own!

Sure, I can do what you haven't been able to do, or chose not to
enable -- and even removed from existence.

A flaw in design will not be so bad if the system is small. The bigger
the system is that is built on a flawed design, the bigger the flaws
will become.

LVM can merge snapshots back into the origin volume; why can't you? I
mean: postponed writing.
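For comparison, the LVM merge I mean works roughly like this (volume
and group names are made up; needs root):

```shell
# Take a snapshot of the origin volume.
lvcreate --size 1G --snapshot --name etc_snap /dev/vg0/root

# ... mount the snapshot and stage changes on it ...

# Merge the snapshot back into the origin. If the origin is in use,
# the merge is deferred until the next activation (e.g. next boot).
lvconvert --merge /dev/vg0/etc_snap
```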

The whole /run -> /var/run move was something people railed against as well.

I am saying that from my perspective this whole ro shit is causing a
bagload of trouble. "/var may be on another filesystem that is not
mounted yet" -- well yeah, then fix that issue by creating a dedicated
writable /var that always needs to be present. Ensure that some piece of
the root filesystem or its essential "neighbours" is going to be
writable. People mount /var elsewhere, but probably mostly for writing
log files or keeping databases or whatever; that mixes up a lot of
goals. If the root filesystem is writable, use the root filesystem. If
it is not, use something else that is. But don't do away with real,
persistent, writable filesystems.

Keep a place where the core system can maintain persistent data.
Separate "core" from "user" if you must. The /run we have now did create
a separation with "user"-mode /var files, but we lost real persistent
writability!

So it seems Linux has been moving in the direction of really losing that
ability.

Problem: the kernel/boot system cannot write persistent data.

SOLVE THAT.

There are only three ways:

1. Postpone writing as described
2. Ensure a real writable filesystem from the moment it is required
3. Give up and explode and try again next time.

I think the mainline kernel has what is known as overlayfs -- if you
enable an atomic remount operation that merges the overlay with the
now-rw filesystem beneath it, you will have solved the issue in a
general sense.

The only other solution falls along the lines of:
- Just don't do anything until you have fscked
- That almost sounds like fucked.

- Create a form of independent filesystem that can be independently
checked (such as /boot).
- Other solutions impose heavy requirements (add another partition,
require the use of LVM, require the use of btrfs
probably/perhaps/likely -- all things that require enabling contested
technologies or unnecessary or unwanted extras).

From these solutions, requiring an independent /boot and writing to IT
is probably the most likeable and achievable.

So basically you *could* combine overlayfs with tmpfs and enable
automatic merging on atomic remount. Maybe not as pretty, but it could
work.

The only other thing that comes to mind is: why on EARTH is it so
difficult to fsck a mounted filesystem, when I don't remember ever
having done anything else in MS Windows? Today some operations are
scheduled for reboot, true. But you could, for instance, easily defrag
while the system was in use, and many other things were possible. It was
NEVER as hard as it is in Linux. Yet Windows can fsck during boot just
fine and still keep persistent network records or whatever it is doing.

So I don't know much and I don't know everything and sometimes I don't
know anything at all, but this is just what it seems to me.

Regards, Bart Schouten.
