[systemd-devel] name=systemd cgroup mounts/hierarchy

Michal Koutný mkoutny at suse.com
Wed Nov 18 18:25:44 UTC 2020


Thanks for the details.

On Mon, Nov 16, 2020 at 09:30:20PM +0300, Andrei Enshin <b1os at bk.ru> wrote:
> I see the kubelet crash with error: «Failed to start ContainerManager failed to initialize top level QOS containers: root container [kubepods] doesn't exist»
> details:  https://github.com/kubernetes/kubernetes/issues/95488
I skimmed the issue and noticed that your setup uses the 'cgroupfs' cgroup
driver. As explained in the other messages in this thread, that driver
conflicts with systemd's management of the root cgroup tree.
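For illustration, a rough way to check which driver the kubelet is using
(this assumes a kubeadm-style layout; the config path and flags may differ
on your nodes):

  # kubelet command line (the --cgroup-driver flag, if passed explicitly)
  ps -o args= -C kubelet | tr ' ' '\n' | grep cgroup-driver

  # or the KubeletConfiguration file (cgroupDriver: systemd or cgroupfs)
  grep -i cgroupdriver /var/lib/kubelet/config.yaml

With the systemd driver the kubelet asks systemd to create the kubepods
slice instead of manipulating /sys/fs/cgroup behind systemd's back; the
container runtime then has to be switched to the same driver.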

> I can see the same two mounts of the named systemd hierarchy from a shell on the same node, simply with `$ cat /proc/self/mountinfo`
> I think kubelet is running in the «main» mount namespace which has the weird named systemd mount.
I assume so as well. It may be residue left in the kubelet's context from
when the environment was prepared for a container spawned from within that
context.
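If you want to verify where that mount actually lives, comparing the
kubelet's view with PID 1's view is a quick check (assuming the kubelet
runs directly on the host so pidof can find it):

  # the name=systemd hierarchy as seen by systemd (PID 1) and by kubelet
  grep name=systemd /proc/1/mountinfo
  grep name=systemd /proc/$(pidof kubelet)/mountinfo

  # same inode => the two really share one mount namespace
  readlink /proc/1/ns/mnt /proc/$(pidof kubelet)/ns/mnt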

> I would like to reproduce such a weird mount to understand the full
> situation and make sure I can avoid it in the future.
I'm afraid you may be seeing the results of various races between systemd
service (de)activation and container spawning under the "shared" root
(both of which involve cgroup creation/removal and process migration).
This is the reason cgroup subtree delegation exists; see the sketch below.
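As a sketch of what delegation buys you (the unit name is just an example,
adjust it to whatever manages your containers): a delegated unit owns its
cgroup subtree, so systemd will not create, remove or migrate anything
below it, and the races above cannot happen there.

  # does the container runtime's unit already get a delegated subtree?
  systemctl show -p Delegate containerd.service

  # run something with its own delegated subtree as a transient scope
  systemd-run --scope -p Delegate=yes /path/to/container-manager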

So I'd say there's not much to be done on the systemd side for now.


Michal
