[systemd-devel] Antw: [EXT] Re: Still confused with socket activation
Ulrich Windl
Ulrich.Windl at rz.uni-regensburg.de
Fri Feb 5 06:55:24 UTC 2021
>>> Ulrich Windl wrote on 03.02.2021 at 10:34 in message <601A6E3D.E40:161:60728>:
>>>> Lennart Poettering <lennart at poettering.net> wrote on 02.02.2021 at 15:59 in message <20210202145954.GB36677 at gardel-login>:
> > On Tue, 02.02.21 10:43, Ulrich Windl (Ulrich.Windl at rz.uni-regensburg.de) wrote:
> >
> >> Hi!
> >>
> >> Having:
> >> ---
> >> # /usr/lib/systemd/system/virtlockd.service
> >> [Unit]
> >> Description=Virtual machine lock manager
> >> Requires=virtlockd.socket
> >> Requires=virtlockd-admin.socket
> >> Before=libvirtd.service
> >> ...
> >> ---
> >>
> >> How would I start both sockets successfully under program control?
> >> If I start one socket, I cannot start the other without an error (as
> >> libvirtd.service is running already; see my earlier message from last week).
> >> If I mask the socket units, I cannot start libvirtd.service.
> >> So should I disable the socket units and start libvirtd.service?
> >> Unfortunately, if someone (e.g. an update while the vendor preset is
> >> enabled) re-enables the socket units, it would break things, so I tried to
> >> mask them, but that failed, too:
> >> error: Could not issue start for prm_virtlockd: Unit virtlockd.socket is masked.
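
(Side note: systemctl accepts several units in one invocation, e.g.

# systemctl start virtlockd.socket virtlockd-admin.socket

so under program control both sockets can at least be requested together;
whether that avoids the error seen when starting them one at a time is
untested here.)
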
> >
> > I don't grok what you are trying to say, the excerpt of the unit file
> > is too short. Please provide the relevant parts of the other unit
> > files too.
>
> So that you get it:
>
>
> # systemctl cat virtlockd.service
> # /usr/lib/systemd/system/virtlockd.service
> [Unit]
> Description=Virtual machine lock manager
> Requires=virtlockd.socket
> Requires=virtlockd-admin.socket
> Before=libvirtd.service
> Documentation=man:virtlockd(8)
> Documentation=https://libvirt.org
>
> [Service]
> EnvironmentFile=-/etc/sysconfig/virtlockd
> ExecStart=/usr/sbin/virtlockd $VIRTLOCKD_ARGS
> ExecReload=/bin/kill -USR1 $MAINPID
> # Losing the locks is a really bad thing that will
> # cause the machine to be fenced (rebooted), so make
> # sure we discourage OOM killer
> OOMScoreAdjust=-900
> # Needs to allow for max guests * average disks per guest
> # libvirtd.service written to expect 4096 guests, so if we
> # allow for 10 disks per guest, we get:
> LimitNOFILE=40960
>
> [Install]
> Also=virtlockd.socket
>
> # /run/systemd/system/virtlockd.service.d/50-pacemaker.conf
> [Unit]
> Description=Cluster Controlled virtlockd
> Before=pacemaker.service pacemaker_remote.service
>
> [Service]
> Restart=no
>
> # systemctl cat virtlockd.socket
> # /usr/lib/systemd/system/virtlockd.socket
> [Unit]
> Description=Virtual machine lock manager socket
> Before=libvirtd.service
>
> [Socket]
> ListenStream=/run/libvirt/virtlockd-sock
> SocketMode=0600
>
> [Install]
> WantedBy=sockets.target
>
> # /usr/lib/systemd/system/virtlockd-admin.socket
> [Unit]
> Description=Virtual machine lock manager admin socket
> Before=libvirtd.service
> BindsTo=virtlockd.socket
> After=virtlockd.socket
>
> [Socket]
> ListenStream=/run/libvirt/virtlockd-admin-sock
> Service=virtlockd.service
> SocketMode=0600
>
> [Install]
> WantedBy=sockets.target
>
> To make things worse: libvirtd also requires virtlockd:
>
> # systemctl cat libvirtd.service
> # /usr/lib/systemd/system/libvirtd.service
> [Unit]
> Description=Virtualization daemon
> Requires=virtlogd.socket
> Requires=virtlockd.socket
> # Use Wants instead of Requires so that users
> # can disable these three .socket units to revert
> # to a traditional non-activation deployment setup
> Wants=libvirtd.socket
> Wants=libvirtd-ro.socket
> Wants=libvirtd-admin.socket
> Wants=systemd-machined.service
> Before=libvirt-guests.service
> After=network.target
> After=dbus.service
> After=iscsid.service
> After=apparmor.service
> After=local-fs.target
> After=remote-fs.target
> After=systemd-logind.service
> After=systemd-machined.service
> After=xencommons.service
> Conflicts=xendomains.service
> Documentation=man:libvirtd(8)
> Documentation=https://libvirt.org
>
> [Service]
> Type=notify
> EnvironmentFile=-/etc/sysconfig/libvirtd
> ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS
> ExecReload=/bin/kill -HUP $MAINPID
> KillMode=process
> Restart=on-failure
> # At least 1 FD per guest, often 2 (eg qemu monitor + qemu agent).
> # eg if we want to support 4096 guests, we'll typically need 8192 FDs
> # If changing this, also consider virtlogd.service & virtlockd.service
> # limits which are also related to number of guests
> LimitNOFILE=8192
> # The cgroups pids controller can limit the number of tasks started by
> # the daemon, which can limit the number of domains for some hypervisors.
> # A conservative default of 8 tasks per guest results in a TasksMax of
> # 32k to support 4096 guests.
> TasksMax=32768
>
> [Install]
> WantedBy=multi-user.target
> Also=virtlockd.socket
> Also=virtlogd.socket
> Also=libvirtd.socket
> Also=libvirtd-ro.socket
>
> # systemctl cat libvirtd.socket
> # /usr/lib/systemd/system/libvirtd.socket
> [Unit]
> Description=Libvirt local socket
> Before=libvirtd.service
>
>
> [Socket]
> # The directory must match the /etc/libvirt/libvirtd.conf unix_sock_dir setting
> # when using systemd version < 227
> ListenStream=/run/libvirt/libvirt-sock
> Service=libvirtd.service
> SocketMode=0666
>
> [Install]
> WantedBy=sockets.target
>
> # systemctl cat libvirtd-admin.socket
> # /usr/lib/systemd/system/libvirtd-admin.socket
> [Unit]
> Description=Libvirt admin socket
> Before=libvirtd.service
> BindsTo=libvirtd.socket
> After=libvirtd.socket
>
>
> [Socket]
> # The directory must match the /etc/libvirt/libvirtd.conf unix_sock_dir setting
> # when using systemd version < 227
> ListenStream=/run/libvirt/libvirt-admin-sock
> Service=libvirtd.service
> SocketMode=0600
>
> [Install]
> WantedBy=sockets.target
>
> # systemctl cat libvirtd-ro.socket
> # /usr/lib/systemd/system/libvirtd-ro.socket
> [Unit]
> Description=Libvirt local read-only socket
> Before=libvirtd.service
> BindsTo=libvirtd.socket
> After=libvirtd.socket
>
>
> [Socket]
> # The directory must match the /etc/libvirt/libvirtd.conf unix_sock_dir setting
> # when using systemd version < 227
> ListenStream=/run/libvirt/libvirt-sock-ro
> Service=libvirtd.service
> SocketMode=0666
>
> [Install]
> WantedBy=sockets.target
>
> # systemctl cat libvirtd-tcp.socket
> # /usr/lib/systemd/system/libvirtd-tcp.socket
> [Unit]
> Description=Libvirt non-TLS IP socket
> Before=libvirtd.service
> BindsTo=libvirtd.socket
> After=libvirtd.socket
>
>
> [Socket]
> # This must match the /etc/libvirt/libvirtd.conf tcp_port setting
> # when using systemd version < 227
> ListenStream=16509
> Service=libvirtd.service
>
> [Install]
> WantedBy=sockets.target
>
> # systemctl cat libvirtd-tls.socket
> # /usr/lib/systemd/system/libvirtd-tls.socket
> [Unit]
> Description=Libvirt TLS IP socket
> Before=libvirtd.service
> BindsTo=libvirtd.socket
> After=libvirtd.socket
>
>
> [Socket]
> # This must match the /etc/libvirt/libvirtd.conf tls_port setting
> # when using systemd version < 227
> ListenStream=16514
> Service=libvirtd.service
>
> [Install]
> WantedBy=sockets.target
>
> (You asked for it; you got it ;-)
>
> >
> > Masking is a big hammer, the last resort. It should not be part of the
> > usual workflow.
> >
> > Note that Requires= in almost all cases should be combined with an
> > order dep of After= onto the same unit. If the unit above doesn't do
> > that it's almost certainly broken.
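
(Indeed, virtlockd.service above has Requires=virtlockd.socket and
Requires=virtlockd-admin.socket but no matching After= lines. A minimal
sketch of a drop-in that would add the ordering described here, with a file
name made up for illustration and picked up after systemctl daemon-reload:

# /etc/systemd/system/virtlockd.service.d/10-after-sockets.conf
[Unit]
After=virtlockd.socket virtlockd-admin.socket

Untested against this setup.)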
>
> The mess is that systemd starts the services when it should not:
>
> Feb 02 12:09:09 h18 systemd[1]: Starting Virtualization daemon...
> Feb 02 12:09:09 h18 systemd[1]: Started Virtualization daemon.
> Feb 02 12:09:09 h18 systemd[1]: Started Virtual machine lock manager.
>
> The actual start should happen later:
> Feb 02 12:09:11 h18 pacemaker-execd[18833]: notice: executing - rsc:prm_virtlockd action:start call_id:88
> Feb 02 12:09:14 h18 pacemaker-execd[18833]: notice: executing - rsc:prm_libvirtd action:start call_id:90
>
> The reason is that virtlockd uses a filesystem that has to be mounted
> first, and it should terminate before that filesystem is unmounted.
>
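For this kind of dependency systemd offers RequiresMountsFor=: it creates
Requires= and After= dependencies on the mount units needed for the listed
paths, so the service starts only after the mount and, by the reverse
shutdown ordering, is stopped before the unmount. A minimal sketch; the
lock directory path here is an assumption and depends on the virtlockd
lockspace configuration:

# /etc/systemd/system/virtlockd.service.d/20-lockspace.conf
[Unit]
RequiresMountsFor=/var/lib/libvirt/lockd
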
> The status is:
> # systemctl status libvirtd\* | grep Loaded
> Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; disabled; vendor preset: enabled)
> Loaded: loaded (/usr/lib/systemd/system/libvirtd-admin.socket; disabled; vendor preset: disabled)
> Loaded: loaded (/usr/lib/systemd/system/libvirtd.socket; disabled; vendor preset: disabled)
> Loaded: loaded (/usr/lib/systemd/system/libvirtd-ro.socket; disabled; vendor preset: disabled)
> # systemctl status virtlock\* | grep Loaded
> Loaded: loaded (/usr/lib/systemd/system/virtlockd.service; indirect; vendor preset: disabled)
> Loaded: loaded (/usr/lib/systemd/system/virtlockd.socket; disabled; vendor preset: disabled)
> Loaded: loaded (/usr/lib/systemd/system/virtlockd-admin.socket; disabled; vendor preset: disabled)
>
> So everything is disabled, but somehow it still starts automatically...
I found out that the services seem to start "indirect"ly (I couldn't find
documentation for that), and there exist "Drop-In"s in /run/systemd/system/...
for which I could not find out who creates them:
h19:~ # systemctl status virtlockd
● virtlockd.service - Cluster Controlled virtlockd
   Loaded: loaded (/usr/lib/systemd/system/virtlockd.service; indirect; vendor preset: disabled)
  Drop-In: /run/systemd/system/virtlockd.service.d
           └─50-pacemaker.conf
   Active: active (running) since Thu 2021-02-04 15:41:25 CET; 16h ago
     Docs: man:virtlockd(8)
           https://libvirt.org
 Main PID: 8764 (virtlockd)
    Tasks: 1
   CGroup: /system.slice/virtlockd.service
           └─8764 /usr/sbin/virtlockd
Feb 04 15:41:25 rksaph19 systemd[1]: Started Cluster Controlled virtlockd.
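
For what it is worth, systemctl(1) describes "indirect" as a unit file that
is not enabled itself but has a non-empty Also= in its [Install] section;
virtlockd.service has Also=virtlockd.socket, as shown above. That state
starts nothing by itself; what actually pulled a unit in can be inspected
with, e.g.:

h19:~ # systemctl show virtlockd.service -p TriggeredBy -p RequiredBy -p WantedBy
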
/run/systemd/system/libvirtd.service.service.d/50-pacemaker.conf:
[Unit]
Description=Cluster Controlled libvirtd.service
Before=pacemaker.service pacemaker_remote.service
[Service]
Restart=no
/run/systemd/system/virtlockd.service.d/50-pacemaker.conf:
[Unit]
Description=Cluster Controlled virtlockd
Before=pacemaker.service pacemaker_remote.service
[Service]
Restart=no
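
Judging by the file name 50-pacemaker.conf and the "Cluster Controlled"
descriptions, these drop-ins are presumably created by pacemaker itself
when it manages a systemd-class resource; they live under the volatile
/run/systemd/system tree, so they disappear on reboot. Their presence can
be checked with, e.g.:

h19:~ # ls -l /run/systemd/system/*.d/50-pacemaker.conf
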
Regards,
Ulrich
...