[systemd-devel] multiple starts for a socket-based service

Mantas Mikulėnas grawity@gmail.com
Thu Aug 3 21:09:14 UTC 2023


On Thu, Aug 3, 2023, 21:09 Ross Boylan <rossboylan@stanfordalumni.org>
wrote:

> Hi, systemd-ers.  I'm trying to do something that seems at cross
> purposes with systemd's assumptions, and I'm hoping for some guidance.
>
> Goal: a remote client sends a one-line command to a server, which
> executes a script that does not create a long-running service.
> These events will be rare.  I believe the basic model for systemd
> sockets is that the service is launched on first contact and is then
> expected to hang around to handle later requests.  Because such
> requests are rare, I'd rather that the service exit and the process be
> repeated for later connections.
>
> Is there a way to achieve this with systemd?


> It looks as if Accept=yes in the [Socket] section might work, but I'm
> not sure about the details, and have several concerns:
> 1. systemd may attempt to restart my service script if it exits (as it
> seems to have in the logs below).

> 2. man systemd.socket recommends using Accept=no "for performance
> reasons".  But that seems to imply the service that is activated must
> hang around to handle future requests.
>

That's precisely the performance reason, at least historically. Spawning a
whole new process per connection is heavier than having an already-running
daemon that at most needs to fork (and more commonly just starts a thread,
or even uses a single-threaded event loop), so while Accept=yes is
perfectly fine for your case, it's not recommended for "production" servers.

(Especially if the time needed to load up e.g. a Python interpreter, import
all the modules, etc. might be more than the time needed for the script to
actually perform its task...)

> 3. Accept=yes seems to imply that several instances of the service
> could share the same socket, which seems dicey.  Is systemd
> automatically doing the handshake for TCP sockets,


It does; that's what "accept" means in socket programming.

However, accept() does not reuse the same socket – each accepted connection
spawns a *new* socket, and with Accept=yes it's that "per-connection"
socket that's passed on to the service. The original "listening" socket
stays within systemd and is not used for communication; its only purpose
is to wait for new connections to arrive.

This is no different from how you'd handle multiple concurrent connections
in your own program – you create a base "listener" socket, bind() and
listen() on it, then each accept(listener) returns a new "client" socket.

With systemd, the default mode of Accept=no provides your service with the
"listener" socket, and it is then the service's job to loop over accept(),
handling as many clients as it wants.
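
For illustration, here's roughly what such a service's main loop looks
like in C (a sketch assuming a single socket handed over as fd 3, per the
sd_listen_fds(3) convention; error handling elided):

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>

    #define SD_LISTEN_FDS_START 3  /* first fd passed by socket activation */

    int main(void)
    {
        int listener = SD_LISTEN_FDS_START;  /* systemd's listening socket */

        for (;;) {
            /* accept() creates a brand-new per-connection socket;
               the listener itself never carries any payload data. */
            int client = accept(listener, NULL, NULL);
            if (client < 0)
                continue;

            const char reply[] = "hello\n";
            write(client, reply, strlen(reply));
            close(client);  /* closes only this one connection */
        }
    }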

For what it's worth, systemd's .socket units are based on the traditional
Unix "inetd" service that used to have the same two modes (Accept=yes was
called "nowait"). If systemd doesn't quite work for you, you can still
install and use "xinetd" on most Linux distros (and probably five or six
other similar superservers).
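
(For flavor, a rough xinetd equivalent of that "nowait" mode might look
like the following – the script path here is made up:

    service family
    {
        type        = UNLISTED
        port        = 14987
        socket_type = stream
        protocol    = tcp
        wait        = no    # inetd's "nowait" = systemd's Accept=yes
        user        = root
        server      = /usr/local/sbin/family-update
    }

One process is spawned per connection, with the accepted socket on
stdin/stdout.)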

> in which the server
> generates a new port, communicates it to the client, and it is this
> second port that gets handed to the service?  Or is the expectation
> that the client service will do that early on and close the original
> port?
>

That's not how the TCP handshake works at all. (It's how ARPANET protocols
worked fifty years ago, back when TCP/IP did not exist yet and RFCs were
numbered in the double digits.)

In TCP it is never necessary for a server to generate a new port for each
client, because the client *brings its own* port number – there's one for
each end, and the combination of <client port, server port> is what allows
distinguishing multiple concurrent connections. The sockets returned by
accept() are automatically associated with the correct endpoints.

If you look at the list of active sockets in `netstat -nt` or `ss -nt` (add
-l or -a to also show the listeners), you'll see that all connections to
your server stay on the same port 14987, but the 4-tuple <src:sport,
dst:dport> is unique for each.
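
If you want to see the same thing from inside a service: the accepted
socket already knows both endpoints. A small C sketch (IPv4 only, error
handling elided) that prints the 4-tuple of a connected fd:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    static void print_endpoints(int fd)
    {
        struct sockaddr_in local, peer;
        socklen_t len;

        len = sizeof(local);
        getsockname(fd, (struct sockaddr *)&local, &len);  /* our side */
        len = sizeof(peer);
        getpeername(fd, (struct sockaddr *)&peer, &len);   /* client side */

        /* inet_ntoa() reuses a static buffer, hence two printf() calls */
        printf("<%s:%u, ", inet_ntoa(peer.sin_addr), ntohs(peer.sin_port));
        printf("%s:%u>\n", inet_ntoa(local.sin_addr), ntohs(local.sin_port));
    }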

> Although calls to the service should be rare, it's easy to imagine
> that during development I inadvertently generate rapidly repeated
> calls so that several live processes could end up accessing the same
> socket.
>
> Finally, I'd like to ignore rapid bursts of requests.  Most systemd
> limits seem to put the unit in a permanently failed state if that
> happens, but I would prefer if that didn't happen, i.e. ignore but
> continue, rather than ignore and fail.


> My first attempt follows.  It appears to have generated 5 quick
> invocations of the script and then permanent failure of the service
> with service-start-limit-hit.  So even when the burst window ended,
> the service remained down.  I think what happened was that when my
> script finished the service unit took that as a failure (? the logs do
> show the service succeeding, though RemainAfterExit=no)  and tried to
> restart it.  It did this 5 times and then hit a default limit.  Since
> the service and the socket were then considered failed, no more
> traffic on the socket triggered any action.
>

No; more likely what happened is that you forgot Accept=yes in the socket
unit, so systemd assumed that your service would be the one to accept the
client connections. As it didn't do so at all, the listener socket
continued to have its "clients are waiting to be accepted" event raised
after the script exited, which naturally caused systemd to activate the
service again.

With Accept=yes, systemd would be the one doing the accepting – your
service could just do its thing and exit (automatically closing the
connection). Note that Accept=yes requires the service to be a
multi-instance template, "family@.service".
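
Roughly, based on your files below (a sketch, untested):

    # /etc/systemd/system/family.socket
    [Socket]
    ListenStream=192.168.1.10:14987
    BindToDevice=br0
    Accept=yes

with the service file renamed from family.service to family@.service.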


> Here are the last few log entries:
> Aug 01 01:42:55 barley systemd[1]: Finished Update kernel netboot info
> for family system.
> Aug 01 01:42:55 barley systemd[1]: Starting Update kernel netboot info
> for family system...
> Aug 01 01:42:55 barley systemd[1]: family.service: Succeeded.
> Aug 01 01:42:55 barley systemd[1]: Finished Update kernel netboot info
> for family system.
> Aug 01 01:42:55 barley systemd[1]: family.service: Start request
> repeated too quickly.
> Aug 01 01:42:55 barley systemd[1]: family.service: Failed with result
> 'start-limit-hit'.
> Aug 01 01:42:55 barley systemd[1]: Failed to start Update kernel
> netboot info for family system.
> Aug 01 01:42:55 barley systemd[1]: family.socket: Failed with result
> 'service-start-limit-hit'.
>
> And the config files
> # /etc/systemd/system/family.service
> [Unit]
> Description=Update kernel netboot info for family system
>
> [Service]
> Type=oneshot
>

oneshot doesn't really fit socket-activated units; simple (or exec) would
make more sense – even for short-lived tasks.
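
The template itself could then be as simple as (again a sketch, untested;
StandardInput=socket hands the accepted connection to the command as
stdin/stdout, inetd-style, which is handy for reading your one-line
command):

    # /etc/systemd/system/family@.service
    [Unit]
    Description=Update kernel netboot info for family system

    [Service]
    Type=exec
    StandardInput=socket
    ExecStart=/bin/sh -c 'read cmd; date >> /root/washere'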

> # next is the default. will be considered failed, and since we want it
> # to run multiple times that is good. [Well, maybe not]
> RemainAfterExit=no
> ExecStart=sh -c "date >> /root/washere"  #for testing
>
> [Install]
> WantedBy=network-online.target
>

As you want this to be per-client (i.e. Accept=yes in the socket),
[Install]-ing the service anywhere doesn't make sense, as instances of such
a service will be passed the per-connection sockets... which don't exist
yet if the service is to be started from network-online.target.

In general, you don't need an [Install] section for socket-activated
services at all.


> # /etc/systemd/system/family.socket
> [Unit]
> Description=Socket to tickle to update family netboot config
>
> [Install]
> WantedBy=network-online.target


> [Socket]
> ListenStream=192.168.1.10:14987
> BindToDevice=br0
> # 2s is default
> TriggerLimitIntervalSec=5s
>
> Thanks.
> Ross
>