[systemd-devel] Socket is dying, how to automatically restart it?

Koen Kooi koen at dominion.thruhere.net
Thu Apr 18 00:28:57 PDT 2013


Op 17 apr. 2013, om 21:05 heeft Lennart Poettering <lennart at poettering.net> het volgende geschreven:

> On Wed, 10.04.13 19:03, Koen Kooi (koen at dominion.thruhere.net) wrote:
> 
>> Hi,
>> 
>> I have a bit of a heisenbug where dropbear.socket will just die and
>> needs a systemctl restart dropbear.socket. I can't tell why it's
>> dying, just that it does within 3 days of uptime. After restarting it
>> it seems to be rock solid again for at least some weeks.
>> 
>> The real way to fix this is to find out why it dies, but till someone
>> figures that out I'm looking to a way to automatically restart the
>> socket when it fails, kinda like Restart=Always for services. Is such
>> a thing possible? This is with 195 and 196, haven't tried 201 yet.
> 
> So, two things:
> 
> When a service continuously dies we'll put the listening socket into
> fail state eventually. But you can see these ones easily in "systemctl
> status", since they have a specific result
> "service-failed-permanent". (results are shown next to the "Active:"
> field if a unit failed).
> 
> And if the somebody invokes shutdown() on the listening socket (not the
> connection socket), but that's a really weird thing to do. But people do
> weird things, and this has occured before.
> 
> Otherwise I have no idea what could have happened. Any chance you can
> reproduce this with strace attached to PID 1 or so?

Still trying to reproduce it in a way I can instrument it.

> Is dropbear forked off one instance per connection, or one instance for
> all?

Looks like one instance per connection. 

But I'm going to replace dropbear with openssh in the medium term because dropbear doesn't do enough PAM to register itself with logind, so things like 'screen' get killed on logout.

regards,

Koen


More information about the systemd-devel mailing list