[systemd-devel] We are working on trying to scale up to > 1000 containers.
Lennart Poettering
lennart at poettering.net
Thu Jun 20 12:23:30 PDT 2013
On Tue, 18.06.13 09:11, Daniel J Walsh (dwalsh at redhat.com) wrote:
> One concern we have is what will happen to systemd if we start 1000 services
> at boot.
>
> systemctl start httpd_sandbox.target
>
> For example.
>
> Is there anything we can do to throttle the start of so many unit files? Or
> would systemd do something itself?
So, we have rate limits on some things. We maintain per-service rate
limits, and a rate limit in the main event loop. However, that's
really just a last-resort thing. Basically, if the event loop spins more
often than 50,000 times per second we will just block execution entirely
for 1s. So when we are doing too much at once things get awfully slow, but
we don't consume 100% CPU forever, and that's all.
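To illustrate the idea (just a sketch with made-up names, not the actual
systemd code), such an event loop rate limit boils down to something like
this:

#include <stdint.h>
#include <time.h>
#include <unistd.h>

#define RATELIMIT_INTERVAL_USEC 1000000ULL  /* 1s window */
#define RATELIMIT_BURST         50000U      /* max iterations per window */

static uint64_t now_usec(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t) ts.tv_sec * 1000000ULL + (uint64_t) ts.tv_nsec / 1000ULL;
}

/* Call this once per event loop iteration. */
void event_loop_ratelimit(void) {
        static uint64_t window_start = 0;
        static unsigned iterations = 0;
        uint64_t t = now_usec();

        if (t - window_start >= RATELIMIT_INTERVAL_USEC) {
                /* New 1s window, reset the counter */
                window_start = t;
                iterations = 0;
        }

        if (++iterations > RATELIMIT_BURST)
                /* Spinning too fast: block for 1s so we don't eat
                 * 100% CPU forever. */
                sleep(1);
}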
I have no experience with running this many services on a machine. I am
sure we can add various bits here and there to make sure things scale
nicely for this. But for that I'd really like some performance data
first, i.e. what actually happens with the current code.
Also, let me get this right: this is about not overloading the kernel
with starting up too many processes at the same time? Is this really a
problem? I figured our kernel these days wouldn't have much of a
problem with loads like this...
We have a queue of jobs we need to execute. These jobs basically map to
processes we start. We could certainly add something that throttles
dispatching of this queue if we dispatch too many of them in a short
time. With such an approach we'd continue to run the main event loop as
normal, but simply pause processing of the job queue for a while.
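As a rough sketch of what that could look like (again, made-up names and
an arbitrary cap, not actual code), the dispatcher would just cap the
number of jobs it runs per iteration and leave the rest queued:

#include <stdio.h>
#include <stdlib.h>

#define MAX_JOBS_PER_TICK 16    /* arbitrary cap, made up for the example */

typedef struct Job {
        const char *unit;       /* unit the job starts */
        struct Job *next;
} Job;

typedef struct {
        Job *head;              /* pending jobs, FIFO */
} JobQueue;

static Job *job_queue_pop(JobQueue *q) {
        Job *j = q->head;
        if (j)
                q->head = j->next;
        return j;
}

static void job_run(Job *j) {
        /* In reality this would fork off the service's processes. */
        printf("starting %s\n", j->unit);
        free(j);
}

/* Called from the main event loop; dispatches at most
 * MAX_JOBS_PER_TICK jobs. The rest stay queued until the next
 * iteration, which throttles how fast we spawn processes. */
unsigned dispatch_job_queue(JobQueue *q) {
        unsigned dispatched = 0;
        Job *j;

        while (dispatched < MAX_JOBS_PER_TICK && (j = job_queue_pop(q))) {
                job_run(j);
                dispatched++;
        }

        return dispatched;
}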
Lennart
--
Lennart Poettering - Red Hat, Inc.