[systemd-devel] question on special configuration case

Mantas Mikulėnas grawity at gmail.com
Wed Jun 8 04:34:49 UTC 2016


This sounds like you could start by unsetting WatchdogSec= for those
daemons. Other than the watchdog, they shouldn't be using any CPU unless
explicitly contacted.
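
For example (a minimal sketch - the unit name systemd-logind.service is
just an illustration, substitute whichever daemons you mean), a drop-in
override disabling the per-unit watchdog could look like:

    # systemctl edit systemd-logind.service
    # ...which creates /etc/systemd/system/systemd-logind.service.d/override.conf:
    [Service]
    WatchdogSec=0

WatchdogSec=0 turns the watchdog logic off for that unit, so the daemon
is no longer woken up periodically to send keep-alive pings.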

On Wed, Jun 8, 2016, 02:50 Hebenstreit, Michael <
michael.hebenstreit at intel.com> wrote:

> The base system is actually pretty large (currently 1200 packages) - I
> hate that myself. Still, performance-wise the packages are not the issue.
> The SSDs used can easily handle that, and library loads only happen once
> at startup (where the difference can be measured, but if the runtime is
> 24h, a startup time of 1s is not an issue). The kernel is tweaked, but
> those changes are relatively small.
>
> The single biggest problem is OS noise, i.e. every cycle that the CPU(s)
> spend working on anything but the application. This is caused by a
> combination of "large number of nodes" and "tightly coupled job processes".
>
> Our current RH6-based system runs with a minimal number of daemons, none
> of them taking up any CPU time unless they are used. Systemd processes are
> not so well behaved: after a few hours of running they have already
> accumulated a few seconds of CPU time. On a single system - or on systems
> working independently, like server farms - that is not an issue. On our
> systems each second lost is multiplied by the number of nodes in the job
> (let's say 200, but it could also be up to 10000 or more on large
> installations) due to tight coupling. If 3 daemons use 1s a day each (and
> this is realistic on Xeon Phi Knights Landing systems), that slows
> performance down by almost 1% (3 * 200 / 86400 = 0.7% to be exact). And we
> do not gain anything from those daemons after initial startup!
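>
> To make that concrete (a minimal sketch in Python of the scaling model
> above, assuming noise events on different nodes never overlap, which is
> the worst case for a tightly coupled job):
>
>     daemons = 3           # background daemons per node
>     cpu_per_daemon = 1.0  # seconds of CPU each daemon consumes per day
>     nodes = 200           # tightly coupled nodes in the job
>     day = 86400.0         # seconds in a day
>     slowdown = daemons * cpu_per_daemon * nodes / day
>     print("{:.1%}".format(slowdown))  # prints "0.7%"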
>
> My worst experience with such issues was on a cluster that lost 20% of
> application performance due to a badly configured crond daemon. Now I do
> not expect systemd to have such a negative impact, but even 1%, or even
> 0.5%, of expected loss is too much in our case.
>
>
> -----Original Message-----
> From: Jóhann B. Guðmundsson [mailto:johannbg at gmail.com]
> Sent: Wednesday, June 08, 2016 6:10 AM
> To: Hebenstreit, Michael; Lennart Poettering
> Cc: systemd-devel at lists.freedesktop.org
> Subject: Re: [systemd-devel] question on special configuration case
>
> On 06/07/2016 10:17 PM, Hebenstreit, Michael wrote:
>
> > I understand this usage model cannot be compared to laptops or web
> > servers. But basically you are saying systemd is not usable for our
> > High Performance Computing use case and I might be better off
> > replacing it with SysV init. I was hoping for some simpler solution,
> > but if it's not possible then that's life. It will certainly make an
> > interesting topic at HPC conferences :P
>
> I personally would be interested in comparing your legacy SysV init setup
> to a systemd one, since systemd is widely deployed on embedded devices
> with a minimal build (systemd, udevd and journald) where systemd's
> footprint and resource usage have been significantly reduced.
>
> Given that I have pretty much crawled through the entire mud bath that
> makes up the core/baseOS layer in Fedora (from which RHEL and its clones
> derive) when I was working on integrating systemd into the distribution,
> I'm also interested in how you plan on making a minimal targeted base
> image which installs and uses just what you need from that (dependency)
> mess without having to rebuild those components first. (I would think
> systemd "tweaking" came after you had solved that problem, along with
> rebuilding the kernel, if your plan is to use just what you need.)
>
> JBG
> _______________________________________________
> systemd-devel mailing list
> systemd-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/systemd-devel
>

