[systemd-devel] Bootchart speeding up boot time

Martin Townsend mtownsend1973 at gmail.com
Tue Feb 23 16:03:00 UTC 2016


I'm pretty sure they are; both cores are part of the Xilinx Zynq SoC
platform. From its specs:
32 KB Level 1 4-way set-associative instruction and data caches
(independent for each CPU)
512 KB 8-way set-associative Level 2 cache (shared between the CPUs)

Good idea on disabling a core; that would prove or disprove my first
theory.  A bit of googling tells me there's a kernel boot argument,
'nosmp', so I'll give that a try.
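
For reference, the change I have in mind is just appending it to the
kernel command line, something like this (a sketch only -- the console
and rootfs arguments here are placeholders, not my actual bootargs):

    # in U-Boot: add 'nosmp' (or 'maxcpus=1') to the kernel command line
    setenv bootargs 'console=ttyPS0,115200 root=/dev/mmcblk0p2 rw nosmp'
    saveenv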

Cheers, Martin.


On Tue, Feb 23, 2016 at 3:33 PM, Umut Tezduyar Lindskog <umut at tezduyar.com>
wrote:

> On Mon, Feb 22, 2016 at 8:51 PM, Martin Townsend
> <mtownsend1973 at gmail.com> wrote:
> > Hi,
> >
> > Thanks for your reply.  I wouldn't really call this system stripped
> > down: it has an nginx webserver, DHCP server, postgresql-server, sftp
> > server, a few mono (C#) daemons running, loads quite a few kernel
> > modules during boot, dbus, sshd, avahi, and a bunch of other stuff I
> > can't quite remember.  I would imagine glibc will be a tiny portion of
> > what gets loaded during boot.
> > I have another ARM system which has a similar boot time with systemd.
> > It's only a single Cortex-A9 core, it's running a newer 4.1 kernel with
> > a newer version of systemd as it's built with the Jethro version of
> > Yocto (so probably a newer version of glibc), and this doesn't speed up
> > when using bootchart; in fact it slows down slightly (which is what I
> > would expect).
> > So my current thinking is that it's either down to the fact that it's a
> > dual core and only one core is being used during boot unless a
> > fork/execl occurs, or it's down to the newer kernel/systemd/glibc or
> > some other component.
>
> Are you sure both cores have the same speed and the same size of L1
> data & instruction cache?
> You could try to force the OS to run systemd on the first core by A)
> making the second one unavailable, or B) playing with control groups
> and pinning systemd to the first core.
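
(For option B, I think something along these lines would do it -- a
sketch only, untested here; CPUAffinity= in system.conf pins the service
manager, and by inheritance everything it forks, to the listed CPUs:)

    # /etc/systemd/system.conf
    [Manager]
    CPUAffinity=0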
>
> Umut
>
> >
> > Is there any way of seeing what the CPU usage on each core is for
> > systemd during boot, without using bootchart?  Then I can rule the
> > first idea in or out.
> >
> > Many Thanks,
> > Martin.
> >
> >
> > On Mon, Feb 22, 2016 at 6:52 PM, Kok, Auke-jan H
> > <auke-jan.h.kok at intel.com> wrote:
> >>
> >> On Fri, Feb 19, 2016 at 7:15 AM, Martin Townsend
> >> <mtownsend1973 at gmail.com> wrote:
> >> > Hi,
> >> >
> >> > I'm new to systemd and have just enabled it for my Xilinx-based
> >> > dual-core Cortex-A9 platform.  The Linux system is built using Yocto
> >> > (Fido branch), which is using version 219 of systemd.
> >> >
> >> > The main reason for moving over to systemd was to see if we could
> >> > improve boot times, and the good news was that just by moving over
> >> > to systemd we halved the boot time.  I then read that I could
> >> > analyse the boot times in detail using bootchart, so I set
> >> > init=/..../bootchart in my kernel command line and was really
> >> > surprised to see my boot time halved again.  Thinking some weird
> >> > caching must have occurred on the first boot, I reverted back to a
> >> > normal systemd boot and the boot time jumped back to normal (around
> >> > 17/18 seconds); putting bootchart back in again reduced it to ~9/10
> >> > seconds.
> >> >
> >> > So I created my own init, using bootchart as a template, that just
> >> > slept for 20 seconds using nanosleep, and this also had the same
> >> > effect of speeding up the boot time.
> >> >
> >> > So the only difference I can see is that the kernel is not starting
> >> > /sbin/init -> /lib/systemd/systemd directly, but via another program
> >> > that performs a fork and then, in the parent, an execl to run
> >> > /lib/systemd/systemd.  What I would really like to understand is why
> >> > it runs faster when started this way.
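
(A minimal sketch of the kind of wrapper init described above -- not the
actual code, and with all error handling omitted.  It forks, sleeps in
the child where bootchart would normally sit sampling /proc, and execs
systemd in the parent:)

    /* fake-bootchart.c: stand-in for systemd-bootchart as init. */
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
            if (fork() == 0) {
                    /* child: just sleep where bootchart would collect data */
                    struct timespec ts = { .tv_sec = 20, .tv_nsec = 0 };
                    nanosleep(&ts, NULL);
                    _exit(0);
            }

            /* parent: hand over to systemd, as bootchart does */
            execl("/lib/systemd/systemd", "/lib/systemd/systemd", (char *)NULL);
            return 1;  /* only reached if execl fails */
    }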
> >>
> >>
> >> systemd-bootchart is a dynamically linked binary. In order for it to
> >> run, it needs to dynamically link and load much of glibc into memory.
> >>
> >> If your system is really stripped down, then the portion of the data
> >> loaded from disk that comes from glibc is relatively large compared
> >> to the rest of the system.  In an absolutely minimal system, I expect
> >> it to be well over 75% of the total data loaded from disk.
> >>
> >> It seems that in your system glibc is about 50% of the data that
> >> needs to be paged in from disk; hence, by starting systemd-bootchart
> >> before systemd, you've "removed" 50% of the total data to be loaded
> >> from bootchart's point of view, since bootchart cannot start logging
> >> data until it has loaded all those glibc bits.
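
(One quick way to see what bootchart itself has to pull in before it can
log anything is to list its dynamic dependencies -- the path below is a
guess for a typical Yocto/systemd layout, adjust as needed:

    ldd /lib/systemd/systemd-bootchart

On a glibc system this will at least show the dynamic loader and
libc.so.6.)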
> >>
> >> Ultimately, your system isn't likely booting faster, you're just
> >> forcing it to load glibc before systemd starts.
> >>
> >> systemd-analyze may actually be a much better way of looking at the
> >> problem: it reports CLOCK_MONOTONIC timestamps for the various parts
> >> involved, possibly including firmware, kernel time, etc.  In
> >> conjunction with bootchart, this should give a full picture.
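
(For example -- all of these should be present in a 219-era systemd:

    systemd-analyze time            # firmware/loader/kernel/userspace split
    systemd-analyze blame           # per-unit start-up times
    systemd-analyze critical-chain  # units on the critical path of the boot
    systemd-analyze plot > boot.svg # an SVG timeline, broadly like bootchart

The firmware/loader figures only show up where that information is
available, e.g. on EFI systems.)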
> >>
> >> Auke
> >
> >
> >
>