[systemd-devel] Bootchart speeding up boot time

Martin Townsend mtownsend1973 at gmail.com
Thu Feb 25 16:06:08 UTC 2016


A bit of an update: disabling the second core didn't make much
difference, a couple of seconds at most.

I played around with my own init task based on bootchart and tracked
the speed-up down to the fact that nanosleep was being called.  The
code below gives me the same boot time improvement.  If I take out the
nanosleep, boot times slow down again.  If I put, say, 3 seconds in
the nanosleep, the boot speeds up for those 3 seconds and then slows
down.  I have no explanation as to why; the only things I know about
nanosleep are that it's a syscall and that it uses hrtimers.  If
anyone has experience in this area and can shed some light on this
problem, it would be much appreciated.

My first thought was that the hrtimer might somehow affect the
scheduler, or that idle dynticks was broken, but disabling dynticks
completely and upping the periodic rate to 1000 HZ made no difference
to boot times.

I would like to understand why this is happening; this list probably
isn't the right forum, so I would also appreciate any pointers to
where I might get some answers.
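
For reference, the tick settings I changed are the usual Kconfig
options, something like the below (written from memory, so double-check
the exact symbol names against a 3.14 tree):

    # CONFIG_NO_HZ_IDLE is not set
    CONFIG_HZ_PERIODIC=y
    CONFIG_HZ_1000=y
    CONFIG_HZ=1000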

Out of interest, does anyone else see this behaviour with a 3.14 kernel?

- Martin.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
        /*
         * If the kernel executed us through
         * init=/usr/lib/systemd/systemd-bootchart, then fork:
         * - parent execs the executable specified via init_path[]
         *   (/usr/lib/systemd/systemd by default) as pid 1
         * - child logs data
         */
        if (getpid() == 1) {
                pid_t pid;

                pid = fork();
                if (pid) {
                        /* parent: exec the real init (even if the fork
                         * failed, so the system still comes up) */
                        if (pid < 0)
                                fprintf(stderr, "Failed to create child\n");
                        execl("/lib/systemd/systemd",
                              "/lib/systemd/systemd", NULL);
                } else {
                        /* child: sleep for 20 seconds, then exit */
                        struct timespec req;

                        req.tv_sec = 20;
                        req.tv_nsec = 0;

                        /* TODO: Catch interruption and carry on sleeping */
                        (void) nanosleep(&req, NULL);
                        exit(EXIT_SUCCESS);
                }
        } else {
                fprintf(stderr,
                        "Failed to start init\n"
                        "Must be started as PID 1.\n");
                exit(EXIT_FAILURE);
        }
        return 0;
}
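
For the TODO above, the restart logic would presumably look something
like this (untested sketch; it relies on nanosleep writing the
remaining time to its second argument when interrupted):

#include <errno.h>
#include <time.h>

/* Sleep for the full requested duration, resuming after signals. */
static int sleep_full(struct timespec req)
{
        struct timespec rem;

        while (nanosleep(&req, &rem) < 0) {
                if (errno != EINTR)
                        return -1;   /* real error, give up */
                req = rem;           /* interrupted: sleep the remainder */
        }
        return 0;
}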



On Tue, Feb 23, 2016 at 3:33 PM, Umut Tezduyar Lindskog <umut at tezduyar.com>
wrote:

> On Mon, Feb 22, 2016 at 8:51 PM, Martin Townsend
> <mtownsend1973 at gmail.com> wrote:
> > Hi,
> >
> > Thanks for your reply.  I wouldn't really call this system stripped
> > down: it has an nginx webserver, a DHCP server, postgresql-server,
> > an sftp server, a few mono (C#) daemons running, loads quite a few
> > kernel modules during boot, plus dbus, sshd, avahi, and a bunch of
> > other stuff I can't quite remember.  I would imagine glibc will be
> > a tiny portion of what gets loaded during boot.
> > I have another ARM system which has a similar boot time with
> > systemd.  It has only a single Cortex-A9 core and runs a newer 4.1
> > kernel with a newer version of systemd (it's built with the Jethro
> > version of Yocto, so probably a newer version of glibc too), and it
> > doesn't speed up when using bootchart; in fact it slows down
> > slightly (which is what I would expect).
> > So my current thinking is that it's either down to the fact that
> > this is a dual core and only one core is being used during boot
> > unless a fork/execl occurs, or it's down to the newer
> > kernel/systemd/glibc or some other component.
>
> Are you sure both cores have the same speed and the same size of L1
> data and instruction caches?
> You could try to force the OS to run systemd on the first core,
> either by (A) making the second one unavailable or (B) playing with
> control groups and pinning systemd to the first core.
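>
> For (A), maxcpus=1 on the kernel command line is the usual way to do
> it.  For (B), systemd itself has a knob in system.conf (I believe it
> is already there in v219, but double-check):
>
>     # /etc/systemd/system.conf
>     [Manager]
>     CPUAffinity=0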
>
> Umut
>
> >
> > Is there any way of seeing what the CPU usage for each core is for
> > systemd on boot, without using bootchart, so I can rule the first
> > idea in or out?
> >
> > Many Thanks,
> > Martin.
> >
> >
> > On Mon, Feb 22, 2016 at 6:52 PM, Kok, Auke-jan H
> > <auke-jan.h.kok at intel.com> wrote:
> >>
> >> On Fri, Feb 19, 2016 at 7:15 AM, Martin Townsend
> >> <mtownsend1973 at gmail.com> wrote:
> >> > Hi,
> >> >
> >> > I'm new to systemd and have just enabled it for my Xilinx based
> >> > dual core Cortex-A9 platform.  The Linux system is built using
> >> > Yocto (Fido branch), which uses version 219 of systemd.
> >> >
> >> > The main reason for moving over to systemd was to see if we
> >> > could improve boot times, and the good news was that just moving
> >> > over to systemd halved the boot time.  I then read that I could
> >> > analyse the boot times in detail using bootchart, so I set
> >> > init=/..../bootchart in my kernel command line and was really
> >> > surprised to see my boot time halved again.  Thinking some weird
> >> > caching must have occurred on the first boot, I reverted back to
> >> > a normal systemd boot and the boot time jumped back to normal
> >> > (around 17/18 seconds); putting bootchart back in again reduced
> >> > it to ~9/10 seconds.
> >> >
> >> > So I created my own init, using bootchart as a template, that
> >> > just slept for 20 seconds using nanosleep, and this had the same
> >> > effect of speeding up the boot time.
> >> >
> >> > So the only difference I can see is that the kernel is not
> >> > starting /sbin/init -> /lib/systemd/systemd directly, but via
> >> > another program that performs a fork and then, in the parent, an
> >> > execl to run /lib/systemd/systemd.  What I would really like to
> >> > understand is why it runs faster when started this way.
> >>
> >>
> >> systemd-bootchart is a dynamically linked binary. In order for it to
> >> run, it needs to dynamically link and load much of glibc into memory.
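> >> You can confirm this on the target (assuming ldd is available
> >> there):
> >>
> >>     $ ldd /usr/lib/systemd/systemd-bootchart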
> >>
> >> If your system is really stripped down, then the portion of the
> >> data loaded from disk that comes from glibc is relatively large
> >> compared to the rest of the system.  In an absolutely minimal
> >> system, I would expect it to be well over 75% of the total data
> >> loaded from disk.
> >>
> >> It seems that in your system glibc is about 50% of the stuff that
> >> needs to be paged in from disk; hence, by starting
> >> systemd-bootchart before systemd, you've "removed" 50% of the
> >> total data to be loaded from bootchart's view, since bootchart
> >> cannot start logging data until it has loaded all those glibc
> >> bits.
> >>
> >> Ultimately, your system isn't likely booting faster; you're just
> >> forcing it to load glibc before systemd starts.
> >>
> >> systemd-analyze may actually be a much better way of looking at
> >> the problem: it reports CLOCK_MONOTONIC timestamps for the various
> >> parts involved, possibly including firmware, kernel time, etc.  In
> >> conjunction with bootchart, this should give a full picture.
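> >>
> >> For instance (these subcommands should all be available in v219):
> >>
> >>     $ systemd-analyze time
> >>     $ systemd-analyze blame
> >>     $ systemd-analyze critical-chain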
> >>
> >> Auke


More information about the systemd-devel mailing list