[systemd-devel] Starting configurable set of services first

WaLyong Cho walyong.cho at samsung.com
Wed Nov 19 06:46:45 PST 2014


On 10/28/2014 01:06 AM, Umut Tezduyar Lindskog wrote:
> On Wed, Oct 22, 2014 at 7:44 PM, Lennart Poettering
> <lennart at poettering.net> wrote:
>> On Tue, 02.09.14 10:06, Umut Tezduyar Lindskog (umut at tezduyar.com) wrote:
>>
>>> Hi,
>>>
>>> I would like to start a configurable set of services first and the
>>> services are wanted by multi-user.target. I am using a service to jump
>>> to multi-user.target and I was wondering if we can support this use
>>> case natively by systemd.
>>>
>>> multi-user.target.wants
>>>   A.service
>>>   B.service
>>>   C.service
>>>   D.service
>>>
>>> default.target > stage.target
>>> stage.target.wants (These are set by generator)
>>>   A.service
>>>   C.service
>>>   switcher.service
>>>
>>> switcher.service (This is generated by generator)
>>>   [Unit]
>>>   Description=Switch to multi-user.target
>>>   After=A.service C.service
>>>   [Service]
>>>   Type=oneshot
>>>   RemainAfterExit=yes
>>>   ExecStart=/usr/bin/systemctl --no-block start multi-user.target
>>>
>>> This way I am jumping from one target to another target during runtime.
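>>>
>>> For context, a minimal sketch of the generator that populates
>>> stage.target.wants could look like the following (the generator path
>>> and the /etc/stage.conf file listing the wanted units are assumptions,
>>> not the exact implementation):
>>>
>>>   #!/bin/sh
>>>   # /usr/lib/systemd/system-generators/stage-generator (sketch)
>>>   # Generators are invoked with three output directories; use the
>>>   # "normal" one ($1) for the generated symlinks.
>>>   gendir="$1"
>>>   wantsdir="$gendir/stage.target.wants"
>>>   mkdir -p "$wantsdir"
>>>   [ -r /etc/stage.conf ] || exit 0
>>>   # /etc/stage.conf: one unit name per line (assumed format)
>>>   while read -r unit; do
>>>     [ -n "$unit" ] || continue
>>>     ln -sf "/usr/lib/systemd/system/$unit" "$wantsdir/$unit"
>>>   done < /etc/stage.conf
>>>   # switcher.service would be written into "$gendir" the same way.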
>>>
>>> - What stage.target wants is dynamic. If it were static, my job
>>> would be very simple.
>>> - I am aware of StartupCPUShares= (see the snippet after this list),
>>> but it is not the ultimate solution: A) there is a configurable
>>> minimum quota in CFS which still gives CPU to other processes, and
>>> B) we still fork the other processes, which causes changes in the
>>> timeout values of those processes.
>>> - Dynamically adding After= to the B and D service files is not the
>>> ultimate solution either, because B and D might be socket/D-Bus
>>> activated by A or C.
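>>>
>>> (The StartupCPUShares= approach I am referring to would look roughly
>>> like the drop-in below; the unit name and values are only
>>> illustrative.)
>>>
>>>   # /etc/systemd/system/A.service.d/startup.conf
>>>   [Service]
>>>   # Larger CPU weight while the system is starting up...
>>>   StartupCPUShares=2048
>>>   # ...and the normal weight (default 1024) afterwards.
>>>   CPUShares=1024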
>>>
>>> Should this be something we support natively in systemd?
>>
>> As discussed at the systemd hackfest: I am a bit conservative about
>> this, as it introduces plenty of opportunities for deadlocks, where
>> services might trigger/request some other unit but we'd delay it until
>> the later stage...
>>
>> I think the implementation you chose is actually pretty good. I am not
>> sure though that we should do this upstream. I mean, I really would
>> prefer if we'd dump as much work as possible on the IO elevator and
>> CPU scheduler, and then adjust priorities to give hints about what
>> matters more. Trying to second-guess the elevator and scheduler in
>> userspace feels a bit like chickening out to me, even though I am sure
>> that it might be something that one has to do for now, in the real
>> world...
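>>
>> (As an illustration of the kind of hints I mean; the values are only
>> examples, not a recommendation:)
>>
>>   # Drop-in for a service that matters less during boot
>>   [Service]
>>   Nice=10                  # lower CPU scheduling priority
>>   CPUShares=256            # smaller CFS group weight (default is 1024)
>>   IOSchedulingClass=idle   # IO elevator hint: run only when idle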
> 
> I don't agree with this. Once you fork the process, it will always
> get some CPU, even if you play with cpu.shares, sched_latency_ns or
> sched_min_granularity_ns. My goal is to not fork it at all until the
> high-priority services are activated, just like Before=/After=.
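>
> (For reference, these are the knobs in question; the cgroup path below
> depends on how the cpu controller is mounted on the system:)
>
>   # CFS tunables, in nanoseconds
>   cat /proc/sys/kernel/sched_latency_ns
>   cat /proc/sys/kernel/sched_min_granularity_ns
>   # Per-unit group weight under cgroup v1 (path may differ)
>   cat /sys/fs/cgroup/cpu/system.slice/B.service/cpu.shares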
> 
I have a similar problem with this, and I had introduced extra
dependencies for it, but Lennart said much the same. :)

http://lists.freedesktop.org/archives/systemd-devel/2014-February/017457.html
http://lists.freedesktop.org/archives/systemd-devel/2014-March/017524.html

So I just keep that as our downstream patch. (Some parts were enhanced,
e.g. using a hash instead of comparing line by line; that is trivial,
the basic concept is the same.) The summary: I added an option and named
it "default extra dependency" (I couldn't come up with a good
abbreviation). If a service is not listed in the ignore list for the
extra dependencies and does not set DefaultDependencies=no, then that
service is started after all of the default extra dependency units. But
we also still have many After=/Before= options in many unit files. :)
I agree with Lennart's concern: it is easy to introduce an ordering
cycle or circular dependencies this way. But we don't have another
option. In our system, CPU usage goes up to almost 100% right after
systemd starts dispatching from the job queue, and it stays there until
default.target is activated. Every new unit that enters that race makes
things slower and slower.
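
A rough sketch of the concept (the option names below are made up for
this mail and are not the actual names in our patch; only
DefaultDependencies=no is a real option):

  # /etc/systemd/system.conf -- hypothetical manager option
  [Manager]
  # Units that every ordinary service is implicitly ordered After=
  DefaultExtraDependencies=A.service C.service

  # B.service -- hypothetical per-unit opt-out
  [Unit]
  # Either of these would exempt B.service from the implicit ordering:
  # DefaultDependencies=no        (real option)
  # IgnoreExtraDependencies=yes   (hypothetical)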

I hope we can find a general way to resolve this.

WaLyong

>>
>> There's one change I'd really like to see done though in systemd, that
>> might make things nicer for you. Currently, it's undefined in systemd
>> which job is dispatched first, if multiple jobs can be executed. That
>> means when we are about to fork off a number of processes there's no
>> way to control which one gets forked off first. I'd be willing to
>> merge a patch that turns this into a prioq, so that some priority
>> value can be configured (or automatically derived) for each unit, and
>> the one with the highest priority would win, and be processed
>> first. This would not provide you with everything you want, but
>> would make things a bit nicer when we dump all possible work on the
>> scheduler/elevator, because after all we cannot really dump all work
>> at the same time, and hence should at least give you control in which
>> order to dump it, if you follow what I mean.
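>>
>> (Purely to illustrate the interface, not something that exists today,
>> the configurable priority could look roughly like this:)
>>
>>   # Hypothetical drop-in -- no such directive exists yet
>>   # /etc/systemd/system/A.service.d/priority.conf
>>   [Unit]
>>   JobDispatchPriority=100   # higher value = dispatched first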
> 
> I have understood your proposal, with the exception of one thing: when
> do we start dispatching the low-priority jobs? When the high-priority
> jobs are dispatched/forked, or when they are dispatched/activated?
> 
> Umut
> 
>>
>> Lennart
>>
>> --
>> Lennart Poettering, Red Hat

