> Hmm? Hard requirement of what? Not following?

The hard requirement my project has is that processes must stay alive
even if the daemon that forked them dies. Roughly, this is how a batch
scheduler works: a controller sends a request to my daemon to launch a
process on behalf of a user, and my daemon fork-execs it. At some
point my daemon may be stopped, restarted, upgraded, whatever, but the
forked processes must always stay alive, because they are continuing
their work. We are talking about the HPC world here.
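
To make that concrete, the launch step is essentially a plain
fork/exec where the child detaches itself from the daemon, something
like this simplified sketch (error handling and switching credentials
to the target user omitted):

#include <signal.h>
#include <unistd.h>

/* Simplified sketch of the launch step: fork/exec a user's job so
 * that nothing ties it to the daemon's lifetime. */
static pid_t launch_job(char *const argv[])
{
        pid_t pid = fork();

        if (pid == 0) {
                setsid();                /* new session, detached from the daemon */
                signal(SIGHUP, SIG_IGN); /* keep running if the daemon goes away */
                execvp(argv[0], argv);
                _exit(127);              /* only reached if exec failed */
        }

        return pid;                      /* daemon side: remember the child */
}

Of course, as long as such children sit in my service's own cgroup,
the default KillMode=control-group means systemd will still kill them
when the unit is stopped, which is exactly the behavior I need to
avoid.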

> You are leaving processes around when your service dies/restarts?

Yes.

> That's a bad idea typically, and generally a hack: the unit should
> probably be split up differently, i.e. the processes that shall stick
> around on restart should probably be in their own unit, i.e. another
> service or scope unit.

So, if I understand it correctly, you are suggesting that every forked
process must be started through a new systemd unit? If that's the
case, it seems inconvenient, because we are talking about a job
scheduler that may sometimes launch thousands of processes in quick
succession, and where performance is key. Managing one unit per
process will probably not work here in terms of performance.

The other option I can imagine is to start a new unit of Type=forking
from my daemon, which stays around until I decide to clean it up, even
if it no longer contains any process. Then I could put my processes
into its associated cgroup instead of into the main daemon's cgroup.
Would that make sense?

The issue here is that to create the new unit I'd need my daemon to
depend on the systemd libraries, or to fork-exec systemd commands and
parse their output. I am trying to keep the dependencies to a minimum,
and I'd love to have an alternative.
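
For reference, if I understand the D-Bus API correctly, the library
route would look roughly like the sketch below: asking PID 1 for a
transient scope unit that adopts an already-forked PID (untested, the
unit name is made up, and this is exactly the libsystemd dependency
I'd like to avoid):

#include <stdint.h>
#include <systemd/sd-bus.h>

/* Untested sketch: ask PID 1 to wrap an already-forked child in a
 * transient scope unit, so that it survives this daemon's restart. */
static int move_to_scope(pid_t child)
{
        sd_bus *bus = NULL;
        sd_bus_message *m = NULL, *reply = NULL;
        sd_bus_error error = SD_BUS_ERROR_NULL;
        int r;

        r = sd_bus_default_system(&bus);
        if (r < 0)
                return r;

        r = sd_bus_message_new_method_call(bus, &m,
                        "org.freedesktop.systemd1",
                        "/org/freedesktop/systemd1",
                        "org.freedesktop.systemd1.Manager",
                        "StartTransientUnit");
        if (r >= 0) {
                /* unit name, mode, properties, no auxiliary units */
                sd_bus_message_append(m, "ss", "myjob-1234.scope", "fail");
                sd_bus_message_open_container(m, 'a', "(sv)");
                sd_bus_message_append(m, "(sv)", "PIDs", "au",
                                      1, (uint32_t) child);
                sd_bus_message_append(m, "(sv)", "Delegate", "b", 1);
                sd_bus_message_close_container(m);
                sd_bus_message_append(m, "a(sa(sv))", 0);

                r = sd_bus_call(bus, m, 0, &error, &reply);
        }

        sd_bus_error_free(&error);
        sd_bus_message_unref(reply);
        sd_bus_message_unref(m);
        sd_bus_unref(bus);
        return r;
}

With Delegate=yes on such a scope I would at least be allowed to
manage sub-cgroups inside it myself, if I understand the delegation
rules correctly.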

> That's not supported. You may only create your own cgroups where you
> turned on delegation, otherwise all bets are off. If you put stuff in
> /sys/fs/cgroup/user-stuff it's as if you placed stuff in systemd's
> "-.slice" without telling it so, and things will break sooner or
> later, and often in non-obvious ways.

Yeah, I know and understand that it is not supported, but I am more
interested in the technical side of how things would break. I see that
systemd/src/core/cgroup.c often distinguishes a cgroup with delegation
from one without it (!unit_cgroup_delegate(u)), but it's hard for me
to find out how or where exactly this would interfere with a cgroup
created outside of systemd. I'd appreciate it if you could shed some
light on why/when/where things break in practice, or just give an
example. (I've put a sketch of the kind of manipulation I mean in a
P.S. below.)

I am also aware of the single-writer policy stated in systemd's
documentation, and I am aware that this is not supported, but I'd like
to understand exactly what can happen.

Thanks for your help & time :)
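
P.S. For concreteness, the kind of behind-systemd's-back manipulation
I'm asking about is roughly the following, reusing the
/sys/fs/cgroup/user-stuff path from your example:

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* The unsupported approach: create a cgroup that systemd knows
 * nothing about and migrate a forked job into it by writing its PID
 * to cgroup.procs. */
static int adopt_into_raw_cgroup(pid_t pid)
{
        mkdir("/sys/fs/cgroup/user-stuff", 0755);

        int fd = open("/sys/fs/cgroup/user-stuff/cgroup.procs", O_WRONLY);
        if (fd < 0)
                return -1;

        dprintf(fd, "%d\n", (int) pid);  /* migrate the process */
        close(fd);
        return 0;
}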