<div dir="ltr">Changed to infinity and the error seems gone. Will keep monitoring for a few days ..<div><br></div><div>Thanks guys!</div><div><br></div><div>Rao</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Feb 21, 2017 at 3:42 AM, Lennart Poettering <span dir="ltr"><<a href="mailto:lennart@poettering.net" target="_blank">lennart@poettering.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Mon, 20.02.17 16:44, Rao Vz (<a href="mailto:raoatvz@gmail.com">raoatvz@gmail.com</a>) wrote:<br>
<br>
> > Hi, Guys
> >
> > We have an Apache Spark cluster of 3 nodes: one node is both master and
> > slave, the other two are slaves. We start the Spark workers with
> > "systemctl start spark-worker". When we run our apps, sometimes (but not
> > always) a "java.lang.OutOfMemoryError: unable to create new native
> > thread" error shows up in the Spark worker logs.
>
> I figure the error is misleading and is not about memory at all, and
> you need to bump the default TasksMax= field or even turn it off by
> setting it to infinity.
<span class="HOEnZb"><font color="#888888"><br>
Lennart<br>
<br>
--<br>
Lennart Poettering, Red Hat<br>
</font></span></blockquote></div><br></div>
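
For reference, the change boils down to a per-unit drop-in along the lines
below. This is a minimal sketch rather than the exact files from our cluster:
the unit name "spark-worker" is taken from the "systemctl start spark-worker"
command above, and the drop-in path follows the usual systemd override
convention.

  # /etc/systemd/system/spark-worker.service.d/override.conf
  # Lift the per-service task (process/thread) limit so the JVM can
  # create as many native threads as it needs.
  [Service]
  TasksMax=infinity

  # Apply the change and restart the worker:
  systemctl daemon-reload
  systemctl restart spark-worker

  # Check the effective value:
  systemctl show -p TasksMax spark-worker

Raising DefaultTasksMax= in the [Manager] section of /etc/systemd/system.conf
is the system-wide alternative, if more than one service needs a higher limit.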