<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p>Hi Norbert,</p>
<p>thanks for also having an eye on this - I am looking for the
failure reports on ci.libreoffice.org currently, too.<br>
Last is from
<a class="moz-txt-link-freetext" href="http://ci.libreoffice.org/job/lo_tb_master_linux_dbg/7195/">http://ci.libreoffice.org/job/lo_tb_master_linux_dbg/7195/</a>, so
last is from Friday, 13th (uhhh...)</p>
<p>Have you seen this or similar stacks anywhere else? In the
meantime I have run ChartTest heavily on Linux and Windows locally,
but could never reproduce the failure.</p>
<p>The SolarMutex situation is certainly not good, but I would guess
it is only a symptom. There are tests and code in SC, for example,
that also use massive parallelism, not limited by an upper core
count. The basic problem is that the main thread always holds the
SolarMutex, including while calling waitUntilEmpty(). As a
consequence, no worker thread is allowed to acquire the SolarMutex,
which limits what multithreaded actions can do.</p>
<p>I knew that and made sure that the multithreaded 3DRenderer
worker threads do not need the SolarMutex for their work. I did not
yet know that the memory-failure handler also tries to acquire the
SolarMutex, but that is logical when it wants to bring up a dialog
of some form.</p>
<p>But the deeper problem is that an allocation - here, extending a
vector of pointers to a helper class from one to two entries -
fails. Sometimes. And so far only on that many-core machine.</p>
<p>I checked all involved classes and their refcounting, and
verified that the o3tl::cow_wrapper in use relies on the
ThreadSafeRefCountingPolicy; it looks good so far. It is also not
the case that the worker threads need massive amounts of memory of
their own, so I doubt that limiting them to e.g. 8 threads would
change this, except maybe making it less probable. I also looked at
o3tl::cow_wrapper itself and at the basic B2D/B3DPrimitive
implementations, which internally use a comphelper::OBaseMutex,
e.g. for creating buffered decompositions.</p>
<p>I have found no concrete reason so far; any tips/help much
appreciated.</p>
<p>I keep watching this - at least it has not happened in any of the
builds since the 13th, and on no other machine, so the task now is
to somehow nail it down and make it reproducible. If someone has
other traces, please send them! I would hate to revert this,
especially because we will need multithreading more and more as
Moore's law tapers off.<br>
</p>
<p>Sincerely,</p>
<p>Armin<br>
</p>
<p><br>
</p>
<div class="moz-cite-prefix">Am 17.05.2016 um 14:35 schrieb Norbert
Thiebaud:<br>
</div>
<blockquote
cite="mid:CAFWMYEH+THxpeHzeWEbF_gbKKN1tcNOUmWFaafgVzGMHyf29sA@mail.gmail.com"
type="cite">
<pre wrap="">On Tue, May 17, 2016 at 6:44 AM, Thorsten Behrens <a class="moz-txt-link-rfc2396E" href="mailto:thb@libreoffice.org"><thb@libreoffice.org></a> wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Norbert Thiebaud wrote:
</pre>
<blockquote type="cite">
<pre wrap="">The threaded work then raise() due to some memory problem and out
signal handler try to acquire the solar mutex ->deadlock
</pre>
</blockquote>
<pre wrap="">Eek, that's ugly. Then again, at the core is the OOM condition, which
needs solving independently. Per chance, is that happening on a box
with massive amounts of CPU threads?
</pre>
</blockquote>
<pre wrap="">
it is on the ci builder, so yeah 32 thread or so.
but I disagree that it is _at the core_
at the core this exhibit 2 things:
1/ we do a lot of thing that is verboten in a signal handler.
2/ taking a lock that rely on other thread to move forward while
holding the solarmutex is begging for deadlock.
Norbert
</pre>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
--
ALG (PGP Key: EE1C 4B3F E751 D8BC C485 DEC1 3C59 F953 D81C F4A2)</pre>
</body>
</html>