[systemd-devel] Problems with systemd-coredump

Kay Sievers kay at vrfy.org
Tue Feb 18 03:06:40 PST 2014


On Tue, Feb 18, 2014 at 11:05 AM, Thomas Bächler <thomas at archlinux.org> wrote:
> On 17.02.2014 21:27, Manuel Reimer wrote:
>> As soon as a larger coredump (about 500 MB) is to be stored, the whole
>> system slows down significantly. Storing such a large amount of data
>> seems to take quite long and is a very CPU-hungry process...
>
> I completely agree. Since the kernel ignores the maximum coredump size
> (RLIMIT_CORE) when core_pattern pipes to a handler, a significant amount
> of time passes whenever a larger process crashes, with no benefit (since
> the dump never gets saved anywhere).
>
> This is extremely annoying when processes tens or hundreds of gigabytes
> in size crash, which sadly happened to me quite a few times recently.

The way it works today is an incomplete and rather fragile solution.
We cannot really *malloc()* the memory for a core dump; it is *piped*
from the kernel for a reason. The dump can be as large as the available
RAM, which is why it is capped at the current maximum size, and
therefore also limited in its usefulness.
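
For illustration, here is a minimal sketch of what such a pipe handler
has to do, assuming it was registered with a leading '|' in
/proc/sys/kernel/core_pattern. The handler name, the 64 MiB cap, and
the output path are made-up examples, not what systemd-coredump
actually does:

    /* core-capper.c -- hypothetical core_pattern pipe handler, e.g.
     * registered via:
     *   echo "|/usr/local/bin/core-capper" > /proc/sys/kernel/core_pattern
     * The kernel streams the core image to our stdin. RLIMIT_CORE is
     * not enforced for pipe handlers, so we apply our own cap, and we
     * keep draining past the cap so the kernel can finish writing. */
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define MAX_CORE_SIZE (64ULL * 1024 * 1024)  /* arbitrary 64 MiB cap */

    int main(void) {
        char buf[64 * 1024];
        unsigned long long stored = 0;
        FILE *out = fopen("/var/tmp/core.capped", "w");

        if (!out)
            return EXIT_FAILURE;

        for (;;) {
            ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
            if (n < 0) {
                if (errno == EINTR)
                    continue;
                break;
            }
            if (n == 0)
                break;          /* pipe closed: the dump is complete */

            if (stored < MAX_CORE_SIZE) {
                size_t k = (size_t) n;
                if (stored + k > MAX_CORE_SIZE)
                    k = (size_t) (MAX_CORE_SIZE - stored);
                fwrite(buf, 1, k, out);
                stored += k;
            }
            /* Beyond the cap: read and discard, otherwise a full pipe
             * keeps the crashing process around even longer. */
        }

        fclose(out);
        return EXIT_SUCCESS;
    }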

It really always needs to be reduced to a minidump before being stored
away. There is no other sensible option when things should end up in
the journal.
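
To sketch roughly where such an extraction would start, assuming the
handler receives the core on stdin as above: the stream is an ELF image
of type ET_CORE, and its PT_NOTE segment is what carries the register
and process state a minidump would keep while dropping most of the
memory contents:

    /* elf-core-peek.c -- hypothetical sketch: verify that stdin
     * carries an ELF core image before deciding what to extract. */
    #include <elf.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Read exactly len bytes, retrying on short reads from the pipe. */
    static int read_full(int fd, void *p, size_t len) {
        char *q = p;
        while (len > 0) {
            ssize_t n = read(fd, q, len);
            if (n <= 0)
                return -1;
            q += n;
            len -= (size_t) n;
        }
        return 0;
    }

    int main(void) {
        Elf64_Ehdr ehdr;

        if (read_full(STDIN_FILENO, &ehdr, sizeof(ehdr)) < 0)
            return 1;
        if (memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0 ||
            ehdr.e_type != ET_CORE) {
            fprintf(stderr, "stdin is not an ELF core image\n");
            return 1;
        }
        /* The e_phnum program headers that follow include a PT_NOTE
         * segment with NT_PRSTATUS (registers) and related notes --
         * the parts worth keeping in a minidump. */
        printf("ELF core with %u program headers\n", (unsigned) ehdr.e_phnum);
        return 0;
    }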

Kay

