[systemd-devel] Problems with systemd-coredump

Kay Sievers kay at vrfy.org
Mon Feb 17 12:46:36 PST 2014

On Mon, Feb 17, 2014 at 9:43 PM, Jan Alexander Steffens
<jan.steffens at gmail.com> wrote:
> On Mon, Feb 17, 2014 at 9:27 PM, Manuel Reimer
> <Manuel.Spam at nurfuerspam.de> wrote:
>> Hello,
>> if a larger application crashes and dumps core, systemd-coredump seems
>> to have a few problems handling it.
>> First, there is the 767 MB limitation, which simply drops all larger
>> coredumps.
>> But even below this limit it seems to be impossible to store coredumps. I
>> ran a few tests and found that, with the default configuration, the limit
>> seems to be around 130 MB. Larger coredumps are simply dropped, and I cannot
>> find any errors logged anywhere.
>> It seems to be possible to work around this problem by increasing
>> SystemMaxFileSize to 1000M. With this configuration change, bigger coredumps
>> seem to be possible, but this causes another problem.
>> As soon as a larger coredump (about 500 MB) is stored, the whole
>> system slows down significantly. Storing such a large amount of data seems
>> to take quite a long time and is a very CPU-hungry process...
>> Can someone please provide some information on this? Maybe it's a bad idea
>> to store such large amounts of data in the journal? If so, what's the
>> solution? Will journald get improvements in this area?
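The workaround mentioned above can be sketched as a journald configuration change; the option name is `SystemMaxFileSize` in `journald.conf`, and the 1000M value is the one quoted in the message, illustrative rather than a recommendation:

```ini
# /etc/systemd/journald.conf (sketch; value taken from the workaround
# above -- raises journald's per-file size cap so larger coredumps
# are not silently truncated)
[Journal]
SystemMaxFileSize=1000M
```

journald needs to be restarted (`systemctl restart systemd-journald`) for the change to take effect.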

> I wish there were a good way to install a system debugger that could
> inspect the process and its memory at the time of the crash and
> generate a short textual report (like libSegFault) or a minidump (like
> breakpad) -- either one hopefully small enough to just chuck into the
> journal.
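As an illustration of the "short textual report" idea (this is not systemd or breakpad code -- just a sketch using Python's faulthandler module, which on a fatal signal dumps a compact traceback instead of relying on a full core dump):

```python
import faulthandler
import tempfile

# Install handlers for SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL:
# on a crash, a short textual traceback is written to stderr instead
# of only a multi-hundred-MB core file.
faulthandler.enable()

# The same machinery can emit a report on demand -- kilobytes of text,
# small enough to "chuck into the journal".
with tempfile.TemporaryFile(mode="w+") as f:
    faulthandler.dump_traceback(file=f)
    f.seek(0)
    report = f.read()
```

The resulting `report` is a stack listing of a few hundred bytes, which is the scale of artifact the journal handles comfortably, in contrast to the 500 MB coredumps discussed above.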

That is the plan; someone just needs to finish it.
