[systemd-devel] systemd-coredump large memory allocation

Lennart Poettering lennart at poettering.net
Wed Apr 24 07:22:52 PDT 2013


On Wed, 24.04.13 15:00, Umut Tezduyar (umut at tezduyar.com) wrote:

> Hi,
> 
> systemd-coredump allocates 768 MB of heap memory to store the core
> dump. Is that really the right approach?
> 
> Commit: 41be2ca14d34b67776240bc67facf341b156b974
> 
> 768 MB is pretty big in a 32-bit address space. I have roughly 1.6 GB
> between the beginning of the heap and the first shared library, so I
> have enough address space, but is it possible that the linker spreads
> out shared libraries in a way that leaves systemd-coredump unable to
> allocate 768 MB of anonymous pages even though there is enough
> physical memory? [see the probe sketch below the quote]
> 
> My embedded system has 256 MB of RAM and /proc/sys/vm/overcommit_memory
> is set to 0. With this configuration, malloc(768MB) fails and the only
> thing I see in the journal is "out of memory". I could set
> /proc/sys/vm/overcommit_memory to 1 to make coredump happy, but I am
> not sure a global memory-management change is the right fix just for
> that.
> 
> Another point: should we not try to collect as much information as
> possible, instead of the current all-or-nothing approach? It would be
> much better to see at least partial core dump information than just
> "systemd-coredump: Out of memory" in the journal for an application
> that crashes once in a blue moon.
> 
> My proposal is to read from stdin in smaller chunks.
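
As to the address-space question: a quick empirical probe (purely
illustrative, not systemd code) is to attempt the mapping and see.
MAP_NORESERVE sidesteps overcommit accounting, so a failure here would
mean the 32-bit address space itself is too fragmented for a
contiguous 768 MB region, independent of physical memory:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
        size_t len = 768UL * 1024 * 1024;

        /* MAP_NORESERVE: tests the address space only, not commit limits. */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) {
                perror("mmap of 768 MB");
                return 1;
        }

        printf("Got a contiguous 768 MB mapping at %p\n", p);
        munmap(p, len);
        return 0;
}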

Happy to take a patch that turns this into a realloc() loop.
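
Something along these lines, perhaps. To be clear, this is only a
minimal sketch of the idea, not actual systemd code: the helper name
read_full_stream(), the 64 KB chunk size and the one byte of slack at
the cap are all made up for illustration. The point is to grow the
buffer with realloc() as the core streams in on stdin, instead of
allocating the full 768 MB cap up front:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define COREDUMP_MAX (768UL * 1024 * 1024)  /* overall cap, as today */
#define CHUNK (64U * 1024)                  /* illustrative read granularity */

static int read_full_stream(int fd, char **ret, size_t *ret_size) {
        char *buf = NULL;
        size_t allocated = 0, size = 0;

        for (;;) {
                ssize_t n;

                /* Grow the buffer only when it is full. */
                if (size == allocated) {
                        size_t a = allocated > 0 ? allocated * 2 : CHUNK;
                        char *p;

                        /* One byte of slack past the cap lets a core of
                         * exactly COREDUMP_MAX bytes still reach EOF below. */
                        if (a > COREDUMP_MAX + 1)
                                a = COREDUMP_MAX + 1;

                        p = realloc(buf, a);
                        if (!p) {
                                free(buf);
                                return -ENOMEM;
                        }
                        buf = p;
                        allocated = a;
                }

                n = read(fd, buf + size, allocated - size);
                if (n < 0) {
                        if (errno == EINTR)
                                continue;
                        free(buf);
                        return -errno;
                }
                if (n == 0)
                        break;          /* EOF: the whole core has been read */

                size += (size_t) n;
                if (size > COREDUMP_MAX) {
                        free(buf);
                        return -EFBIG;  /* core larger than the cap */
                }
        }

        *ret = buf;
        *ret_size = size;
        return 0;
}

int main(void) {
        char *core = NULL;
        size_t n = 0;
        int r = read_full_stream(STDIN_FILENO, &core, &n);

        if (r < 0) {
                fprintf(stderr, "Failed to read core: %s\n", strerror(-r));
                return EXIT_FAILURE;
        }

        fprintf(stderr, "Read %zu bytes\n", n);
        free(core);
        return EXIT_SUCCESS;
}

A nice side effect: with overcommit_memory=0 on a small system,
memory is committed only as the core actually arrives, so a crash
that produces a small core no longer fails just because a single
768 MB allocation was refused up front.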

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

