[systemd-devel] systemd-coredump large memory allocation

Umut Tezduyar umut at tezduyar.com
Wed Apr 24 06:00:49 PDT 2013


Hi,

systemd-coredump allocates 768 MB of heap memory to store the core dump. Is
that really the right approach?

Commit: 41be2ca14d34b67776240bc67facf341b156b974

768 MB is pretty big in a 32-bit address space. I have roughly 1.6 GB between
the beginning of the heap and the first shared library, so I have enough
address space, but is it possible that the linker spreads the shared libraries
out in a way that systemd-coredump cannot allocate 768 MB of anonymous pages
even though there is enough physical memory?

My embedded system has 256 MB of RAM and /proc/sys/vm/overcommit_memory is
set to 0. With this configuration, the malloc() of 768 MB fails and the only
thing I see in the journal is "out of memory". I could set
/proc/sys/vm/overcommit_memory to 1 and make coredump happy, but I am not
sure that making a global memory-management change just to make coredump
happy is the right thing to do.

Another point: should we not try to collect as much information as possible,
instead of the current all-or-nothing approach? It would be much better to see
partial core dump information than to see "systemd-coredump: Out of memory"
in the journal for an application that crashes once in a blue moon.

My proposal is to read from stdin in smaller chunks.
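
Something along these lines is what I have in mind (just a sketch, not a
patch; CHUNK, COREDUMP_MAX and read_core are made-up names here):

#include <errno.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK (64U * 1024U)                     /* read 64 KiB at a time */
#define COREDUMP_MAX (768LU * 1024LU * 1024LU)  /* same upper bound as today */

/* Grow the buffer as data arrives instead of allocating 768 MB up front,
 * and keep whatever was read if memory runs out. */
static int read_core(int fd, char **ret_buf, size_t *ret_size) {
        char *buf = NULL;
        size_t size = 0, allocated = 0;

        for (;;) {
                ssize_t n;

                if (size + CHUNK > allocated) {
                        size_t new_allocated = allocated ? allocated * 2 : CHUNK;
                        char *p;

                        if (new_allocated > COREDUMP_MAX)
                                new_allocated = COREDUMP_MAX;
                        if (size + CHUNK > new_allocated)
                                break; /* hit the cap, keep what we have */

                        p = realloc(buf, new_allocated);
                        if (!p)
                                break; /* out of memory: keep the partial dump */
                        buf = p;
                        allocated = new_allocated;
                }

                n = read(fd, buf + size, CHUNK);
                if (n < 0) {
                        if (errno == EINTR)
                                continue;
                        free(buf);
                        return -errno;
                }
                if (n == 0)
                        break; /* EOF */

                size += (size_t) n;
        }

        *ret_buf = buf;
        *ret_size = size;
        return 0;
}

This way, even when memory runs out half-way through, whatever has been read
so far can still be stored in the journal.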

Thanks.