[avahi] Memory consumption

Frank Lahm franklahm at googlemail.com
Wed Apr 11 08:41:25 PDT 2012


On 11 April 2012 at 16:50, Frank Lahm <franklahm at googlemail.com> wrote:
> On 10 April 2012 at 22:26, Lennart Poettering <lennart at poettering.net> wrote:
>> On Tue, 10.04.12 15:23, Frank Lahm (franklahm at googlemail.com) wrote:
>>
>>> Hi all,
>>>
>>> I'm one of the Netatalk (open-source AFP fileserver) devs. We use
>>> Avahi via avahi_threaded_poll and friends in order to register the
>>> AFP server with mDNS.
>>> We call the Avahi setup code once in our main process `afpd` here
>>> [1]. User sessions are handled by forking child processes.
>>>
>>> It seems that calling the Avahi setup functions takes a big chunk
>>> of memory:
>>>
>>> Obviously, we didn't bother calling the Avahi resource-freeing
>>> functions in the forked afpd session children, as the damage done
>>> by sbrk() can't be undone by free().
>>>
>>> The main concern I have with this memory consumption is that it is
>>> handed down to every afpd process.
>>>
>>> My questions are:
>>> - is the memory consumption of 10 MB expected?
>>> - do you see any way of preventing the inheritance of the memory
>>> consumption without running the Avahi registration in a dedicated
>>> process?
>>
>> I don't see where the Avahi client libraries could do such a big
>> allocation. My only guess is that this is actually the D-Bus library
>> doing it (it maintains a message cache). To figure this out it might
>> be worth plotting the memory usage with a tool like Valgrind's
>> massif?
>
> I had checked with Valgrind memcheck before, to no avail. I've now
> looked at massif per your recommendation, and it is indeed quite
> revealing (this requires the --pages-as-heap=yes option, as the large
> allocation is apparently not done via malloc et al.):
>
> $ sudo valgrind --tool=massif --pages-as-heap=yes --detailed-freq=1
> /usr/local/netatalk/sbin/afpd -d
> ...
> ^C
>
> Looking at the last snapshot:
>
> $ ms_print ...
> ...
> --------------------------------------------------------------------------------
>  n        time(i)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
> --------------------------------------------------------------------------------
>  61  1,283,953,675       84,291,584       84,291,584             0            0
> 100.00% (84,291,584B) (page allocation syscalls) mmap/mremap/brk,
> --alloc-fns, etc.
> ...
> ->12.45% (10,493,952B) 0x6B32F99: mmap (in /lib/libc-2.7.so)
> | ->12.44% (10,489,856B) 0x6848AEF: pthread_create@@GLIBC_2.2.5 (in
> /lib/libpthread-2.7.so)
> | | ->12.44% (10,489,856B) 0x5B45E96: avahi_threaded_poll_start (in
> /usr/lib/libavahi-common.so.3.5.0)
> | |   ->12.44% (10,489,856B) 0x40BD5D: av_zeroconf_register (afp_avahi.c:307)
> | |     ->12.44% (10,489,856B) 0x40F0FA: zeroconf_register (afp_zeroconf.c:36)
> | |       ->12.44% (10,489,856B) 0x40C22D: configinit (afp_config.c:128)
> | |         ->12.44% (10,489,856B) 0x42FCD2: main (main.c:338)
> ...
>
> So it seems it's pthread_create(). I wonder why it allocates such a
> huge chunk of memory. I guess the only way to avoid this is to
> completely redesign our application so that the Avahi setup is not
> done in the process that later forks and runs user AFP sessions. :(
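
For what it's worth, the 10,489,856 bytes in the massif output are
exactly 10 MiB plus one 4 KiB guard page, which looks like a default
thread stack sized from the RLIMIT_STACK soft limit (that is what
glibc's pthread_create() uses when no stack size attribute is given).
If anyone wants to double-check that assumption, a standalone test
program along these lines (glibc-specific pthread_getattr_np()) can
ask a default-attribute thread for its actual stack size:

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>

/* Print the stack size a thread created with default attributes
 * actually gets; it should roughly match the ~10 MB mmap that massif
 * attributes to pthread_create(). */
static void *report_stack(void *arg)
{
    pthread_attr_t attr;
    void *addr;
    size_t size;

    (void)arg;
    pthread_getattr_np(pthread_self(), &attr);
    pthread_attr_getstack(&attr, &addr, &size);
    printf("default thread stack: %zu bytes\n", size);
    pthread_attr_destroy(&attr);
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, report_stack, NULL);
    pthread_join(t, NULL);
    return 0;
}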

Would it make sense to extend the Avahi API so that a stack size
parameter could be passed and then used to call
pthread_attr_setstacksize() for the thread?
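
In the meantime, something similar can probably be achieved with the
existing API by dropping avahi_threaded_poll and running
avahi_simple_poll in a thread that afpd creates itself, where it
controls the stack size. A rough, untested sketch; error handling and
our entry group setup are omitted, and start_avahi_small_stack() and
avahi_loop() are made-up names:

#include <pthread.h>
#include <avahi-common/simple-watch.h>
#include <avahi-client/client.h>

static AvahiSimplePoll *simple_poll;

/* Event loop thread; returns after avahi_simple_poll_quit(). */
static void *avahi_loop(void *arg)
{
    (void)arg;
    avahi_simple_poll_loop(simple_poll);
    return NULL;
}

/* Sketch: create the Avahi client on a simple poll and run its event
 * loop in a thread with a small, explicit stack size. */
static int start_avahi_small_stack(AvahiClientCallback cb, void *userdata)
{
    pthread_t tid;
    pthread_attr_t attr;
    int error;

    simple_poll = avahi_simple_poll_new();
    if (!simple_poll)
        return -1;

    if (!avahi_client_new(avahi_simple_poll_get(simple_poll),
                          AVAHI_CLIENT_NO_FAIL, cb, userdata, &error))
        return -1;

    pthread_attr_init(&attr);
    /* 256 kB instead of the ~10 MB default; must stay >= PTHREAD_STACK_MIN. */
    pthread_attr_setstacksize(&attr, 256 * 1024);

    if (pthread_create(&tid, &attr, avahi_loop, NULL) != 0)
        return -1;

    pthread_attr_destroy(&attr);
    return 0;
}

The obvious downside is that avahi_simple_poll, unlike
avahi_threaded_poll, has no lock()/unlock() helpers, so once the loop
thread is running all further Avahi calls (entry group updates etc.)
would have to happen from the callbacks on that thread.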

-f

