[avahi] Resolving many ( > 200 ) items seems to lead to apparent congestion in the dbus

Daniel Wynne daniel.wynne at mobotix.com
Thu May 28 04:43:20 PDT 2009


Lennart Poettering wrote:
> On Wed, 27.05.09 09:49, Daniel Wynne (daniel.wynne at mobotix.com) wrote:
>
>   
>>>> Though it's not the standard case, we deal with more than 250 hosts for
>>>> testing purposes. We want to resolve all of them without either spamming
>>>> the D-Bus system or running into any Avahi limitation.
>>>> First, we tried resolving everything right away, but this led to
>>>> apparent D-Bus congestion, which could only be solved by restarting the
>>>> whole service. After that we tried a queued system that allows only a
>>>> few resolvers to coexist, but this did not lead to any perceptible
>>>> improvement. D-Bus still seems to be congested after a short
>>>> while.
>>>>         
>>> D-Bus "congestion"? I don't think that exists. What exactly makes you
>>> think D-Bus could be 'congested'?
>>>       
>> So this was a misunderstanding, sorry. In a previous thread somebody
>> mentioned that every D-Bus client can have at most 250 objects. I
>> assumed that every resolver/browser counts as one of a client's objects.
>> If this is correct, the limit is enforced by Avahi and not by the D-Bus system.
>>     
>
> Yes. That is true. For security reasons Avahi enforces limits on all
> resources a local or remote client can allocate and control. 
>
>   
>>> This might have something to do with the internal limits Avahi applies
>>> to almost everything: in this case possibly the size of the cache?
>>>
>>> Also note that if you issue a lot of requests the local IP stack
>>> packet queueing might already drop packets. Lost packets will most
>>> likely result in timeouts.
>>>       
>> Is there an easy way to verify this? I could not find any proper
>> logfile.
>>     
>
> Edit avahi-core/cache.c and the function avahi_cache_update(). Look for this
> line:
>
> if (c->n_entries >= AVAHI_CACHE_ENTRIES_MAX)
>    return;
>
> Then add a log line there that prints something when an entry is not
> added to the cache because it has reached its maximum size, e.g.:
>
> if (c->n_entries >= AVAHI_CACHE_ENTRIES_MAX) {
>     avahi_log_debug("cache: overrun");
>     return;
> }
>
> or something like that. Then run avahi-daemon --debug to get a peek at
> the debug output.
>
>   
>>> The max cache size is controlled via AVAHI_CACHE_ENTRIES_MAX in
>>> avahi-core/cache. It is set to 500. Given the number of hosts this
>>> might actually be way too low for your use case. Try to increase it to
>>> 5000.
>>>       
>> This solution is not really applicable since it requires recompiling
>> the sources.
>>     
>
> It's free software. Sources available. 
>
>   
Since we do not want to recompile Avahi for every host using our 
software, this solution is not applicable for us.
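
(For reference, and assuming the constant still lives in avahi-core/cache.h
with its 0.6.x default of 500, the change Lennart suggests would be roughly
the one-line edit below, plus a rebuild; the rebuild is exactly the part we
cannot ship to every host.)

/* avahi-core/cache.h, raise the per-server cache limit
 * (assumption: file location and default as in the 0.6.x sources) */
#define AVAHI_CACHE_ENTRIES_MAX 5000   /* default is 500 */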

>> Is there a way to cleanly free the browser resolvers immediately?
>>     
>
> No.
>
> But this 'recycling' of resolvers/browsers shouldn't hurt in your case.
>
>   
But I think that's exactly the problem in our case. Our test network 
contains about 250 cameras that we want to find and resolve via Avahi, 
so the cache is easily big enough. But if every resolver we create is 
held for possible recycling and never deleted, the number of devices we 
can resolve is limited to roughly 100, since we have to create two 
resolvers per device (see the previous thread about resolving multiple 
IP addresses): with a limit of 250 objects per client, two resolvers per 
camera leave room for at most about 125 devices, and fewer still if the 
browsers count against the same limit. This matches exactly the 
behaviour we observe. It seems we are locked out with the current Avahi 
version. Is there a chance that some of the restrictions in Avahi, 
implemented to prevent misuse through DoS attacks, will be loosened for 
usability reasons? Many developers and users have problems with them, as 
many previous threads show.
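
For illustration, the queued approach we tried looks roughly like the
sketch below. It is written against the plain avahi-client C API; the
service type, the queue handling and the helper names are placeholders
rather than our real code. Each resolver is freed in its callback so that
only a handful exist at any time, which, as described above, still does
not seem to help if the freed objects are merely kept for recycling
instead of being released:

/* Sketch of the queued resolving we tried, written against the plain
 * avahi-client C API. MAX_CONCURRENT, the service type "_http._tcp" and
 * the queue handling are placeholders, not our real code. */
#include <stdio.h>
#include <avahi-client/client.h>
#include <avahi-client/lookup.h>
#include <avahi-common/error.h>

#define MAX_CONCURRENT 10

static const char *queued_names[250];  /* filled by the service browser */
static int next_index = 0, total = 0;

static void start_next(AvahiClient *client);

static void resolve_callback(AvahiServiceResolver *r,
        AvahiIfIndex interface, AvahiProtocol protocol,
        AvahiResolverEvent event,
        const char *name, const char *type, const char *domain,
        const char *host_name, const AvahiAddress *address,
        uint16_t port, AvahiStringList *txt,
        AvahiLookupResultFlags flags, void *userdata) {
    AvahiClient *client = avahi_service_resolver_get_client(r);

    if (event == AVAHI_RESOLVER_FOUND)
        printf("%s -> %s:%u\n", name, host_name, port);
    else
        fprintf(stderr, "resolving %s failed: %s\n", name,
                avahi_strerror(avahi_client_errno(client)));

    /* Free the resolver right away so only a few exist at a time,
     * then start the next one from the queue. */
    avahi_service_resolver_free(r);
    start_next(client);
}

static void start_next(AvahiClient *client) {
    if (next_index >= total)
        return;
    if (!avahi_service_resolver_new(client, AVAHI_IF_UNSPEC,
            AVAHI_PROTO_UNSPEC, queued_names[next_index++], "_http._tcp",
            NULL, AVAHI_PROTO_UNSPEC, 0, resolve_callback, NULL))
        fprintf(stderr, "resolver_new failed: %s\n",
                avahi_strerror(avahi_client_errno(client)));
}

/* Kick off the first batch; the rest start as earlier resolvers finish. */
static void start_batch(AvahiClient *client) {
    int i;
    for (i = 0; i < MAX_CONCURRENT; i++)
        start_next(client);
}

The idea was to keep at most MAX_CONCURRENT resolvers alive and to rely on
avahi_service_resolver_free() releasing the corresponding object on the
daemon side, so that two resolvers per camera never exhaust the per-client
object limit.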
> Lennart
>
>   


