[avahi] accept(): Too many open files

Lennart Poettering lennart at poettering.net
Tue Mar 25 15:46:13 PDT 2008


On Tue, 25.03.08 19:36, Aldrin Martoq (amartoq at dcc.uchile.cl) wrote:

> > Hmm, maybe a couple of other daemons try to connect to Avahi at the
> > same time, and Avahi is configured to have only 30 fds around at the
> > same time? Does any of the other daemons running that might access
> > Avahi (cups, apache, ...) report a failure when contacting Avahi?
> 
> Anyhow, this sounds like a possible denial of service attack... don't you
> think?

Hehe. Certainly. Every machine with limited resources is vulnerable to
DoS, and there are no machines with unlimited resources: every one of
them has finite CPU, finite RAM and finite disk.

Every reasonable program enforces resource limits everywhere; in fact,
programs which do not are the buggy ones.

The point of enforcing resource limits is that when a flood of requests
happens, a reasonable number of them are still processed properly,
without the machine grinding to a halt entirely.
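As an illustration (this is a sketch, not Avahi's actual code, and the
function name accept_robustly is made up here), a daemon's accept loop
can treat EMFILE/ENFILE as a transient condition, backing off and
retrying instead of dying, so a flood degrades service rather than
killing the process:

```python
import errno
import time

def accept_robustly(sock, retry_delay=0.01):
    """Accept one connection, treating fd exhaustion as transient:
    back off briefly and retry instead of letting the daemon die."""
    while True:
        try:
            return sock.accept()
        except OSError as e:
            if e.errno in (errno.EMFILE, errno.ENFILE):
                time.sleep(retry_delay)  # fds may be freed shortly
                continue
            raise  # any other error is still fatal

# Demo with a stub socket: fails once with EMFILE, then succeeds,
# standing in for a real listening socket under fd pressure.
class StubSocket:
    def __init__(self):
        self.calls = 0
    def accept(self):
        self.calls += 1
        if self.calls == 1:
            raise OSError(errno.EMFILE, "Too many open files")
        return ("conn", ("127.0.0.1", 1234))

conn, addr = accept_robustly(StubSocket())
print(conn, addr)
```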

And hence, what Avahi does here is exactly what it should be
doing. The only thing that might be worth discussing is how to choose
the default value for the fd rlimit.
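For reference, the mechanism under discussion can be sketched like this
(the limit of 30 mirrors the number mentioned above and is purely
illustrative, not Avahi's actual default): a process caps its own fd
usage with setrlimit(), so a connection flood hits EMFILE ("Too many
open files") instead of exhausting the whole machine.

```python
import errno
import resource
import socket

# Lower only the soft RLIMIT_NOFILE; an unprivileged process may not
# raise the hard limit, but it may always lower the soft one.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (30, hard))

# Open sockets until the limit bites: socket() then fails with EMFILE
# rather than starving the rest of the system of fds.
fds = []
err = None
try:
    while True:
        fds.append(socket.socket())
except OSError as e:
    err = e.errno
finally:
    for s in fds:
        s.close()
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))

print("failed with EMFILE:", err == errno.EMFILE)
```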

Lennart

-- 
Lennart Poettering                        Red Hat, Inc.
lennart [at] poettering [dot] net         ICQ# 11060553
http://0pointer.net/lennart/           GnuPG 0x1A015CC4

