DBus watches: DBusAddWatchFunction, unclear how to handle multiple watches

Wiebe Cazemier wiebe at halfgaar.net
Thu May 4 11:22:02 UTC 2023


----- Original Message -----
> From: "Simon McVittie" <smcv at collabora.com>
> To: "Wiebe Cazemier" <wiebe at halfgaar.net>
> Cc: dbus at lists.freedesktop.org
> Sent: Thursday, 4 May, 2023 20:28:05
> Subject: Re: DBus watches: DBusAddWatchFunction, unclear how to handle multiple watches

> On Wed, 03 May 2023 at 08:03:20 +0200, Wiebe Cazemier wrote:
>> But I have some questions about 'dbus_connection_set_watch_functions()'.
>> What I find weird is that I get *two* calls to
>> 'DBusAddWatchFunction' for the same fd, but with different DBusWatch
>> objects: one with flags set for reading (active) and one with flags set
>> for writing (inactive). I'm a bit confused about how I'm supposed to deal
>> with this, especially in combination with epoll (which gives you the one
>> fd and all the flags at once, whichever apply).
> 
> For some correct example code, please see the epoll and poll watch/timeout
> handlers in the dbus source code (these are not provided as library API
> because they aren't extensible or thread-safe, but they're sufficient
> for dbus-daemon, which is single-threaded), or the implementations of
> watch/timeout/wakeup handling with GLib in various places (dbus-glib,
> dbus-python, Avahi).
> 
> The short version is that libdbus does not guarantee that it will only
> have one watch per fd, so lower-level code needs to be prepared to:
> watch the fd for the union of all the flags of multiple watches; when
> it is reported as readable, trigger the callbacks of all read-only or
> rw watches; and when it is reported as writable, trigger the callbacks
> of all write-only or rw watches.
> 
> For some lower-level APIs like poll() (which is what libdbus was
> originally designed for), it's OK to have multiple watches per fd and
> the kernel or C library will handle the multiplexing for us.
> 
> For other lower-level APIs like epoll and select() that conceptually
> have a map { int fd => DBusWatchFlags flags }, the entry in the map for
> each fd must have its flags set to (watch1.flags | watch2.flags | ...)
> where watch1, watch2, ... are all the active watches for that fd, and
> the watch-handling code is responsible for demultiplexing a result like
> "fd 3 is now ready for writing" into calls to the callbacks for all
> watches that are interested in writing fd 3.

Alright, I basically have that now. I'll fine-tune it a bit so it also handles the case where there is only one watch per fd.

> 
>> When it comes to the write watch, so far I've never actually seen
>> it used. I expected it to, because the documentation says: "When you
>> use dbus_connection_send() or one of its variants to send a message,
>> the message is added to the outgoing queue. It's actually written to
>> the network later".
> 
> As currently implemented, there is an optimization that tries to send
> data immediately if there is enough space in the kernel's socket buffer,
> up to some arbitrary limit on the amount of data written in one batch
> (I think it's somewhere in the range KiB to MiB). This reduces the number
> of syscalls by avoiding an unnecessary poll() or equivalent in the common
> case where the socket buffer is not already full; the trade-off is that
> if the buffer *is* already full, it tries to do a sendmsg() that will fail.
> 
> If there is already enough space in the socket buffer (small or infrequent
> outgoing messages), then libdbus will just queue it in the socket buffer
> immediately, and will never bother to add the write watch.
> 
> If there is *not* enough space (large or frequent outgoing messages),
> sendmsg() or equivalent will fail with EAGAIN or EWOULDBLOCK, and libdbus
> will respond by adding the write watch. When the kernel has drained some
> data from the socket buffer, the write watch's callback will report
> that the socket has become writable, and libdbus will resume writing,
> until it hits either EAGAIN, EWOULDBLOCK, its arbitrary limit on bytes
> per iteration, or the end of the data that has been queued to be sent.
> When its outgoing queue becomes empty, it will disable or remove the
> write watch to avoid unnecessary wakeups.

When using poll this may be faster, but when using epoll, I think it will make things slower. Without going into detail about what it is, I wrote an application that can handle some ten times more application frames over TCP than other servers. One of the contributing factors is that once a frame has been put in a client's buffer, epoll_ctl() is called and the fd is registered for EPOLLOUT. If you put more frames in the buffer before the event loop has a chance to write that buffer out, no further epoll_ctl() calls are made.

It has the benefit of writing potentially hundreds of frames with a single write() call on the socket. Not only is that fast, it also fills TCP packets with as much data as possible. This in turn saves a big load on the other end as well, because it avoids repeatedly waking up the receiving side's file descriptor.

> 
>> I'm using 1.12.20-r0. This is beyond my control.
> 
> This version is susceptible to several denial-of-service security
> vulnerabilities, as well as some known non-security bugs, so please ask
> whoever is requiring you to use this version what their strategy is for
> solving those.

Thanks, I'll bring it up.

> 
>     smcv

