D-Bus vs. other interprocess communication systems

Havoc Pennington hp at pobox.com
Mon Jun 9 07:13:42 PDT 2008


Hi,

On Mon, Jun 9, 2008 at 2:46 AM, Alexander Neundorf
<neundorf at eit.uni-kl.de> wrote:
> How about low-throughput low-latency transfers, i.e. a lot of very small
> messages ?
> If you think it's not appropriate, why ? Where do you see potential
> bottlenecks ?

The "inherent" cost of the dbus design is the bus daemon; because each
message goes A -> daemon -> B -> daemon -> A, that is inherently
double the number of transfers as A ->B -> A. However, for the cases
dbus was designed for, this extra cost in CPU is worth it to avoid the
extra cost in connection setup and holding tons of inter-app
connections open. (With the daemon you get a "hub with spokes" or star
topology instead of a big graph.) The daemon also provides robust
lifecycle tracking of apps (ability to know when services come and
go), and broadcast of signals.
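
To give a concrete feel for the lifecycle-tracking part, here is a
rough, untested sketch against the libdbus C API: it adds a match rule
for the daemon's NameOwnerChanged signal, which fires whenever a name
appears on or disappears from the bus. The blocking dispatch loop and
the choice of the session bus are just illustrative assumptions, not
the only (or best) way to structure this:

  #include <dbus/dbus.h>
  #include <stdio.h>

  int main(void)
  {
      DBusError err;
      DBusConnection *conn;
      DBusMessage *msg;

      dbus_error_init(&err);
      conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
      if (!conn) {
          fprintf(stderr, "connect failed: %s\n", err.message);
          return 1;
      }

      /* Ask the daemon to route its NameOwnerChanged signal to us. */
      dbus_bus_add_match(conn,
          "type='signal',sender='org.freedesktop.DBus',"
          "interface='org.freedesktop.DBus',member='NameOwnerChanged'",
          &err);
      dbus_connection_flush(conn);

      /* Simple blocking dispatch loop, just for illustration. */
      while (dbus_connection_read_write(conn, -1)) {
          while ((msg = dbus_connection_pop_message(conn)) != NULL) {
              const char *name, *old_owner, *new_owner;
              if (dbus_message_is_signal(msg, "org.freedesktop.DBus",
                                         "NameOwnerChanged") &&
                  dbus_message_get_args(msg, NULL,
                                        DBUS_TYPE_STRING, &name,
                                        DBUS_TYPE_STRING, &old_owner,
                                        DBUS_TYPE_STRING, &new_owner,
                                        DBUS_TYPE_INVALID))
                  printf("%s: '%s' -> '%s'\n", name, old_owner, new_owner);
              dbus_message_unref(msg);
          }
      }
      return 0;
  }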

The other cost is that the libdbus implementation (vs. other possible
implementations) is much more about paranoia and flexibility than it
is about raw speed; it could be faster. This is not inherent to the
protocol or design, though, just how libdbus was done.

If you want low-throughput, low-latency transfers, the key, as with
the X protocol, is to avoid round trips. If you send a stream of
messages, it will be much, much faster if you send them all before you
wait for replies to any of them. If you send, wait for reply, send,
wait for reply, send, wait for reply, that will be much slower than if
you send, send, send, send, then wait for all the replies. That is the
main thing to worry about when coding dbus apps. Most GUI apps should
not be blocking anyway.
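
As a rough, untested illustration of "send, send, send, then wait"
with the low-level libdbus API (the service, object path, interface,
and method names below are made-up placeholders), something along
these lines avoids serializing on each reply:

  #include <dbus/dbus.h>
  #include <stdio.h>

  #define N_CALLS 4

  int main(void)
  {
      DBusError err;
      DBusConnection *conn;
      DBusPendingCall *pending[N_CALLS];
      int i;

      dbus_error_init(&err);
      conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
      if (!conn) {
          fprintf(stderr, "connect failed: %s\n", err.message);
          return 1;
      }

      /* Send the whole batch first; no waiting happens in this loop. */
      for (i = 0; i < N_CALLS; i++) {
          DBusMessage *call = dbus_message_new_method_call(
              "org.example.Service",      /* placeholder names */
              "/org/example/Object",
              "org.example.Interface",
              "DoSomething");
          dbus_connection_send_with_reply(conn, call, &pending[i], -1);
          dbus_message_unref(call);
      }
      dbus_connection_flush(conn);

      /* Only now block; the daemon and the service were already working
         on calls 2..N while reply 1 was in flight. */
      for (i = 0; i < N_CALLS; i++) {
          DBusMessage *reply;
          if (!pending[i])
              continue;
          dbus_pending_call_block(pending[i]);
          reply = dbus_pending_call_steal_reply(pending[i]);
          if (reply) {
              /* ... unpack with dbus_message_get_args() ... */
              dbus_message_unref(reply);
          }
          dbus_pending_call_unref(pending[i]);
      }
      return 0;
  }

In a real GUI app you would normally not block at all, but attach a
callback with dbus_pending_call_set_notify() (or use a higher-level
binding) and keep the main loop running.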

Havoc

