About performance of D-Bus
Alberto Mardegan
mardy at users.sourceforge.net
Fri Oct 31 08:23:16 PDT 2008
ext Havoc Pennington wrote:
> Having all the communication done directly would be optimizing for the
> wrong thing; in a typical desktop session usage, the 2x slower does
> not matter, but having a ton of extra file descriptors and complexity
> as every app connects to every other app would matter a lot. If you
> kept the same semantics as the current bus, a "swarm" architecture
> might have enough overhead to be slow in its own right, as well, due
> to some sort of complex bookkeeping, not even sure how it would work.
Yes, it's true that in the general case the number of descriptors would grow
quadratically with the number of components involved, so in general this might
not be a good idea.
> That said, if you have a specific use-case where for example you are
> moving a large amount of data, it is encouraged to use the bus to set
> up a 1-1 connection. libdbus does support 1-1 connections (create a
> DBusServer to listen on a socket in one process, then connect to it
> from another, sending the server address over the bus).
Yes, but this requires the components to be explicitly designed to work this
way.
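Just to make sure we are talking about the same thing, this is roughly the
setup I understand you are describing, as a minimal sketch with plain libdbus
(the socket address, the way it would be advertised over the bus, error
handling and main loop integration are all simplified or made up):

/* Sketch of a 1-1 connection negotiated over the bus.  The address and
 * the way it is exchanged are made up; error handling and main loop
 * integration are omitted. */
#include <dbus/dbus.h>
#include <stdio.h>

static void
on_new_connection(DBusServer *server, DBusConnection *conn, void *data)
{
    /* Keep the peer connection and start talking to it directly,
     * bypassing the bus daemon. */
    dbus_connection_ref(conn);
    /* ... hook conn into the main loop, register object paths, ... */
}

int
main(void)
{
    DBusError err;
    dbus_error_init(&err);

    /* "Server" side: listen on a private address. */
    DBusServer *server = dbus_server_listen("unix:tmpdir=/tmp", &err);
    dbus_server_set_new_connection_function(server, on_new_connection,
                                            NULL, NULL);

    /* This address would then be handed to the peer over the bus,
     * e.g. as the return value of some method call. */
    char *address = dbus_server_get_address(server);
    printf("peer address: %s\n", address);

    /* "Client" side (in the other process): connect directly. */
    DBusConnection *peer = dbus_connection_open_private(address, &err);
    /* ... from here on, messages no longer go through the bus daemon. */

    dbus_free(address);
    return 0;
}

Both peers need code like this, plus an agreed-upon way to exchange the
address, which is exactly the kind of explicit design I meant above.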
> But, this extra out-of-band channel is rarely needed; the bus is
> perfectly fast enough for all the typical stuff you might do in a
> desktop session.
I was not especially thinking of the desktop, but rather of the Maemo platform,
and of Telepathy specifically: for instance, Telepathy connection managers
typically communicate with only one or two processes; they are not designed to
use 1-1 communication, but if libdbus could do some trick under the hood they
might benefit from it.
A more refined implementation could have the bus daemon adjust the dispatching
mechanism (switching from centralized to 1-1 and vice versa) according to the
number of clients acting on an object. It might be overly difficult to
implement, but if all this meta-information were delivered along with existing
D-Bus method calls/signals, it shouldn't add noticeable overhead.
> Before optimizing dbus the first question is always "how much of my
> overall user experience time is spent in dbus?" and at least for the
> intended uses of dbus (desktop sessions, systemwide bus) the answer
> has usually been "not much" which (I think) is why nobody has spent a
> lot of time sending in optimization patches. Making dbus take up 1% of
> your user experience instead of 2% is a huge dbus speedup that would
> be a lot of work, but nobody would ever notice.
Yes, that's true. What I'm more concerned about are those (rare) moments when
many messages are exchanged at the same time; for instance, when the network
connection goes online, the Telepathy connection managers must deliver
information about the online contacts and their avatars, and a few GetAll
methods are called on the Telepathy Connection and Channel interfaces to
retrieve their properties. This might have a noticeable impact on the user
experience.
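For the record, the best the clients can do here with the current bus is to at
least pipeline those GetAll calls instead of blocking on each one, roughly
along these lines (the service name, object paths and queried interface are
made up, and error handling is omitted):

/* Sketch: send several GetAll calls back to back and only then collect
 * the replies, instead of paying one blocking round trip per call.
 * The bus name, object paths and interface name are invented. */
#include <dbus/dbus.h>

#define N_OBJECTS 3

int
main(void)
{
    DBusError err;
    dbus_error_init(&err);
    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);

    const char *paths[N_OBJECTS] = { "/obj/1", "/obj/2", "/obj/3" };
    const char *iface = "org.example.SomeInterface";
    DBusPendingCall *pending[N_OBJECTS];
    int i;

    /* First send all the calls... */
    for (i = 0; i < N_OBJECTS; i++) {
        DBusMessage *msg = dbus_message_new_method_call(
            "org.example.Service", paths[i],
            "org.freedesktop.DBus.Properties", "GetAll");
        dbus_message_append_args(msg,
                                 DBUS_TYPE_STRING, &iface,
                                 DBUS_TYPE_INVALID);
        dbus_connection_send_with_reply(conn, msg, &pending[i], -1);
        dbus_message_unref(msg);
    }
    dbus_connection_flush(conn);

    /* ...and only then wait for the replies. */
    for (i = 0; i < N_OBJECTS; i++) {
        dbus_pending_call_block(pending[i]);
        DBusMessage *reply = dbus_pending_call_steal_reply(pending[i]);
        /* ... unpack the a{sv} of properties from reply ... */
        dbus_message_unref(reply);
        dbus_pending_call_unref(pending[i]);
    }
    return 0;
}

Even so, every one of those messages still makes two hops through the bus
daemon, which is what made me think about 1-1 connections in the first place.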
Another thing that might be improved with 1-1 communication is that some limits
could be removed altogether: for instance, the number of active calls at a time
and the reply timeout; they could still be enforced by the bus daemon, but the
clients involved in a 1-1 communication could do without them.
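For the timeout, what I mean is that on a private 1-1 connection the caller
could simply ask libdbus not to time the call out at all, since no bus daemon
policy is involved; something like this (the object path, interface and method
name are made up):

/* Sketch: on a direct connection there is no bus daemon enforcing a
 * reply timeout, so the caller can wait for as long as it likes.
 * The path, interface and method below are made up. */
#include <dbus/dbus.h>
#include <limits.h>

static DBusPendingCall *
call_without_timeout(DBusConnection *peer_conn)
{
    DBusMessage *msg = dbus_message_new_method_call(
        NULL,                       /* no destination needed peer-to-peer */
        "/org/example/Object",
        "org.example.Interface",
        "LongRunningOperation");

    DBusPendingCall *pending = NULL;
    /* INT_MAX tells libdbus not to apply any reply timeout. */
    dbus_connection_send_with_reply(peer_conn, msg, &pending, INT_MAX);
    dbus_message_unref(msg);
    return pending;
}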
> Another thing to keep in mind is that if your app is doing round trips
> (blocking on one message reply before sending the next message) that
> is going to kill you, vs. sending batches of messages and then getting
> all their replies later. One of the design goals of dbus was to allow
> you to avoid these blocking round trips. If you force each message to
> have a reply before sending the next one, then that is quite a lot
> slower. Often this is an API design issue, for example, avoid APIs
> where you have to get thing A, then from A get B, then from B get C,
> because you'll have to wait for each get to return before calling the
> next one. A better design, for example, would return A+B+C all at once
> from a single method call. That will be much more than 3x faster,
> because 3 blocking round trips is much slower than 3 "pipelined"
> method calls. It is the same principle as http pipelining:
> http://www.mozilla.org/projects/netlib/http/pipelining-faq.html
Yes, that's the first step of optimization. I'm thinking of a well-behaved
D-Bus API (such as Telepathy) which, though meant to be accessible by many
different processes, most of the time could benefit from 1-1 communication.
Anyway, my point was to find out what could be done, given a fixed and
immutable API using D-Bus, to improve performance without heavily modifying the
clients that use the API. But I'm not saying that D-Bus is the bottleneck.
Ciao,
Alberto
--
http://www.mardy.it <-- Geek in an international language!