About performance of D-Bus
Havoc Pennington
hp at redhat.com
Thu Oct 25 11:29:34 PDT 2007
Hi,
As Rob says, you aren't comparing the same things: in one case you're
measuring the time to receive data, and in the other the time to round-trip data.
There used to be profiling code in the tree; I think it was lost
when dbus-glib was split out (it may still be in the dbus-glib tree):
http://lists.freedesktop.org/pipermail/dbus/2004-November/001779.html
The results in that message no longer apply, since so much has been
rewritten since then; see
http://log.ometer.com/2006-07.html#21
Some key points:
- in real-world applications, you should usually worry a lot more
about round trips than about bandwidth, unless your app involves
shoveling truly large hunks of data. In other words, don't block on
every message and your performance will be much better; you want to
send many messages without waiting for replies to any of them.
- to profile libdbus and dbus-daemon, you must disable checks,
assertions, verbose mode, and tests, or you will get much slower numbers
than otherwise
- it also makes a difference whether you initialize threads
- it is essential to use a wall-clock profiler; valgrind-type
profilers that count CPU cycles give extremely misleading results
(because so much of the time goes to I/O)
- the 2.5x number predates the marshaling rewrite that added
recursive types; the current implementation is slower than that,
though I don't remember by how much
- some of the slowness in the current implementation is paranoid
validation. It is possible to turn this off by editing the libdbus
source: look at dbus-message.c:load_message(), where it sets
mode=DATA_IS_UNTRUSTED. To compare apples to apples with an
implementation that does not validate, you would need to change the
mode to WE_TRUST_THIS_DATA_ABSOLUTELY. In the "data is trusted" mode,
more optimizations are probably possible, too. In an appropriate
embedded environment you could probably ship with the "trusted peer"
mode enabled.
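For the "disable checks, assertions, verbose mode, and tests" point
above, with the autotools build this would look roughly like the
following (a sketch; double-check the flag names against your
tree's ./configure --help before relying on them):

```
# Build libdbus/dbus-daemon with the debug overhead compiled out
# before taking any performance numbers.
./configure --disable-verbose-mode \
            --disable-checks \
            --disable-asserts \
            --disable-tests
make
```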
It would be nice if someone worked on dbus performance and cooked up
some patches. That said, for a whole desktop like GNOME or KDE, dbus
is probably 2% of overall desktop performance, so even a 50% dbus
speedup is only a 1% user-visible speedup. For non-desktop contexts,
dbus performance could be a much bigger issue.
For overall desktop performance, an implicit assumption of dbus is
that supporting async, round-trip-avoiding operation matters much
more than raw speed.
An implementation that required blocking, for example, could be
faster, since it would avoid the copy into DBusMessage: it could
simply block instead of buffering/queuing.
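To make the round-trip point concrete, here is a self-contained C
sketch of the pattern - not libdbus code, just a toy "server" on a
socketpair (the names pipelined_sum and serve are mine, not any D-Bus
API). The parent fires off every request first and only then collects
the replies, instead of blocking for a reply after each send:

```c
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

/* Toy server: echo each 4-byte request back, incremented by one. */
static void serve(int fd, int n)
{
    int v;
    for (int i = 0; i < n; i++) {
        if (read(fd, &v, sizeof v) != (ssize_t) sizeof v)
            _exit(1);
        v++;
        if (write(fd, &v, sizeof v) != (ssize_t) sizeof v)
            _exit(1);
    }
    _exit(0);
}

/* Client: send all n requests, then read all n replies.
 * One wait for the whole batch instead of n round trips. */
int pipelined_sum(int n)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return -1;
    pid_t pid = fork();
    if (pid == 0) {
        close(sv[0]);
        serve(sv[1], n);
    }
    close(sv[1]);
    for (int i = 0; i < n; i++)
        (void) write(sv[0], &i, sizeof i); /* fire off every request... */
    int sum = 0, v;
    for (int i = 0; i < n; i++) {
        (void) read(sv[0], &v, sizeof v); /* ...then collect replies */
        sum += v;
    }
    close(sv[0]);
    waitpid(pid, NULL, 0);
    return sum;
}
```

With libdbus the analogous pattern is dbus_connection_send() for each
message (letting the library queue them) rather than calling
dbus_connection_send_with_reply_and_block() every time.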
Havoc