D-Bus optimizations

Havoc Pennington hp at pobox.com
Fri Mar 2 08:28:21 PST 2012


Some drive-by thoughts, feel free to ignore...

* One high-level question here is which apps / usages need the
performance gains; specifically, where does this matter in an actual
desktop or mobile environment? What's the concrete use case you're
trying to fix?

i.e. "dbus 2x faster" doesn't mean much by itself; the question is
which overall user-visible thing gets faster, and by how much for the
entire operation, not just the dbus portion of it.

For example when Colin suggests doing multicast only for session bus,
are the things you care about even using the session bus?

Are there 20 usages that need to be faster and specifically bottleneck
on dbus, or only 5 that could maybe be optimized in other ways like
changing how dbus is used?

Are there plain bugs like apps subscribing to signals they don't care about?
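That kind of bug usually shows up as an over-broad match rule. As a sketch in the D-Bus match-rule syntax (the interface and member names here are made up for illustration), the difference between a rule that wakes the client for every signal on the bus and one scoped to what the app actually handles:

```
# over-broad: every signal on the bus wakes this client
type='signal'

# scoped to the one thing the app actually listens for
# (interface/member names illustrative)
type='signal',interface='org.example.Player',member='TrackChanged'
```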

* The original premise of dbus was that if max performance of IPC was
a concern you would not use dbus. Even at 2x faster, it isn't going to
be "fast". There's a risk of getting (even more) complex to support
usages that are just not appropriate...

I mean, it's not like the cost of the daemon was ever unknown... it
was considered worth it for simpler implementation, security policies,
reliable delivery, guaranteed-valid sender field, etc. Is it no longer
worth it, or is it that there are no tradeoffs to using multicast
anymore? Even if you add multicast, other aspects of dbus are still
not exactly designed with efficiency as #1 consideration... maybe
there's some other IPC tech out there that should be integrated with
dbus as a complement?
dbus_connection_negotiate_blazing_dedicated_sidechannel() ;-)

* I guess to understand whether there are tradeoffs, it seems like
it'd be important to develop patches to the spec and the man pages
(especially dbus-daemon.1) describing the multicast stuff. If you only
change the code, it's going to be very easy to overlook important
issues. It's also going to be very hard for the various dbus
reimplementations to figure out wtf is going on. Will need to describe
how the new feature negotiation works, how things like AddMatch
change, how the dbus-daemon config changes, etc.

* fwiw I think getting the semantics compatible and maintaining both
codepaths with feature negotiation is going to be... fun. And almost
certainly broken unless it has extensive test coverage. It's going to
be a lot of work to do well. If making things faster is "free" that's
one thing but if it's hard like this... needs careful thought. If I
were Simon there's no way I'd land this without test coverage of
feature negotiation and behavior of key stuff on both the old and new
codepaths....

* Have you considered multicasting from the daemon to N clients, but
not multicasting directly from a client to N clients? Not sure how
much that reduces context switching (I don't know if currently the
scheduler lets the daemon do a bunch of write() before it switches
away or if it switches to the receiving clients over and over in
between daemon writes). It would certainly reduce memcpy and syscalls
though, and wouldn't require the same sacrifices to security policies
and validation.

Here again the exact usages matter, because it matters whether they
are usually signalling a lot of clients or just one or two...

* Another idea is to make it much easier (via convenience API and
standardized approach) to set up direct connections... for example if
a certain signal is likely to be high-volume, maybe the sender of that
signal acts as its own daemon, and to "subscribe" you just connect
directly to that sender... you'd locate the sender and get its address
using the system bus, and then connect to it directly... again maybe
with nice convenience API.

Havoc
