D-Bus optimizations

Rodrigo Moya rodrigo at gnome-db.org
Tue Feb 28 02:34:06 PST 2012


On Tue, 2012-02-28 at 05:06 -0500, Colin Walters wrote:
> On Mon, 2012-02-27 at 13:52 +0100, Rodrigo Moya wrote:
> 
> > I didn't know about this binding, so I'll try to keep you posted so that
> > you can update it. As I said, the bindings shouldn't need much change,
> > apart from what I mentioned before
> 
> But the issue is more that if they *aren't* updated, they actually
> break, right? We need to at least think about backwards compatibility
> here.  How far can we get with a scheme where this appears as a
> negotiated feature?  Can it work to e.g. send a signal via multicast
> to 3 peers, and also via regular socket sendmsg to another peer 
> simultaneously?
> 
we are thinking of having a proxy that supports old clients, routing
all traffic from the SOCK_STREAM sockets to the multicast groups.
Sorry, I forgot to mention it in my blog post.
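To give an idea of the shape of that proxy, here is a rough sketch of
the forwarding loop. Note this is illustrative only: the thread doesn't
specify the socket family or framing the multicast branch uses, so the
multicast side below is a stand-in, and the group address and port are
made up:

    /* Illustrative compatibility-proxy loop: read raw bytes from a
     * legacy SOCK_STREAM client and re-send them to a multicast group.
     * A real proxy would have to respect D-Bus message boundaries
     * rather than forwarding arbitrary chunks, and the transport here
     * (IPv4 UDP multicast) is only a placeholder. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void proxy_loop(int legacy_fd) /* accepted SOCK_STREAM fd */
    {
        int mc_fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in group;

        memset(&group, 0, sizeof(group));
        group.sin_family = AF_INET;
        group.sin_addr.s_addr = inet_addr("239.0.0.1"); /* made up */
        group.sin_port = htons(7000);                   /* made up */

        char buf[8192];
        ssize_t n;

        while ((n = read(legacy_fd, buf, sizeof(buf))) > 0)
            sendto(mc_fd, buf, (size_t)n, 0,
                   (struct sockaddr *)&group, sizeof(group));

        close(mc_fd);
    }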

>         "Shared memory: this has no proof-of-concept code to look at, but
>         was a (maybe) good idea, as it would mean peers in the bus would
>         use shared memory segments to send messages to each other. But
>         this would mean mostly a rewrite of most of the current D-Bus
>         code, so maybe an option for the future, but not for the short
>         term."
> 
> So now we're saying that there might be ANOTHER transition in the
> future?  At the expected timescales here I really think
> that if we have at least a potentially much better plan, we should
> consider it now.  Consider the cost of refactoring libdbus/gdbus/etc.
> once versus doing it in a smaller way twice.
> 
> If both optimizations are eventually implemented, and we support
> backwards compatibility, we have to consider *all three* schemes
> being in use simultaneously.  Could the bus daemon e.g. route
> a signal to 2 peers via multicast, once via regular sockets,
> and twice via shared memory?
> 
I mentioned the shared memory solution because it was looked at, but
it hasn't been pursued. IIRC, the idea was to have different shared
memory segments, some read-write and some read-only, which I think
makes things too complicated to be worth pursuing. I might be wrong,
but I have never seen it suggested on this mailing list.
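Concretely, the setup I remember looked something like the sketch
below (POSIX shared memory; the segment name and size are
hypothetical): each peer creates a segment it maps read-write, and
every other peer maps it read-only, and keeping all of those mappings
and permissions straight across the bus is where it gets complicated:

    /* Rough sketch of the RW/read-only segment idea with POSIX shared
     * memory. Error handling omitted; name and size are hypothetical.
     * Build with -lrt on older glibc. */
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define SEG_NAME "/dbus-peer-42"  /* hypothetical per-peer name */
    #define SEG_SIZE 65536

    /* Owning peer: create its segment and map it read-write. */
    static void *map_own_segment(void)
    {
        int fd = shm_open(SEG_NAME, O_CREAT | O_RDWR, 0600);
        ftruncate(fd, SEG_SIZE);
        void *p = mmap(NULL, SEG_SIZE, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        close(fd);
        return p;
    }

    /* Any other peer: map the same segment read-only. */
    static void *map_peer_segment(void)
    {
        int fd = shm_open(SEG_NAME, O_RDONLY, 0);
        void *p = mmap(NULL, SEG_SIZE, PROT_READ, MAP_SHARED, fd, 0);
        close(fd);
        return p;
    }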

> In all of this work it's really *really* crucial
> to have some reference benchmark.  What are we trying to optimize?
> This blog post by Alban is good:
> http://alban-apinc.blogspot.com/2011/12/d-bus-traffic-pattern-with-telepathy.html
> 
yes, we have a simple test suite (dbus-ping-pong, as mentioned in
Alban's blog posts) which we are using right now for measuring
round-trip times. With the current state of the D-Bus branch there are
still some things to be done before we can confirm the improvements,
but I have two VMs set up to measure the timings, one with the current
D-Bus and one with the multicast branch, so as soon as I have some
real numbers, I'll publish them.
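The idea of the test is just timing synchronous round trips; with
libdbus it boils down to something like this (an illustrative
re-implementation, not the actual dbus-ping-pong code):

    /* Time N synchronous round trips to the bus driver via its
     * org.freedesktop.DBus.Peer.Ping method.
     * Build: cc ping.c $(pkg-config --cflags --libs dbus-1) */
    #include <dbus/dbus.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        DBusError err;
        dbus_error_init(&err);

        DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
        const int iterations = 1000;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < iterations; i++) {
            DBusMessage *msg = dbus_message_new_method_call(
                "org.freedesktop.DBus", "/org/freedesktop/DBus",
                "org.freedesktop.DBus.Peer", "Ping");
            DBusMessage *reply =
                dbus_connection_send_with_reply_and_block(conn, msg,
                                                          -1, &err);
            dbus_message_unref(msg);
            if (reply)
                dbus_message_unref(reply);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e3;
        printf("%d round trips, %.1f us each\n",
               iterations, us / iterations);
        return 0;
    }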

Also, once this is done, I have a glib branch with all the needed
changes to GDBus, so we will be able to measure whole-desktop
performance with both the current and the new D-Bus.

> But then I don't see any citation later of improved numbers for
> Telepathy.  There are synthetic benchmarks like 
> "10 D-Bus signal delivered to 10 recipients with a high priority
> dbus-daemon" but that doesn't give us a good idea how it
> affects Telepathy.
> 
> It may make sense to investigate different mechanisms for the session
> versus the system bus.  The system bus is designed for IPC between
> security domains and I think makes roughly the right set of
> tradeoffs for that, whereas the session bus could be a lot faster
> if we e.g. stopped validating messages at dbus-daemon level and
> just allowed invalid UTF-8 generated inside telepathy to crash
> empathy.
> 
yes, there are a lot of other improvements that could be done, message
validation being one of them. Note though that with multicast we
already remove one of the validation passes, as the daemon doesn't do
anything with most of the messages sent to the bus.
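As a toy illustration of the per-message work that disappears: today
the daemon has to walk every string field checking for things like
invalid UTF-8, conceptually along these lines (the real validation in
libdbus is much stricter, also covering types, lengths, object paths
and signatures):

    /* Minimal UTF-8 structural check of the kind applied to string
     * fields. Deliberately simplified: it does not reject overlong
     * encodings or surrogate ranges, unlike a real validator. */
    #include <stdbool.h>
    #include <stddef.h>

    static bool is_valid_utf8(const unsigned char *s, size_t len)
    {
        size_t i = 0;
        while (i < len) {
            unsigned char c = s[i];
            size_t extra;

            if (c < 0x80)                extra = 0; /* ASCII */
            else if ((c & 0xE0) == 0xC0) extra = 1; /* 2-byte seq */
            else if ((c & 0xF0) == 0xE0) extra = 2; /* 3-byte seq */
            else if ((c & 0xF8) == 0xF0) extra = 3; /* 4-byte seq */
            else                         return false; /* bad lead */

            if (extra > 0 && i + extra >= len)
                return false;                 /* truncated sequence */
            for (size_t j = 1; j <= extra; j++)
                if ((s[i + j] & 0xC0) != 0x80)
                    return false;             /* bad continuation */
            i += extra + 1;
        }
        return true;
    }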



