D-Bus optimizations

Havoc Pennington hp at pobox.com
Sat Mar 3 10:00:02 PST 2012


Hi,

On Sat, Mar 3, 2012 at 11:38 AM, Alberto Mardegan
<mardy at users.sourceforge.net> wrote:
> this sounds like an excellent idea. It's actually possible to implement
> such side channels already now (any process can register a D-Bus
> server), I think that what we miss is, as you say, the possibility to
> relax the protocol (optionally skip the validation?) and somehow make it
> easier to setup.

Something you could do now, with the fd-passing support, would be to
open the side channel by passing an fd from the server; then the
server wouldn't even need to listen on an extra socket.

You could then skip the feature negotiation and authentication (since
they were already done on the main connection), maybe disable
validation, etc. You could also have concepts like a "read-only"
channel (one that sends signals but doesn't receive anything). I don't
know exactly what's useful.
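To make the fd-passing idea concrete, here's a minimal sketch of handing a file descriptor across a Unix socket with SCM_RIGHTS, the kernel mechanism such a side channel would ride on. This is illustrative only, not dbus API; the function names are made up.

```c
/* Sketch: pass a file descriptor over a connected AF_UNIX socket with
 * SCM_RIGHTS. A bus could use this to hand a client a pre-authenticated
 * side-channel fd. Hypothetical helpers, not dbus API. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send fd_to_pass over the connected Unix socket 'sock'; 0 on success. */
int send_fd(int sock, int fd_to_pass)
{
    char byte = 0;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    /* union guarantees alignment of the control buffer */
    union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } ctrl;
    struct msghdr msg = { 0 };
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof ctrl.buf;

    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type = SCM_RIGHTS;       /* "this ancillary data is fds" */
    cm->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cm), &fd_to_pass, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

/* Receive a passed fd from 'sock'; returns the new fd, or -1 on error. */
int recv_fd(int sock)
{
    char byte;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } ctrl;
    struct msghdr msg = { 0 };
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof ctrl.buf;

    if (recvmsg(sock, &msg, 0) != 1)
        return -1;
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    if (!cm || cm->cmsg_type != SCM_RIGHTS)
        return -1;
    int fd;
    memcpy(&fd, CMSG_DATA(cm), sizeof(int));
    return fd;
}
```

The received fd refers to the same open file description as the sender's, so the server could create a socketpair, keep one end, and pass the other to the client as the "already authenticated" channel.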

> However, if each service (or framework) created its own D-Bus server,
> the number of connections per client might become a problem.

It would have to be done only when it was "worth it".

It might also be worth looking at exactly how the daemon bogs down, if
you haven't: is it writing to all clients before context switching
away to any of them, or is there a context switch for every write? Is
the crappy dbus-mainloop.c causing a problem, and would a really nice,
fast mainloop like libev help? Should the bus daemon use threads in
some way to minimize the serialized bottleneck? Maybe there's some
garbage algorithm in the bus that takes a lot longer than it ought to
when message queues get long or the number of clients gets high?

Another line of thought: in principle the bus daemon is the same
architecture/pattern as the X server (lots of interacting clients on
one "bus"). So whenever there's an issue with dbus, it's worth asking
what X does, or why X doesn't have this problem, etc. I think there's
been a lot of wrestling with the X server and scheduler priorities
before - whatever the "fix" was there, has it been applied to dbus?

I don't remember for sure, but I think the bus used to read exactly
one message per read() in order to avoid a memcpy() out of the read
buffer; maybe, for context-switching reasons, it's better to read into
a huge buffer and then copy each message out of it, instead of
avoiding that memcpy. Or the avoid-memcpy-using-more-reads
(de)optimization could be applied only to messages above some
threshold size?
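The buffered strategy could look something like the sketch below: one large read() pulls in many messages, and the parser then memcpy()s each one out. This uses a toy host-endian length-prefixed framing as a stand-in for the real dbus wire format, and the helper names are made up.

```c
/* Sketch of the "one big read, then memcpy each message out" strategy.
 * Toy framing: 4-byte host-endian length, then payload. Not the dbus
 * wire format; names are hypothetical. */
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* One syscall can pull in many queued messages at once. */
ssize_t bulk_read(int fd, char *buf, size_t buflen)
{
    return read(fd, buf, buflen);
}

/* Copy the next complete payload out of buf into out, advancing *offset.
 * Returns the payload length, or -1 if no complete message remains. */
ssize_t extract_message(const char *buf, size_t avail, size_t *offset,
                        char *out, size_t outlen)
{
    if (avail - *offset < 4)
        return -1;                       /* header not fully buffered */
    uint32_t len;
    memcpy(&len, buf + *offset, 4);
    if (len > outlen || avail - *offset - 4 < len)
        return -1;                       /* payload not fully buffered */
    memcpy(out, buf + *offset + 4, len); /* the extra memcpy being traded
                                            for fewer read() syscalls */
    *offset += 4 + len;
    return (ssize_t)len;
}
```

The trade-off the email describes is exactly that second memcpy(): the old one-message-per-read() approach avoids it, but at the cost of one syscall (and potentially one context switch) per message.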

You could imagine trying to ensure that if the bus has 20 signal
messages, and a bunch of clients that should each receive all 20, it
ideally does one read() for all 20, then one write() to each client -
or one multicast write() for all clients - to send them. This would
involve somehow detecting a sequence of messages with the same
recipient list, and then deciding to batch-write them. Each time a
message is queued for a client, you could try to save some sort of
"this goes to this set of connections" information, and then when
flushing the client queues an "optimizer" could use writev() and
multicast to write as many buffers to as many clients as possible at
once. I don't know if you could get anywhere here.
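The writev() half of that "optimizer" might look like this sketch: several queued message buffers go to one client in a single syscall. The queue layout is hypothetical, not the bus's actual data structures.

```c
/* Sketch: flush several queued messages to one client with a single
 * writev(), instead of one write() per message. Hypothetical queue
 * layout, not the bus daemon's. */
#include <sys/types.h>
#include <sys/uio.h>

#define MAX_BATCH 16

struct queued_msg {
    const void *data;   /* serialized message bytes */
    size_t len;
};

/* Flush up to n queued messages to fd in one writev() call.
 * Returns total bytes written, or -1 on error. Note: a real
 * implementation must handle short writes and re-queue the rest. */
ssize_t flush_batch(int fd, const struct queued_msg *q, int n)
{
    struct iovec iov[MAX_BATCH];
    if (n > MAX_BATCH)
        n = MAX_BATCH;
    for (int i = 0; i < n; i++) {
        iov[i].iov_base = (void *)q[i].data;
        iov[i].iov_len = q[i].len;
    }
    return writev(fd, iov, n);  /* one syscall for the whole batch */
}
```

The multicast part is harder: AF_UNIX sockets have no multicast, so "one write for all clients" would need either a different transport or kernel help, which may be part of why it hasn't been done.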

Just throwing out ideas, is all.

Havoc
