DBus in the kernel?
michael.meeks at novell.com
Tue Jan 5 07:12:18 PST 2010
On Tue, 2010-01-05 at 15:06 +0200, Kimmo Hämäläinen wrote:
> On Mon, 2010-01-04 at 22:55 +0100, ext Lennart Poettering wrote:
> > On Mon, 04.01.10 16:40, Havoc Pennington (havoc.pennington at gmail.com) wrote:
> > > What is the rationale for the bus itself in the kernel? Seems like one
> > > big pain in the ass, and I can't guess the motivation...
> > Primarily three things:
> > 1) Getting rid of the double context switch for each msg. Right now
So - there seems to be primarily a performance rationale for this work,
which is all well and good; but have people profiled dbus to find what
is slow? Is it anything to do with the context switches (eg.)? :-)
Last I recall (from the a11y measurements that Rob did), like-for-like,
d-bus was ~2x slower than ORBit2 for the same simple, point-to-point
(ie. not via the bus) synchronous call.
If those numbers are still reasonable, then it suggests that the
context switching and pipe thrashing (laughably assuming ORBit2 is 100%
efficient and thus is pure kernel overhead) can only possibly be ~50% of
the cost, and that by doubling the speed of that we can maximally become
~25% faster overall.
Is it really worth re-writing the bus daemon inside the kernel to get
a maximum of a 25% win? :-) [ or are my numbers obsolete? ]. Could we
not perhaps get a larger win from optimising _dbus_string_validate_utf8
further for ASCII, or (indeed) avoiding calling it in many more cases?
(but of course, I haven't profiled recently either ;-).
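By "optimising for ASCII" I mean the sort of thing sketched below - an
illustration only, not the real _dbus_string_validate_utf8 (and the
fallback here checks byte structure only; the real validator must also
reject overlongs and surrogates):

```c
#include <stddef.h>

/* Simplified structural check: verifies lead/continuation byte shape
 * only; a real validator must additionally reject overlong encodings,
 * surrogates, and code points above U+10FFFF. */
static int utf8_structure_ok(const unsigned char *s, size_t len)
{
    size_t i = 0;
    while (i < len) {
        unsigned char c = s[i++];
        size_t cont;
        if (c < 0x80)                cont = 0; /* ASCII */
        else if ((c & 0xE0) == 0xC0) cont = 1; /* 2-byte lead */
        else if ((c & 0xF0) == 0xE0) cont = 2; /* 3-byte lead */
        else if ((c & 0xF8) == 0xF0) cont = 3; /* 4-byte lead */
        else return 0;                         /* bad lead byte */
        while (cont--)
            if (i >= len || (s[i++] & 0xC0) != 0x80)
                return 0;                      /* bad continuation */
    }
    return 1;
}

/* Fast path: bytes < 0x80 are always valid UTF-8, so a tight scan
 * can skip the full validator for the common all-ASCII case. */
static int validate_with_ascii_fast_path(const char *str, size_t len)
{
    const unsigned char *s = (const unsigned char *)str;
    size_t i = 0;
    while (i < len && s[i] < 0x80)
        i++;
    if (i == len)
        return 1; /* pure ASCII: valid, full check never runs */
    return utf8_structure_ok(s + i, len - i);
}
```

Since the vast majority of bus names, interface names and member names
are pure ASCII, the tight first loop is where nearly all calls would
terminate.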
Finally, if the daemon is re-written to go into the kernel, hopefully
it'll be a BSD-licensed blob (?) that can be re-used in those other
Unixen, such that we don't have to maintain the daemon indefinitely in
parallel? :-)
michael.meeks at novell.com <><, Pseudo Engineer, itinerant idiot