"DBus Embedded" - a clean break

Jacques Guillou jacques.guillou at gmail.com
Thu Jan 20 04:27:00 PST 2011


I have run a benchmark on my machine comparing dbus with some other
IPC mechanisms. The benchmark involves two processes connected to the
session bus: a client and a server. The client keeps making blocking
method calls that carry a byte array as a parameter. The server
handles each call by unmarshalling the byte array and sending a reply
back to the client containing the same byte array.
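For what it's worth, the shape of this benchmark (minus the bus) can be
sketched with a plain Unix socketpair standing in for the transport. The
payload size and call count below are arbitrary choices for illustration,
not the values from my actual run:

```python
import socket
import threading
import time

# Daemon-free analogue of the benchmark: blocking echo round-trips over a
# Unix socketpair. Payload size and call count are arbitrary assumptions.
PAYLOAD = b"\x2a" * 4096
CALLS = 1000

def recv_exact(sock, n):
    """Read exactly n bytes from a stream socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

client, server = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

def echo_server():
    # Server side: read the byte array and send the same bytes back
    for _ in range(CALLS):
        server.sendall(recv_exact(server, len(PAYLOAD)))

t = threading.Thread(target=echo_server)
t.start()

start = time.perf_counter()
for _ in range(CALLS):
    client.sendall(PAYLOAD)                    # blocking "method call"
    reply = recv_exact(client, len(PAYLOAD))   # blocking "reply"
elapsed = time.perf_counter() - start
t.join()
print(f"{CALLS} round-trips of {len(PAYLOAD)} bytes in {elapsed:.3f}s")
```

Running the same echo loop through the session bus adds marshalling in the
bindings plus a full extra hop through the daemon for every message, which
is where the CPU figures below come from.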

What I notice is that there seem to be two separate problems:
- The daemon is slow. On my dual-core machine, the CPU usage of the
daemon process is about 70%, whereas the client and server processes
each use about 30%.
- The libdbus implementation is not efficient.

Since libdbus is just one binding among many, and since there seems to
be no fundamental problem with the dbus wire protocol itself, I assume
there are (or could be) bindings that offer satisfactory performance.
So I assume that using an efficient binding and communicating in
peer-to-peer mode (without involving a daemon) would give good
results.
So my question: wouldn't it make sense to take the daemon out of the
communication path as early as possible? The daemon would basically
allow a client to obtain a direct communication channel to the server
it wants to talk to. Once this channel is established, the daemon is
no longer involved, and the client and server communicate directly,
still using the dbus wire protocol. I know there's already new support
for file descriptor passing in dbus, but, as far as I know, nothing
currently defines how to use the retrieved file descriptor. My
proposal would be to reuse a similar FD-passing mechanism to establish
a new, dedicated dbus connection between two processes.
This approach would have the advantage of keeping the wire protocol as
it is now, so bindings in any language would keep working as they do
now. I think changing the transport mechanism to shared memory, FIFOs
or message queues would involve far more changes to existing bindings.
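To illustrate the hand-off I have in mind, here is a sketch using raw Unix
sockets and SCM_RIGHTS (via Python's send_fds/recv_fds, available since
3.9). All names and message contents here are made up; in a real
implementation the fd would travel inside a dbus method reply, which is
exactly the part that is not specified today:

```python
import socket

# Sketch of the proposed hand-off: a "daemon" passes the client one end
# of a fresh peer-to-peer channel via SCM_RIGHTS, then drops out of the
# communication entirely.

# Control connection between client and daemon (stands in for the bus)
client_ctl, daemon_ctl = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# The daemon creates a dedicated channel: one end stays with the server,
# the other end is handed over to the client.
server_end, handoff_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Pass the client's end over the control connection; the kernel dups the
# fd into the message, so the daemon can drop its own reference.
socket.send_fds(daemon_ctl, [b"your-peer-channel"], [handoff_end.fileno()])
handoff_end.close()

# The client receives the fd and wraps it in a socket object
msg, fds, _flags, _addr = socket.recv_fds(client_ctl, 1024, 1)
direct = socket.socket(fileno=fds[0])

# From here on, client and server talk directly; the daemon is not involved.
payload = bytes(range(16))
direct.sendall(payload)
request = server_end.recv(1024)   # server reads the byte array...
server_end.sendall(request)       # ...and replies with the same bytes
echoed = direct.recv(1024)
```

The dedicated channel would then carry ordinary dbus wire-protocol
messages, so existing bindings would only need to learn the hand-off step,
not a new serialization.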

Any comments?


On Thu, Jan 20, 2011 at 10:28 AM, Alberto Mardegan
<mardy at users.sourceforge.net> wrote:
> Hi Ville,
> On 01/20/2011 09:53 AM, Ville M. Vainio wrote:
>> - It could be much (10x?) faster - switch to shared memory, posix
>> message queues for data transmission (references to relevant shared
>> memory blocks), do not do any verification
> Agree with all the above, except about usage of POSIX message queues: they
> are a limited resource, and if all we are doing is passing a handle to a
> shared memory area, a FIFO might just be as good.
> [...]
>> Thoughts?
> In a small fraction of my already limited free time, I'm studying how to
> implement a similar monster. See:
> http://lists.freedesktop.org/archives/dbus/2011-January/013964.html
> All the points you wrote above would hold. And every client could specify
> what transport it desires for the data channel; for instance:
> - FIFO + shm
> - POSIX message queues + shm (for real time apps, if POSIX queues are
> actually faster than FIFOs)
> - socket (for slower but more secure communication)
> Ciao,
>  Alberto
> --
> http://blog.mardy.it <-- geek in un lingua international!
> _______________________________________________
> dbus mailing list
> dbus at lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dbus
