Starting the kdbus discussions

Lennart Poettering mzqohf at 0pointer.de
Fri Jan 17 12:06:07 PST 2014


On Fri, 17.01.14 19:42, Simon McVittie (simon.mcvittie at collabora.co.uk) wrote:

> If kdbus' sysctl interface is atomic with respect to sending and
> receiving messages, then you've solved the problem of sending messages
> and having them not interleave, but not the problem of getting incoming
> messages to the main-context that was prepared to deal with them.

I don't find this convincing, I must say. As long as the event loop
running this is written so that it unlocks the connection object
before going to sleep, it is certainly possible to get this all done
without the indirection.

A thread which wants to do a synchronous message call would simply take
the lock, write the message (or as much of it as it can), then unlock
and poll() on the socket, and when it is writable/readable again, write
the rest, of course only after retaking the lock. It does this as long
as the message is not fully written or the reply not fully received. As
many threads as you like can do this in parallel. This would mean that
a message one thread incompletely writes could be finished by another
thread, and so on. But that's totally OK. Of course, you could run into
thundering herd problems, but given that this is a pretty unlikely case
it shouldn't really matter much. (And you can certainly avoid the
thundering herd issue too, with a second lock.)
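
Roughly, the write side of that loop could look like the following
minimal C sketch, assuming a plain non-blocking connection fd. The
shared_conn/flush_some names are made up for illustration (not actual
kdbus or libsystemd-bus API), and a real implementation would keep a
queue of outgoing messages so that any thread can finish a message
another thread started:

/* Minimal sketch of the lock/write/unlock/poll() pattern described
 * above; simplified to a single pending outgoing message. */
#include <poll.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <sys/types.h>
#include <unistd.h>

struct shared_conn {
        int fd;                   /* non-blocking socket to the bus */
        pthread_mutex_t lock;     /* protects the partial-write state */
        const char *out_buf;      /* message currently being written */
        size_t out_len, out_done; /* total length, bytes written so far */
};

/* Write as much of the pending message as the socket accepts right
 * now; returns true once the message is fully written. */
static bool flush_some(struct shared_conn *c) {
        while (c->out_done < c->out_len) {
                ssize_t n = write(c->fd, c->out_buf + c->out_done,
                                  c->out_len - c->out_done);
                if (n <= 0)
                        return false; /* would block, try again later */
                c->out_done += (size_t) n;
        }
        return true;
}

/* Synchronous call: lock, write what we can, unlock, poll(), retake
 * the lock, repeat until the message is out. */
int sync_call(struct shared_conn *c, const char *msg, size_t len) {
        pthread_mutex_lock(&c->lock);
        c->out_buf = msg;
        c->out_len = len;
        c->out_done = 0;

        for (;;) {
                bool done = flush_some(c);
                pthread_mutex_unlock(&c->lock); /* never sleep locked */

                if (done)
                        break;

                struct pollfd p = { .fd = c->fd, .events = POLLOUT };
                (void) poll(&p, 1, -1); /* sleep until writable again */

                pthread_mutex_lock(&c->lock);
        }

        /* Reading the reply follows the same pattern with POLLIN,
         * looping until the reply for this call's serial has arrived
         * (omitted for brevity). */
        return 0;
}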

PulseAudio's client libraries support something like that (though
it's quite different, since it does explicit locking rather than
doing it implicitly).

Now, if you care about making sure that specific filters are run only
in specific threads, that's doable too: every thread which dispatches
the main loop would just have to locally queue messages not intended
for itself and wake the right thread. By the time it goes back into
poll(), that other thread is already awake and can take the lock and
dispatch the message.
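
The hand-off itself could be as simple as a per-thread inbox plus an
eventfd that the owning thread already has in its poll() set. Again
just an illustrative sketch; the thread_ctx/route_to_thread names are
made up, not real libsystemd-bus API:

/* Sketch of routing a message to the thread whose filters want it:
 * queue it on that thread's inbox and kick its wake-up fd. */
#include <pthread.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

struct msg;                         /* opaque bus message */

struct msg_item {
        struct msg *m;
        struct msg_item *next;
};

struct thread_ctx {
        int wake_fd;                /* part of this thread's poll() set */
        pthread_mutex_t inbox_lock;
        struct msg_item *inbox;     /* messages queued for this thread */
};

int thread_ctx_init(struct thread_ctx *t) {
        t->inbox = NULL;
        pthread_mutex_init(&t->inbox_lock, NULL);
        t->wake_fd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
        return t->wake_fd < 0 ? -1 : 0;
}

/* Called by whichever thread happened to read the message from the
 * connection, when its own filters don't match it. */
void route_to_thread(struct thread_ctx *target, struct msg_item *item) {
        pthread_mutex_lock(&target->inbox_lock);
        item->next = target->inbox;
        target->inbox = item;
        pthread_mutex_unlock(&target->inbox_lock);

        uint64_t one = 1;
        (void) write(target->wake_fd, &one, sizeof(one)); /* wake it up */
}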

It's not really that hard... 

Note that libsystemd-bus is explicitly not thread-safe, though it is
thread-aware. In contrast to gdbus and libdbus1 we don't want to play
locking games, which shifted the focus from
one-shared-connection-per-process to
one-shared-connection-per-thread. We believe this suits the global
ordering model of dbus better, and makes our code a lot simpler.
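
In that model each thread simply opens and owns its own connection,
so the library never needs to lock anything internally. A minimal
sketch, assuming the sd-bus calls sd_bus_open_user() and
sd_bus_unref() as they later shipped in libsystemd:

/* One-connection-per-thread: every worker opens a private bus
 * connection and never touches connections owned by other threads. */
#include <pthread.h>
#include <stddef.h>
#include <systemd/sd-bus.h>

static void *worker(void *arg) {
        sd_bus *bus = NULL;
        (void) arg;

        if (sd_bus_open_user(&bus) < 0) /* private connection for this thread */
                return NULL;

        /* ... issue calls, install filters, run this thread's loop ... */

        sd_bus_unref(bus);
        return NULL;
}

int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        return 0;
}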

Lennart

-- 
Lennart Poettering, Red Hat
