outside-dispatch-lock handlers

Havoc Pennington hp at redhat.com
Tue Feb 28 23:01:22 PST 2006


On Tue, 2006-02-28 at 18:54 +0100, Thiago Macieira wrote:
> The semantics for the filter are very different for calls and for signals. 
> For the former, "handled" means "I handled this message". For signals, it 
> means "please hide this message from other filters".

Well, in both cases it means "stop processing the message, don't run
more handlers" - signals and methods aren't treated differently in
dbus_connection_dispatch() (except for the default handler for method
calls, which sends UNKNOWN_METHOD).

Intended uses of filters are:
 - for bindings to get signals 
 - debugging, e.g. to print all received method calls or something
 - unforeseen hacks, e.g. dbus-connection.c has a builtin "filter"
   that handles a Ping method, something like libgnome could do 
   things like that

For method calls, the expectation is that simple apps might use a filter
or just a blocking loop with message_pop; but bindings would typically
register a handler for the object paths they had objects underneath.

Right now the object path handlers and the filters return the same
DBusHandlerResult enum; I take your point that the enum's naming is more
appropriate for the object path handlers, and for filters you might
expect something more like:
 enum DBusFilterResult {
   DBUS_FILTER_RESULT_CONTINUE_PROCESSING,
   DBUS_FILTER_RESULT_STOP_PROCESSING
 };

And in fact after typing that I went back in cvs history and we used to
have:

typedef enum
{
  DBUS_HANDLER_RESULT_REMOVE_MESSAGE,     /**< Remove this message, no 
                                               further processing. */
  DBUS_HANDLER_RESULT_ALLOW_MORE_HANDLERS /**< Run any other handlers that are
                                               interested in this message. */
} DBusHandlerResult;

I think I probably changed it when adding the object tree stuff, so
things seemed more method-call-oriented than filter-oriented - but I
don't remember for sure.

> I think that this difference should be documented, or completely removed 
> (i.e., the return value for signals be completely ignored).

Indeed, dbus_connection_dispatch() could use longer docs (and could
stand to be a shorter function for that matter...)

> How about avoiding the new return type and ALWAYS calling the "outside" 
> handler for every registered handler? 
> Going further, how about always calling the handler outside the lock if 
> the message is a signal? Once a signal is dequeued from the incoming 
> list, it will not generate automatic messages. But I'm probably going too 
> fast with this, because if this is possible, then it should be possible 
> too to do the handling completely unlocked. And there must be a pretty 
> good reason why it isn't so already...

Well, the dispatch lock protects processing the message at the head of
the incoming message queue. The functions that try to touch this are:

 - dbus_connection_pop_message()
   - takes the lock, removes the message for good, drops lock
 - dbus_connection_borrow_message()
   - takes and holds the lock, returns message temporarily with 
     lock held
 - dbus_connection_return_message()
   - puts back borrowed message and unlocks
 - dbus_connection_steal_borrowed_message()
   - converts a borrow into a pop, and unlocks
 - dbus_connection_dispatch()
 - effectively this does a borrow_message(), then 
     runs the various handlers. If any handler returns
     NEED_MEMORY it returns the message and unlocks;
     if the handlers succeed, it steals the borrowed
     message permanently and unrefs it
     (I was thinking, it should probably be implemented in 
     terms of the borrow/return code which would move all 
     the locking logic to one place...)

I _think_ that if we didn't have out-of-memory handling,
dbus_connection_dispatch() could simply pop_message() at the top and not
hold the dispatch lock while running handlers. The bus daemon does need
OOM handling though.

There may be an ordering problem also... i.e. right now we're guaranteed
to run all handlers for message 1, then all for message 2, etc. If we
didn't keep a lock over the handlers, we would expect to have no
guarantees about this. That would be pretty disastrous for a threaded
app I suppose, since ordering of signals is normally significant.
You don't want to get the "toggled, new value = true" signal and the
"toggled, new value = false" signal in the wrong order.

Hum. Just thinking: in the NEED_MEMORY case, when dispatch is retried we
would normally re-run all handlers up to the one that got OOM, and I
would think this causes problems for signal handlers.

While I'm thinking about bugs in dispatch(), we also need to be checking
a flag or function != null on the filters to be sure they haven't been
removed mid-dispatch, since we make a copy of the filter list prior to
walking it.

I was just thinking of crazy ideas, like:

 - instead of having to run or not run all handlers in one call to 
   dispatch, maybe we should convert each message into a queue of 
   handlers which is persistent across dispatch calls ... does it help?
   maybe fixes the "multiple running of one handler on OOM" at least

 - does it help to add some public API like the check_for_reply()
   in dbus-connection.c
   (essentially add a way to implement send_with_reply_and_block()
    yourself, only blocking in the main loop instead of on the dbus 
    socket - right now you can't "look ahead" in the message queue
    with the public API, so you're forced to either dispatch or block 
    in order to make a method call)

But really I'm too tired to figure this out tonight ;-) please send more
thoughts.

I feel like the usage of dispatch() with threads is maybe too undefined
- I doubt anyone would just call it from a bunch of threads at once with
no locking scheme of their own, right?

A more plausible scenario to me is that there's a main loop thread that
always calls dispatch(), and then some other threads that do
send_with_reply_and_block() so they need to watch the message queue for
their reply, but not dispatch the whole queue.

So maybe the right solution is to declare the right rules about how
pop() and dispatch() are used with threads and what sort of locking you
have to do yourself, and then we can simplify things on the libdbus
level. If for example dispatch() were always called from the same thread
we can probably use the handler queue concept to be robust against
same-thread recursion and OOM, and just not have the dispatch lock.
Modifying the handler queue would require the connection lock, but we'd
drop it while actually invoking each handler.

Sounds plausible, we'll see how it looks in the morning ;-)

