outside-dispatch-lock handlers

Thiago Macieira thiago.macieira at trolltech.com
Thu Mar 2 09:54:20 PST 2006


Havoc Pennington wrote:
>> >OK so say we have signals A, B, C, with handlers:
>> >
>> >A: 1, 2, 3
>> >B: 4, 5, 6
>> >C: 7, 8, 9
>> >
>> >while dispatching A, inside handler 2 we dispatch again twice; we'd
>> >first run handler 3, and then queue and run 4, 5, 6.
>>
>> I'd rather 3 weren't run until 2 finished running.
>
>That's what happens now, but it seems a bit worrisome to me because it
>means 2 can "time warp" 3, instead of only itself.
>
>So for example if we have A and B as two signals:
> A: FlagChanged new value = true
> B: FlagChanged new value = false
>
>say that handlers 3 and 6 are separate invocations of the same function,
>which stores the latest value. We'd like to run them in order so we've
>stored the correct latest value.

Hmm... that's a good point.

Yes, I agree with you that recursing should never "jump the gun": it 
shouldn't cause messages or handlers to be processed in a different order 
(except for the OOM condition, which is special).

I have to test here, but it looks like Qt might be doing exactly that with 
its "queued signal delivery" mode. I'll test and, if that turns out to be 
the case, I'll ask around whether this is the intended behaviour and why.

>> Hmm... not sure I agree. I've got to think a bit more about the
>> consequences, but I'd rather dispatching were "atomic". I.e., either
>> dispatch() does it fully or it does not. And recursively calling
>> dispatch() won't resume an earlier dispatch -- it can only start a new
>> one.

I take this back.

> - whether dispatching each message has to stay on the same "level"
>   as it started, i.e. the above discussion about whether the
>   recursive dispatch keeps going on the current message or not

No, you're right. We cannot do that for one message only. Either we do it 
for no messages or for all.

If we were to make all messages be delivered at the same dispatch level, 
we'd need to distinguish the messages that must be delivered by a nested 
dispatch. That's the DCOP solution: implementing context-based 
processing. It would, however, require a new field in the protocol 
header.

When doing a spontaneous "send_with_reply_and_block", the outgoing method 
call gets a new, locally-generated field called "context". The blocking 
function would read messages from the socket and, as it does now, queue 
without handling any messages that aren't interesting.

Unlike the current behaviour, though, it would also handle signals and 
method calls that come back carrying the same context ID.
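
Roughly, and only as a sketch: the blocking loop could look like the code 
below. The message_set_context()/message_get_context() helpers stand in 
for the proposed "context" header field, and handle_in_context() / 
queue_for_later() are placeholders for the real queuing logic; none of 
this exists in libdbus today.

#include <string.h>
#include <dbus/dbus.h>

/* Hypothetical sketch of a context-aware blocking call.  The
 * message_set_context()/message_get_context() helpers stand in for
 * the proposed "context" header field; handle_in_context() and
 * queue_for_later() are placeholders for the real queuing logic
 * (both take their own reference to the message). */
DBusMessage *
send_with_reply_and_block_in_context (DBusConnection *conn,
                                      DBusMessage    *call,
                                      const char     *context_id)
{
    dbus_uint32_t serial;

    message_set_context (call, context_id);      /* proposed new field */
    dbus_connection_send (conn, call, &serial);

    for (;;)
    {
        DBusMessage *msg;

        /* block until more data arrives on the socket */
        dbus_connection_read_write (conn, -1);

        while ((msg = dbus_connection_pop_message (conn)) != NULL)
        {
            const char *ctx;

            if (dbus_message_get_reply_serial (msg) == serial)
                return msg;                       /* our reply; caller unrefs */

            ctx = message_get_context (msg);
            if (ctx != NULL && strcmp (ctx, context_id) == 0)
                handle_in_context (conn, msg);    /* signals and calls in our context */
            else
                queue_for_later (conn, msg);      /* not interesting: wait for dispatch() */

            dbus_message_unref (msg);
        }
    }
}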

The other side of the equation is that the dispatch() function needs to 
save the context ID of the incoming message in a thread-local variable 
(or take a recursive lock and save it in the connection). While this 
value is set, any outgoing messages (errors, method replies and new 
calls) get the same context ID instead of a newly generated one.
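
A minimal sketch of that thread-local idea, again with the context 
helpers (message_get_context, message_set_context, run_handlers, 
generate_new_context_id) as hypothetical placeholders:

#include <dbus/dbus.h>

/* Hypothetical: the context ID of the message currently being
 * dispatched, one value per thread. */
static __thread const char *current_context = NULL;

static void
dispatch_one_message (DBusConnection *conn, DBusMessage *msg)
{
    const char *previous = current_context;

    /* remember the incoming message's context for the duration of its
     * handlers; nested dispatches save and restore it like a stack */
    current_context = message_get_context (msg);
    run_handlers (conn, msg);
    current_context = previous;
}

/* Called for every outgoing message (errors, method replies, new
 * calls): while a dispatch is in progress, reuse its context ID
 * instead of generating a fresh one. */
static void
stamp_outgoing_message (DBusMessage *out)
{
    if (current_context != NULL)
        message_set_context (out, current_context);
    else
        message_set_context (out, generate_new_context_id ());
}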

This isn't much different from the current message serial: in fact, 
the "context ID" could be constructed out of the connection's unique 
connection name plus the serial. The big difference is that new method 
_calls_ get the same context ID but a new message serial if they are 
sent from inside a recursed dispatch().

It is important to keep the context ID distinct from the message serial, 
because the context ID can be passed on to other applications.
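
For instance (just a sketch of the naming scheme, not an API that 
exists):

#include <stdio.h>
#include <dbus/dbus.h>

/* Build a context ID out of the connection's unique name (e.g.
 * ":1.42") and the serial of the message that started the context. */
static void
make_context_id (DBusConnection *conn, DBusMessage *msg,
                 char *buf, size_t len)
{
    snprintf (buf, len, "%s/%u",
              dbus_bus_get_unique_name (conn),
              (unsigned) dbus_message_get_serial (msg));
}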

>> >When the
>> >recursive dispatch returns, then handler 2 would be running _after_
>> >handler 9. However, a handler does not jump _other_ handlers out of
>> >order - handler 3 still runs before 4.
>>
>> I wouldn't say "after 9", but "handler 2 is running _around_ handlers
>> 4-9". I still think handler 3 should be run after 2 has finished,
>> since it's processing the same message.
>
>Maybe this differs for signals and method calls?

That's just semantics: "handler 2" was running before it caused 
dispatch() to run and will resume running after dispatch() finishes.
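
To be concrete, what I mean by running "around" is a handler that 
recurses into dispatch() in the middle of its own work, something like 
this sketch (do_first_half/do_second_half are just placeholders):

#include <dbus/dbus.h>

/* Sketch of "handler 2": it processes part of message A, then drains
 * the queue recursively, then resumes.  Everything dispatched inside
 * the recursion runs "around" it, not after it. */
static DBusHandlerResult
handler_2 (DBusConnection *conn, DBusMessage *msg, void *data)
{
    do_first_half (msg);                      /* hypothetical work */

    /* recursive dispatch: runs the remaining handlers and/or the
     * queued messages, depending on the policy being debated */
    while (dbus_connection_dispatch (conn) == DBUS_DISPATCH_DATA_REMAINS)
        ;

    do_second_half (msg);                     /* resumes after recursion */

    return DBUS_HANDLER_RESULT_NOT_YET_HANDLED;
}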

>> But going back to your A, B, C & 1-9 example, if thread α called
>> dispatch() to handle message A and, before it finished, thread β
>> called dispatch() too, why not let it handle message B?
>
>I _think_ if this is allowed then we can't preserve ordering, because
>there's no lock around invoking the handlers for A and invoking the
>handlers for B. So even if there's a lock around popping A and B (so the
>threads _get_ the messages in the right order), there's no lock to make
>the threads run handlers in the right order.

As I explored in my other email, if the binding is doing this, it must 
have a provision for relaying the messages to the correct threads.

>  α gets A
>  β gets B
>    [ enter free-for-all zone ]
>  α runs one handler for A
>  β runs a couple handlers for B
>  α runs some more handlers for A
>  β runs a couple handlers too
>
>I think the only way to make this sane would be to bind specific kinds
>of message (say the PropertyChanged signal or something) to specific
>threads, so the same kind of message would remain correctly serialized.
>But that sort of high-level thread model would have to be in the
>binding; the binding would just install a handler that forwarded
>messages to other threads, but we'd still only have one dispatch thread
>at the dbus level.

Exactly. If the binding calls dispatch() from multiple threads, it must be 
pretty confident that it can handle this kind of situation.

Of course, multi-threaded dispatching would make our lives more 
difficult if we were to implement the context-based processing 
(thread-local storage).

Another solution is to just say "screw it" and let the binding do the 
locking.
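
For illustration only (this is the binding's business, not libdbus's), 
such a binding could install a filter on its single dispatch thread that 
merely takes a reference and forwards each message to the thread that 
owns it; Mailbox, mailbox_push() and route_message() are hypothetical 
binding-side pieces:

#include <dbus/dbus.h>

/* Hypothetical per-thread mailbox maintained by the binding. */
typedef struct Mailbox Mailbox;
extern Mailbox *route_message (DBusMessage *msg);  /* binding's routing policy */
extern void     mailbox_push  (Mailbox *box, DBusMessage *msg);

/* Installed with dbus_connection_add_filter() on the one thread that
 * calls dbus_connection_dispatch().  It never runs user handlers
 * itself; it only forwards, so per-mailbox ordering is exactly the
 * order in which dispatch() saw the messages. */
static DBusHandlerResult
forward_to_owning_thread (DBusConnection *conn, DBusMessage *msg, void *data)
{
    Mailbox *box = route_message (msg);

    if (box == NULL)
        return DBUS_HANDLER_RESULT_NOT_YET_HANDLED;

    dbus_message_ref (msg);          /* the mailbox owns this reference */
    mailbox_push (box, msg);

    return DBUS_HANDLER_RESULT_HANDLED;
}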

>I think the only unresolved issue then is whether recursion "jumps" all
>remaining handlers for a message ahead in the message queue, or only
>jumps the single handler that does the recursing.

A recursive dispatch should resume processing where the outer one left 
off, even if the current message's handlers have not finished.

-- 
Thiago José Macieira - thiago.macieira AT trolltech.com
Trolltech AS - Sandakerveien 116, NO-0402 Oslo, Norway