DBus Threading in a Late-loading module

nf2 nf2 at scheinwelt.at
Wed Apr 4 18:33:35 PDT 2007


Thiago Macieira wrote:
> nf2 wrote:
>   
>> I believe that glib main loop integration should be default for KDE on
>> unixes anyway, because it would be a lot easier to write asynchronous
>> common infrastructure libraries.*) But i know that a lot of KDE
>> developers don't like that idea. ;-)
>>
>> *) http://www.scheinwelt.at/~norbertf/common_main_loop/
>>     
>
> I've told you on IRC: I don't believe that to be true.
>
> I know the reaction KDE developers would have to a proposal of making glib 
> mandatory. Regardless of the technical benefits and drawbacks, that would 
> be a heated discussion.
>
> Let's not get into that.
>   
I guess it will be necessary one day, because IMHO the "common 
infrastructure layer" below the desktops is too thin, which is one of 
the major reasons why lots of things still suck (file management, for 
instance) - and glib+mainloop is a convenient platform for writing those 
infrastructure libraries... But it takes time for people to realize 
that... :-)
>   
> There's no doc for that because there is no API. In perfect Qt-style API, 
> everything is handled behind the scenes for you. No extra library is 
> exposed in the front-end API.
>
>   
I was referring to your sentence "glib main loop integration is optional 
and can be disabled at
runtime". How can you turn it off?

> We can call that dbus_connection_set_watch_functions_if_unset() :-)
>   
The conceptually prettiest solution would of course be Qt using the 
dbus-glib binding along with glib main loop integration...

>   
>
> Some versions of glib had a cumulative performance degradation due to Qt 
> timers -- which are used extensively. I profiled a section of glib that 
> spent a great deal of the CPU time in a tight while loop searching for 
> the proper position (generally, the end) of a singly-linked list.
>
> Since Qt-based applications create and destroy timers a lot, that proved 
> to be a bottleneck.
>   
If a single GSource handles all Qt timers, then creating and destroying 
timers isn't even noticed by the glib main loop. It only changes the 
timeout calculated in timerSourcePrepare(GSource *source, gint 
*timeout).
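To illustrate the point, here is a minimal sketch (hypothetical names, not the actual Qt code) of what such a prepare step amounts to: one source covers all Qt timers, and prepare merely recomputes the poll timeout from the soonest pending timer, so adding or removing a timer never touches the glib source list itself.

```c
#include <assert.h>

/* Find the soonest absolute expiry (in ms) among the pending timers. */
static long next_expiry(const long *expiries, int n)
{
    long soonest = expiries[0];
    for (int i = 1; i < n; i++)
        if (expiries[i] < soonest)
            soonest = expiries[i];
    return soonest;
}

/* Roughly what timerSourcePrepare() does: fill in the timeout that
 * the main loop will pass to poll().  Timer churn only changes this
 * number; no GSource is added or removed. */
static void compute_timeout(const long *expiries, int n, long now,
                            int *timeout)
{
    if (n == 0) {
        *timeout = -1;                 /* no timers: poll() may block */
        return;
    }
    long remaining = next_expiry(expiries, n) - now;
    *timeout = remaining > 0 ? (int)remaining : 0;
}
```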

Perhaps you profiled an application that used almost no CPU time 
overall. In that case the share of CPU time spent inside the glib main 
loop naturally appears high (due to its more complex design).

Another reason might be an incorrect calculation of timeouts in 
qeventdispatcher_glib.cpp, causing the main loop to iterate too often...

Oops: the glib docs say that "For timeout sources, the prepare and check 
functions both return TRUE if the timeout interval has expired." 
Apparently timerSourcePrepare() doesn't do that! It always returns 
FALSE. This bug might cause an unnecessary poll()*) call with a zero 
timeout -> two main loop iterations for a timer that has already 
expired.
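A sketch of the fix the glib docs imply (again with hypothetical names): prepare should report the source as ready once the interval has expired, so the loop can dispatch immediately instead of first running an extra poll() with a zero timeout.

```c
#include <assert.h>
#include <stdbool.h>

/* Corrected prepare logic for a timer source: return TRUE when the
 * timeout interval has expired, per the GSourceFuncs contract. */
static bool timer_prepare_fixed(long expiry_ms, long now_ms,
                                int *timeout_ms)
{
    long remaining = expiry_ms - now_ms;
    if (remaining <= 0) {
        *timeout_ms = 0;
        return true;    /* expired: ready now, no extra loop iteration */
    }
    *timeout_ms = (int)remaining;
    return false;       /* not yet: poll() may sleep this long */
}
```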

And I assume that Qt is using "zero-wait" timers for idle callbacks. 
("For idle sources, the prepare and check functions always return TRUE 
to indicate that the source is always ready to be processed")
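For comparison, an idle source's prepare (sketch) is trivial: always ready, always a zero timeout - which is exactly what a "zero-wait" timer amounts to.

```c
#include <assert.h>
#include <stdbool.h>

/* Idle-source prepare per the glib docs: the source is always ready,
 * so the poll timeout is forced to zero. */
static bool idle_prepare(int *timeout_ms)
{
    *timeout_ms = 0;
    return true;
}
```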

*) AFAIK polls are expensive in the glib main loop, because it has to 
line up all the fds found in the GSources...

http://developer.gnome.org/doc/API/2.0/glib/glib-The-Main-Event-Loop.html#GSourceFuncs

norbert
