dbus_connection_read_write_dispatch sometimes causing 100% cpu usage
burton at userful.com
Fri Sep 10 14:15:36 PDT 2010
Havoc Pennington <hp at pobox.com> writes:
> On Fri, Sep 10, 2010 at 4:04 PM, Burton Samograd <burton at userful.com> wrote:
>> Not sure how that's happening, but it might be due to a setjmp/longjmp
>> being called from an X IO error handler (which I'm not sure is being
>> called asynchronously or not...the X docs aren't clear on that).
> Ugh, I don't think longjmping out of an X IO error handler is a
> remotely good idea ... I'd certainly treat it as a suspicious possible
> cause of the problem.
I'm doing a longjmp out because the docs say that returning from the IO
error handler exits the program no matter what, which isn't an option
for what I'm doing: a daemon that puts up windows on multiple X servers
on a single system, where any of those X servers might die
unexpectedly. Since the daemon can't exit on the error, the longjmp was
the only way I could find to 'not return' from the error handler.
> (There are newer versions of libX11 that will let you handle the IO
> error without exiting, I think, if you have an option to use those. If
> not, you might consider an LD_PRELOAD hack instead of a longjmp hack.)
> If you are doing X stuff inside a dbus dispatch, it could certainly
> cause the problem. If you must longjmp, try to fix it not to jump over
> any dbus stack frames - setjmp() just before the X calls.
I don't have a choice about which libX11 to use, since we're developing
for Ubuntu 10.04. I'm not doing any X calls inside any dbus dispatch
functions, so I'm not sure that's the problem.
Anyway, it seems to be solved, at least well enough for our beta
release today, so I'll wait to see if testing finds the bug again. I
can't seem