"DBus Embedded" - a clean break

Kees Jongenburger kees.jongenburger at gmail.com
Thu Jan 20 13:58:58 PST 2011


On Thu, Jan 20, 2011 at 10:42 PM, David Zeuthen <zeuthen at gmail.com> wrote:
> Hi,
>
> On Thu, Jan 20, 2011 at 4:38 PM, Kees Jongenburger
> <kees.jongenburger at gmail.com> wrote:
>> Hello,
>>
>> On Thu, Jan 20, 2011 at 7:10 PM, Thiago Macieira <thiago at kde.org> wrote:
>>> On Thursday, 20 de January de 2011 18:42:29 Kees Jongenburger wrote:
>>>> >
>>>> > Good stuff, I've obviously been being too pessimistic! Are your bench
>>>> > marks measuring different message sizes and processor loads?
>>>>
>>>> That test focuses on latency (not much data is sent over the bus), so
>>>> perhaps this explains some of the differences?
>>>
>>> See my other reply. The problem everyone complains about is latency, not
>>> data throughput.
>>>
>>> My experiences show that data throughput is not a problem. Just send more data
>>> in each message. The problem is the big overhead in handling each message.
>>
>> One other thing that might also affect these numbers is the number of
>> concurrent callers a service allows. A simple dbus "service" only
>> handles one request at a time (at least with dbus-glib). This greatly
>> simplifies development, but it also means that if a method has a
>> processing time of one second you can only handle about 60 requests a
>> minute. A typical hammering test against a single service will show
>> these numbers.
>
> No. This is not true at all. Neither for D-Bus in general nor with dbus-glib.
>
> (It might be true if your service is single-threaded and you don't
> handle incoming calls asynchronously. But in that case the problem
> exists between chair and keyboard.)

That is the default behavior when using dbus-binding-tool to generate
synchronous methods (and it is presented as a simple way of handling
this), so I think a lot of code will follow that pattern. I also wonder
what the code in the tests looks like...
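To make the point concrete, below is a rough sketch of the two handler
shapes I mean. It is only an illustration: MyObject, my_object_do_work
and the one-second delay are made up for the example, and the exact
signatures in real code come from the glue header dbus-binding-tool
generates from the introspection XML.

/* Sketch only: a stand-in for the GObject type that the generated
 * glue expects.  In real code this is your own GObject plus the
 * header produced by dbus-binding-tool. */
#include <dbus/dbus-glib.h>
#include <glib.h>

typedef struct _MyObject MyObject;

/* Synchronous shape: the handler runs to completion inside the main
 * loop, so a one-second body caps a single service at roughly 60
 * requests per minute. */
gboolean
my_object_do_work (MyObject *obj, const gchar *input,
                   gchar **output, GError **error)
{
  g_usleep (G_USEC_PER_SEC);                 /* pretend 1 s of work */
  *output = g_strdup_printf ("done: %s", input);
  return TRUE;
}

/* Asynchronous shape: the handler returns at once and the reply is
 * sent later with dbus_g_method_return(), so other incoming calls
 * can be dispatched while the slow work is pending. */
typedef struct {
  DBusGMethodInvocation *context;
  gchar                 *input;
} WorkData;

static gboolean
finish_work (gpointer user_data)
{
  WorkData *data = user_data;
  gchar *output = g_strdup_printf ("done: %s", data->input);

  dbus_g_method_return (data->context, output);

  g_free (output);
  g_free (data->input);
  g_slice_free (WorkData, data);
  return FALSE;                              /* run only once */
}

gboolean
my_object_do_work_async (MyObject *obj, const gchar *input,
                         DBusGMethodInvocation *context)
{
  WorkData *data = g_slice_new (WorkData);

  data->context = context;
  data->input   = g_strdup (input);
  g_timeout_add_seconds (1, finish_work, data);  /* reply after 1 s */
  return TRUE;
}

Whether the pending work runs from a timeout, an idle callback or a
thread is a separate choice; the point is only that the reply no longer
has to be produced inside the handler itself.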

Greetings

