Floating-point and mixed-endianness in D-Bus (was: dbus mini-summit)

Havoc Pennington hp at pobox.com
Tue Aug 9 12:20:41 PDT 2011


Hi,

Simon, fwiw GLib for example also relies on most of the things you
mention, and most likely a lot of other software does too. When
implementing libdbus it was certainly known that this was all in the
category of "not strictly guaranteed by the standard, but reliable in
practice on platforms we care about."

I can imagine some of the assumptions break on some weird
microcontroller: http://www.uclinux.org/ports/

But, I can't remember anyone turning up on this list or the GLib list
or anything complaining.

2011/8/8 Rémi Denis-Courmont <remi at remlab.net>:
> Most protocols use one of:
> - the native endianness, and do not support interchange,
> - network byte order, requiring a conversion on LE systems
> (e.g. IETF protocols),
> - little endianness (e.g. USB and many Intel and Microsoft protocols).
>
> I understand the motivation to use the native endianness if possible. But in
> practice, I think it would be faster to just use a fixed endianness and convert
> on the fly in the D-Bus implementations. In practice, systematic (selected at
> build-time) conversion is probably faster than conditional (selected at run-
> time), if only because a byte swap is faster than a conditional branch.

I don't think there needs to be a performance issue with the
conditional conversion. In theory it needs no more than one
conditional per message, which would not be remotely measurable
overhead. IIRC right now it may be implemented with a conditional per
value, but even that is probably not measurable. If it is measurable,
it's probably in something like large integer/double arrays, and it
would be quick and easy to change that to one conditional per array
instead of one per array element, even if getting to one conditional
per message is harder. So I don't think there's a real performance
consideration due to the conditional.
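To sketch what I mean by one conditional per array (hypothetical names, not the actual libdbus code):

```c
#include <stdint.h>
#include <stddef.h>

/* Byte-swap a 32-bit value; compilers typically lower this
 * to a single bswap instruction. */
static uint32_t swap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000ff00u) |
           ((v << 8) & 0x00ff0000u) | (v << 24);
}

/* Hoist the endianness check out of the loop: one branch per
 * array rather than one per element. */
static void maybe_swap_array(uint32_t *a, size_t n,
                             int msg_is_native_order)
{
    if (msg_is_native_order)
        return;              /* common case: nothing to do */
    for (size_t i = 0; i < n; i++)
        a[i] = swap32(a[i]);
}
```

With any decent compiler the swap loop also vectorizes, so even the mismatched case is cheap relative to I/O.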

The rationale for the switchable endianness, for what it's worth, was
that 1) same-machine on x86 was expected to be the common case, 2)
network byte order would mean byteswapping in that common case, and 3)
it seemed just as easy to make it conditional as to convert to a
fixed byte order.

If going to a fixed byte order, I'd probably go with little endian,
because it seems sucky to have the swap overhead in the 95% case
instead of the 5% case. Here again, it probably _only_ matters for
large arrays. Basically the big "win" available is to memcpy a large
array instead of walking over it swapping bytes. Conditional
endianness can be fine as long as you still memcpy the entire array
when possible, and a fixed byte order would be fine as long as you
still memcpy the entire array on x86.
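Concretely, the decode path could keep the memcpy fast path either way (again a hypothetical sketch, assuming 4-byte alignment of the wire data; names are invented):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

static uint32_t swap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000ff00u) |
           ((v << 8) & 0x00ff0000u) | (v << 24);
}

/* Decode an array of uint32 from a message buffer.  When the wire
 * order matches the host order, the whole array is a single memcpy;
 * the element-wise swap loop only runs in the mismatched case. */
static void decode_u32_array(uint32_t *dst, const unsigned char *wire,
                             size_t n, int wire_matches_host)
{
    memcpy(dst, wire, n * sizeof(uint32_t));  /* bulk copy either way */
    if (wire_matches_host)
        return;                               /* fast path: done */
    for (size_t i = 0; i < n; i++)
        dst[i] = swap32(dst[i]);
}
```

The same shape works whether `wire_matches_host` is computed at runtime from the message's endianness flag or fixed at build time against a single wire order.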

As always my bias is "make changes to the protocol only if it seems
really important to do so" and this seems like a toss-up at best. The
essence of when the CADT criticism (http://www.jwz.org/doc/cadt.html)
truly applies is when "basically a toss up" judgment calls get
reversed back and forth over the years. If it doesn't really matter,
then leaving it alone has appeal. Churning up code inevitably uses up
a lot of people's time.

Havoc
