Compatibility between D-Bus and kdbus
simon.mcvittie at collabora.co.uk
Tue Nov 25 03:08:19 PST 2014
On 25/11/14 02:40, Thiago Macieira wrote:
> I'm wondering if the same solution should be applied to the session bus
> An alternative solution would be for the "trusted" connection to check if
> there are any files in /etc/dbus-1/session.d. If there aren't, it can assume
> that trusted == untrusted.
That's an interesting idea.
> PS: we should also find a name that conveys that you *want* the new bus type.
Yes... but it's more like "you should prefer the new bus type, but only
after you have made your code less trusting".
>> I'm also very tempted to propose a syntax for an opt-in kdbus-like
>> security model (which would take precedence over system.conf/system.d)
> I like this idea, with the proviso that Lennart pointed out: we need to first
> include the metadata in the dbus1 stream messages so the application can sort
> out the policy decisions like the kdbus implementation would.
On Unix sockets I don't think we can get more metadata than we already
have (uid, pid, LSM context), apart from possibly the primary GID.
Also, there was some resistance to the "per-message credentials" thing
from Linux kernel people, who seemed to be very much of the opinion that
the "early" credentials (from socket(), connect() or
authentication-handshake time, or the kdbus equivalents) should be the
only thing that mattered; in particular, they didn't like the idea of
propagating CAP_*, on the basis that, for instance, it would be
reasonable for a semi-privileged user/context to be able to ask systemd
"please reboot gracefully if that's OK?", without needing to be
sufficiently privileged to be able to force an abrupt reboot at any time.
We could certainly introduce message headers with the semantics of "this
is the uid/pid/LSM context that opened the connection, and no other" to
avoid services needing that round-trip: header fields
INITIAL_UNIX_USER_ID (uint32) and perhaps INITIAL_PROCESS_ID (uint32)?
In principle we could also include the various LSMs' contexts, just like
we could include them in GetConnectionCredentials, but someone who knows
the relevant LSMs and can test and document them should do that, not me.
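To illustrate the shape of that proposal, here is a toy sketch of how a service might consume such header fields. The field codes (20, 21) and all names here are invented for illustration; real codes would have to be assigned by the D-Bus Specification.

```python
# Hypothetical sketch: the proposed INITIAL_UNIX_USER_ID and
# INITIAL_PROCESS_ID header fields, modelled as a dict mapping header
# field code -> value. Field codes 20 and 21 are made up.

HEADER_INITIAL_UNIX_USER_ID = 20   # hypothetical field code, uint32
HEADER_INITIAL_PROCESS_ID = 21     # hypothetical field code, uint32

def initial_credentials(header_fields):
    """Extract the connection-time (not per-message) credentials from a
    message's header fields, if the sender's bus attached them."""
    return (header_fields.get(HEADER_INITIAL_UNIX_USER_ID),
            header_fields.get(HEADER_INITIAL_PROCESS_ID))

# A service could then make its policy decision without a round-trip
# to GetConnectionCredentials:
uid, pid = initial_credentials({20: 1000, 21: 4242})
```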
> This might be a problem. Right now, QtDBus assumes that any match rule it adds
> will be handled successfully. If the resource limit is low enough that an
> application could hit it, we'll need to start handling the failure case.
Yes, and that's hard; GDBus doesn't handle this either. On the other
hand, at least kdbus adds match rules via a synchronous ioctl rather
than an async message.
Somewhere else (in this thread? on the Google Code project?) I suggested
adding an ioctl for "atomically replace this bloom filter with this
bloom filter", which would let a binding consolidate two similar match
rules A and B into a broader rule, say A|B|C, without needing to worry
about exceeding the limit again. Pseudocode:
    add match rule D
    catch resource limit exceeded:
        find two sufficiently similar match rules A and B
        atomically replace A with A|B|C
        retry adding D
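The pseudocode above can be made concrete with a toy model. Match rules are modelled here as frozensets of message keys, and the "kernel" as a table with a hard per-connection limit plus the suggested atomic-replace operation; all names and the rule representation are invented for illustration.

```python
class MatchRuleTable:
    """Toy model of a kernel-side match-rule table with a hard limit."""

    def __init__(self, limit):
        self.limit = limit
        self.rules = set()

    def add(self, rule):
        if rule not in self.rules and len(self.rules) >= self.limit:
            raise OSError("resource limit exceeded")
        self.rules.add(rule)

    def remove(self, rule):
        self.rules.discard(rule)

    def atomic_replace(self, old, new):
        # The suggested ioctl: swap one rule for a broader one in a
        # single step, so the rule count never exceeds the limit
        # mid-operation.
        self.rules.remove(old)
        self.rules.add(new)

def add_with_consolidation(table, rule):
    """On hitting the limit, merge two similar rules into a broader
    one and retry, as in the pseudocode above."""
    try:
        table.add(rule)
    except OSError:
        # "Sufficiently similar" is modelled crudely here as the two
        # smallest rules.
        a, b = sorted(table.rules, key=len)[:2]
        table.atomic_replace(a, a | b)  # broader rule covers both
        table.remove(b)                 # now redundant; frees a slot
        table.add(rule)                 # retry adding the new rule
```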
> I'm not sure we need to keep the total ordering like you described. We can
> probably relax it a bit to simply ensure causality.
My experience from implementing a D-Bus subset over link-local multicast
on OLPC/Sugar, which explicitly didn't have better than causal ordering,
is that application authors don't understand causal ordering.
> Will systemd-kdbus provide [o.fd.DBus] on the bus so applications that make
> calls directly be able to continue working?
My understanding is "no, but client libraries may catch those messages
and fake a reply".
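A sketch of what "catch those messages and fake a reply" could look like: before a method call addressed to the org.freedesktop.DBus name reaches the (absent) bus driver, the client library answers some well-known methods itself. Everything here is a toy model, not any real binding's API.

```python
# Toy dispatcher: intercept calls to the bus driver name and synthesize
# the replies a dbus-daemon would have produced. The bus id string and
# all function names are invented for illustration.

DRIVER = "org.freedesktop.DBus"

def dispatch_call(destination, method, unique_name, send_to_bus):
    if destination == DRIVER:
        if method == "Hello":
            # The unique name was already assigned at connect time.
            return unique_name
        if method == "GetId":
            return "0123456789abcdef"  # made-up bus id
        raise NotImplementedError(method)
    # Anything else really goes out on the wire.
    return send_to_bus(destination, method)
```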
There's an ioctl for that
There are ioctls for that
The equivalent is telling systemd to do equivalent things, although that
isn't sufficient if there is going to be a non-systemd implementation of
> Aside from the policy rules, there's no discussion on what a custom endpoint
> can do. Given that, I assume that custom endpoints are fully capable and,
> therefore, can be used for custom application buses (i.e., multiple
> applications, owning names, etc.).
Last time I looked, custom endpoints were "almost fully capable" by
default: they do not implicitly get the SEE permission, unlike the "bus"
endpoint. My understanding is that you're meant to give them to
containerized/sandboxed applications to use instead of the real bus.
> But if that's the case, how would one implement a peer-to-peer connection? Or
> should it simply be a convention that P2P connections are really regular
> buses, except that no one owns any names, there are no policy restrictions and
> that the only two connections are :1.1 and :1.2?
The latter convention is my understanding. (See the bug about
documenting this stuff in the D-Bus Specification.)
> Are 64-bit counters without reuse enough?
> Think of a bus running for a couple of
> years on a server, like Apache httpd using a bus to communicate with its
64 bits are a lot. If my arithmetic is correct, and we pessimistically
assume that this server creates one new D-Bus connection per nanosecond
(10**9 connections per second), it can run for nearly 600 years before
this counter wraps around.
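The back-of-the-envelope arithmetic checks out:

```python
# A 64-bit counter consumed at one new connection per nanosecond.
connections_per_second = 10**9
seconds_per_year = 365.25 * 24 * 60 * 60   # ~3.156e7
years_to_wrap = 2**64 / connections_per_second / seconds_per_year
# 2**64 is about 1.8e19, so this comes out to roughly 584 years.
```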
> === The "1." in unique connection names ===
> It's not really necessary. Just because dbus-daemon does it does not mean that
> kdbus needs to. It's not necessary to satisfy the rule that all connection
> names contain at least one dot since unique connection names do not pass the
> validation anyway (the ":" character is not allowed).
The validation rules in the D-Bus Specification clearly do apply to
unique connection names, because one of them ("Only elements that are
part of a unique connection name may begin with a digit") *only* applies
to unique connection names.
I've always interpreted it as "unique connection names have a colon
before the first element, but may not have a colon elsewhere"; it would
be good for the specification to say that explicitly, but that's the
validation rule that at least libdbus and dbus-python use.
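That interpretation can be sketched as a validation function. This is my reading of the rule, not the specification's exact wording: a leading colon, at least two dot-separated elements, no colon elsewhere, and elements allowed to begin with a digit (unlike well-known names).

```python
import re

def is_valid_unique_name(name):
    """Rough sketch of the unique-name validation that libdbus and
    dbus-python apply: ":" before the first element only."""
    if not name.startswith(":") or len(name) > 255:
        return False
    elements = name[1:].split(".")
    # At least two non-empty elements of [A-Za-z0-9_-]; a second ":"
    # anywhere fails the character-class check.
    return (len(elements) >= 2 and
            all(re.fullmatch(r"[A-Za-z0-9_-]+", e) for e in elements))
```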
> if that is so, how does the activator read past the activation message to get
> to the next one, without dropping it?
I don't think it ever needs to read past: there's one activator per
activatable well-known name, and the only state it needs to hold is "how
long is my queue?". I don't think it needs to look at message contents.
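The minimal state an activator needs can be sketched like this; the class and method names are invented for illustration, and "start the service" stands in for whatever the activator actually triggers (e.g. a systemd unit).

```python
class Activator:
    """Toy model: one activator per activatable well-known name,
    holding only a queue length, never inspecting message contents."""

    def __init__(self, well_known_name):
        self.name = well_known_name
        self.queue_length = 0
        self.started = False

    def on_activation_message(self):
        # Any queued message for our name means "start the service";
        # only the count matters, not the contents.
        self.queue_length += 1
        if not self.started:
            self.started = True  # e.g. ask systemd to start the unit

    def on_service_ready(self):
        # Queued messages drain to the real service; reset our state.
        self.queue_length = 0
        self.started = False
```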