[systemd-devel] Compatibility between D-Bus and kdbus

Thiago Macieira thiago at kde.org
Tue Nov 25 18:32:16 PST 2014


On Wednesday 26 November 2014 01:25:18 Lennart Poettering wrote:
> > Thinking of non-system buses here.
> > 
> > If the variable is empty, I agree that it should have an equivalent of an
> > "autostart" mechanism, but I disagree on the solution and I also disagree
> > that distros should leave it empty.
> 
> Oh, no. No autostart please. No such concept exists in kdbus, and
> systemd/sd-bus will not support that either. In fact I refuse to support
> that even on dbus1 in sd-bus. Autostart is a kludge for systems where dbus
> is just an add-on, but that's completely out-of-focus for kdbus,
> systemd and sd-bus.

I didn't actually mean automatically starting the bus, sorry for the 
confusion. I meant automatic discovery only. Currently on dbus1, the 
"autolaunch:" pseudo-transport provides both automatic discovery and 
automatic starting if the bus isn't running.

Please also note that the autostart solution has a valid use case: a D-Bus 
application launched in an environment where no bus has been started before. 
I understand this is out of scope for kdbus, since after all a regular user 
won't be able to create a kdbus bus if one wasn't provided by a privileged 
process beforehand. In an environment where no kdbus bus was provided, the 
only alternative is to fall back to dbus1.

> Note that even on systemd we will set $DBUS_SESSION_BUS_ADDRESS,
> simply because classic libdbus and gdbus won't work without
> it. However, we will actually set it to a fixed value.

That's all I asked for. Whether the value is constant or not is not relevant, 
as long as it gets set.
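
In case it helps, this is roughly the lookup I care about on the client side 
(a minimal sketch, not libdbus or sd-bus code; the per-user fallback socket 
layout shown is only an assumption):

/* Sketch only: resolve the session bus address from the environment and
 * fall back to a dbus1-style per-user socket if it is unset. The fallback
 * path layout is an assumption; the important part is that the variable
 * is consulted first, whatever value systemd decides to put in it. */
#include <stdio.h>
#include <stdlib.h>

static const char *session_bus_address(char *buf, size_t len)
{
    const char *env = getenv("DBUS_SESSION_BUS_ADDRESS");
    if (env && *env)
        return env;                 /* whatever the session set up, kdbus or dbus1 */

    /* Fallback: a conventional dbus1 per-user socket (assumed layout). */
    const char *runtime = getenv("XDG_RUNTIME_DIR");
    if (runtime && *runtime) {
        snprintf(buf, len, "unix:path=%s/bus", runtime);
        return buf;
    }
    return NULL;                    /* no bus available; caller must cope */
}

int main(void)
{
    char buf[256];
    const char *addr = session_bus_address(buf, sizeof(buf));
    printf("would connect to: %s\n", addr ? addr : "(none)");
    return 0;
}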

> > For one thing, the fallback address is expected to be there if there's a
> > proxy bus running. The current autostart mechanism relies on X being
> > present, so the fallback won't be found unless X is running and something
> > registered the proxy's socket address there.
> > 
> > For another, it's good practice to have it set and not depend on
> > autostart.
> > 
> > For a third, hardcoding kernel paths in userspace sounds like a poor idea.
> > The kdbus mountpoint may be elsewhere, and whatever is creating buses may
> > not do it per user, but per session or by some other creation rule it has.
> 
> No, we don't support weird setups where kdbusfs is mounted
> elsewhere. This is a new API we introduce here, and we can very much
> make decisions where stuff is to be mounted.

You may not support it in systemd, but from reading the kernel API that could 
happen with another implementation.

> Env vars are a hack, due to the awful inheritance logic, and we should
> really avoid using them, except where necessary for compat, and that's
> precisely the level to which we'll support them in systemd.

Env vars actually match the concept of a session pretty well, except when 
nested sessions are created without resetting the necessary environment 
variables (e.g., a screen(1), tmux(1) or Xnest sub-session).

I know you're designing systemd so that it won't provide session buses, which 
is why it feels like a kludge to you. I won't argue here. I'm satisfied that 
the env var gets set.

> > > > would be interesting to have:
> > > No, this is not supported in the current versions of kdbus
> > > anymore. Emulation of these calls must happen client side if it shall
> > > be supported.
> > 
> > That wouldn't be kdbus, but systemd doing it. Since systemd is the one
> > that opens the bus, it can register the first connection and claim the
> > org.freedesktop.DBus service name, providing compatibility. So this isn't
> > a feature request for kdbus but a feature request for systemd.
> 
> We initially tried to support that, but it's awfully racy, since the
> driver calls and calls to other services wouldn't be executed in
> strict order anymore... We removed this again after figuring that out and
> decided that emulation can only happen client side, synchronous to the
> message stream, if we want to guarantee correct ordering.

I'm not asking for AddMatch and connection control mechanisms. The one I 
really want is StartServiceByName, since it can't be emulated client side. 
Moreover, starting services is systemd's raison d'ĂȘtre, so I feel it should be 
no problem for you to provide such a service.
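
To make concrete what I have in mind, here's a rough sketch of how a shim 
could forward that one call (assuming sd-bus, a systemd user instance 
reachable as org.freedesktop.systemd1 on the same bus, and a hypothetical 
"dbus-<name>.service" unit naming; none of that is specified anywhere, it's 
just an illustration):

/* Sketch only: forward a StartServiceByName("org.example.Foo", 0) request to
 * systemd's StartUnit(). Assumes sd-bus, and assumes the bus-activated
 * service is backed by a unit named "dbus-org.example.Foo.service"
 * (that unit-name mapping is an assumption, not a documented guarantee). */
#include <stdio.h>
#include <systemd/sd-bus.h>

static int start_service_by_name(sd_bus *bus, const char *busname)
{
    sd_bus_error error = SD_BUS_ERROR_NULL;
    char unit[256];
    int r;

    snprintf(unit, sizeof(unit), "dbus-%s.service", busname);

    r = sd_bus_call_method(bus,
                           "org.freedesktop.systemd1",          /* destination */
                           "/org/freedesktop/systemd1",         /* object path */
                           "org.freedesktop.systemd1.Manager",  /* interface   */
                           "StartUnit",                         /* member      */
                           &error, NULL,
                           "ss", unit, "replace");
    if (r < 0)
        fprintf(stderr, "StartUnit failed: %s\n", error.message);
    sd_bus_error_free(&error);
    return r;
}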

It would be nice if UpdateActivationEnvironment worked. This functionality was 
added for people who need to update variables like XDG_DATA_DIRS after 
starting the bus. If this one isn't present, we can report "not implemented" 
and be fine with it. We'll just have to tell people to configure their systemd 
user session environments properly.

The same goes for ReloadConfig, but I'd prefer to know whether the call failed 
(no config reloading is possible) or whether reloading happens automatically 
regardless of the call. ReloadConfig is important when there are new 
activatable services on a user's bus, such as newly-installed applications.
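
For both of these, the forwarding I'd hope for could look roughly like the 
sketch below (again assuming sd-bus and org.freedesktop.systemd1 on the user 
bus; mapping them onto SetEnvironment and Reload is my suggestion, not 
anything kdbus or systemd specifies):

/* Sketch only: map the two dbus-daemon driver calls onto systemd's Manager
 * API. The XDG_DATA_DIRS value is just an example argument. */
#include <stdio.h>
#include <systemd/sd-bus.h>

/* UpdateActivationEnvironment({"XDG_DATA_DIRS": "..."}) ->
 * org.freedesktop.systemd1.Manager.SetEnvironment(["XDG_DATA_DIRS=..."]) */
static int forward_update_activation_environment(sd_bus *bus)
{
    sd_bus_error error = SD_BUS_ERROR_NULL;
    int r = sd_bus_call_method(bus,
                               "org.freedesktop.systemd1",
                               "/org/freedesktop/systemd1",
                               "org.freedesktop.systemd1.Manager",
                               "SetEnvironment",
                               &error, NULL,
                               "as", 1, "XDG_DATA_DIRS=/usr/local/share:/usr/share");
    if (r < 0)
        fprintf(stderr, "SetEnvironment failed: %s\n", error.message);
    sd_bus_error_free(&error);
    return r;
}

/* ReloadConfig() -> org.freedesktop.systemd1.Manager.Reload() */
static int forward_reload_config(sd_bus *bus)
{
    sd_bus_error error = SD_BUS_ERROR_NULL;
    int r = sd_bus_call_method(bus,
                               "org.freedesktop.systemd1",
                               "/org/freedesktop/systemd1",
                               "org.freedesktop.systemd1.Manager",
                               "Reload",
                               &error, NULL, "");
    if (r < 0)
        fprintf(stderr, "Reload failed: %s\n", error.message);
    sd_bus_error_free(&error);
    return r;
}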

> > > The client side emulation can choose to either forward ReloadConfig
> > > and UpdateActivationEnvironment to the respective systemd calls, or
> > > just return some "not supported" error.
> > 
> > Can't do that. What if it's a kdbus system that is not systemd?
> 
> Well, again, return "not supported" then. I mean, currently there is
> no kdbus userspace implementation beyond kdbus, we cannot really
> discuss something that doesn't exist...

I assume you meant "beyond systemd" there.

> Note that on dbus1 systemd systems we actually never provided
> UpdateActivationEnvironment correctly (since services got forked off
> PID 1, instead of dbus-daemon but the call would alter dbus-daemon's
> env block, not systemd's one), but nobody ever noticed. I really
> think you should just return some "not supported" error or make it a
> NOP if you don't want to pass this on to systemd.

I *want* to pass this to systemd, somehow. So the first question is whether we 
can expect there to be a connection by systemd in the buses it creates. On the 
system bus, there's org.freedesktop.systemd1, so I expect that to continue. Can 
we expect a similar service on systemd-user buses?

The second question is the naming of such a connection. I'd rather have a 
connection name and interface name that are not specific to systemd. It seems 
to me that the neutral name to provide here is org.freedesktop.DBus, with a 
different interface name so that we have only those member functions that make 
sense on a kdbus-powered bus.

> > By the way, is there a way to ensure that a given connection is the first
> > connection? As soon as the bus creator is able to connect to the
> > /sys/fs/kdbus path, so is another process and therefore this other
> > process could maliciously acquire names it shouldn't.
> 
> When creating the bus the creator can pass policy to the kernel so
> that there is no time window where the bus is accessible and open to
> manipulation from untrusted clients.

How can you update the policies then? 

I imagine that the typical scenario here would be to treat connections from 
the same PID as "privileged", so they can install new policies and 
activators.

> > > if you want to create a new endpoint for an existing bus, then invoke
> > > that ioctl on the bus fd. The control file after all is unrelated to
> > > any bus, and thus wouldn't know which bus you mean if we'd allow
> > > invoking that ioctl on it.
> > 
> > Ok, so any application that connected to the "bus" bus can then create
> > custom endpoints. Correct?
> 
> You need privs (either CAP_IPC_OWNER or matching uid) for that.

Understood. But that means a user's application connecting to the bus in 
$DBUS_SESSION_BUS_ADDRESS is able to create new endpoints.

> > How does one get to install policies or activators on this custom bus if
> > the opening connection is a regular, non-privileged process?
> 
> the policy you can specify when you open the custom EP... (not sure I
> grok the question though).

You got it right. The question above about updating the policies remains, 
though.

I'm exploring the possibility of (ab)using endpoint creation to implement a 
P2P connection, i.e. using a bus where only two clients connect. In order to 
implement that, we'd need a convention for the two clients to find each 
other:

 - if the connection IDs are unique per endpoint, then you can assume that the 
bus server is ID 1

 - if they aren't unique or if it's impractical to assume that the endpoint 
creator gets ID 1, but service names are unique, then we simply establish a 
convention of what name to acquire (e.g., "org.freedesktop.DBus.P2PServer")

 - if service names aren't unique, then we come up with a random name and put 
it in the socket name that the other side needs to receive in order to connect 
anyway

In fact, the last solution doesn't even require a separate endpoint...
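
Here's a rough sketch of the second, name-based convention, using sd-bus as a 
generic client API (the bus address is a placeholder both peers would have to 
share out of band, and "org.freedesktop.DBus.P2PServer" is only the convention 
proposed above, not anything standardised):

/* Sketch of the name-based P2P convention: both peers connect to a private
 * bus whose address they already share out of band; the "server" side then
 * acquires an agreed-upon name so that the other peer can find it. */
#include <systemd/sd-bus.h>

static int p2p_server_attach(const char *address, sd_bus **ret)
{
    sd_bus *bus = NULL;
    int r;

    r = sd_bus_new(&bus);
    if (r < 0)
        return r;
    r = sd_bus_set_address(bus, address);   /* the private bus both sides know */
    if (r < 0)
        goto fail;
    r = sd_bus_set_bus_client(bus, 1);      /* talk to it as a bus, not a raw peer */
    if (r < 0)
        goto fail;
    r = sd_bus_start(bus);
    if (r < 0)
        goto fail;

    /* The convention: the server end owns this name, the client sends to it. */
    r = sd_bus_request_name(bus, "org.freedesktop.DBus.P2PServer", 0);
    if (r < 0)
        goto fail;

    *ret = bus;
    return 0;

fail:
    sd_bus_unref(bus);
    return r;
}

The client end would connect to the same address in the same way and simply 
direct its calls at that name.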

> What's the usecase?
> 
> I mean you can fake p2p connections by allocating a bus and only
> connecting two peers to it (busses are relatively cheap now), but I am
> not sure why.

Because kdbus has a few extra advantages that sockets don't have, like zero- 
or single-copy transfers from process to process. And because we can: if the 
necessary delta to support P2P is minimal, just a convention, then it's more 
practical to keep everything centralised than to maintain both an AF_UNIX 
socket handler and a kdbus fd parser.

> Well, I doubt the usecase for direct links.
> 
> I mean, the reasons for peer-to-peer links I am aware of are:
> 
> a) performance
> b) network transparency
> c) IPC before dbus-daemon is around
> 
> a) and c) don't apply on kdbus anymore. And kdbus is inherently not a
> network transport, hence you have to use AF_INET there anyway.

Plus d) connections between processes of different privileges (e.g., 
different UIDs).

In any case, applications are already using P2P buses. As a library writer, 
I'd like to provide them with a seamless transition. Applications shouldn't 
have to know whether the main bus is kdbus before deciding how to communicate.

I'm assuming here that kdbus performance is considerably better than P2P 
dbus1 over Unix sockets. If that's the case, then applications should prefer 
kdbus over P2P Unix sockets for large data transfers, because it's now 
efficient. If that's not the case, then the only reason to do this would be 
to simplify implementations.

> > Also, is there any way to ask an endpoint to stop accepting new
> > connections
> > without tearing down the existing ones?
> 
> You could just take away the access bits.

Thanks.

> > Because I thought that the activator may be one process for all possible
> > services. I'm guessing this is not the way you'd envisioned it. Otherwise,
> > if you have 200 activatable services, there are 200 connections by one or
> > more processes. There's no bus daemon to run out of fds here, but they
> > would count towards the user's system-wide file descriptor limit.
> 
> Yes, systemd maintains one fd per bus-activatable name, that is
> correct. And it bumps the NOFILES limits to make sure that works.

Correct me if I'm wrong, but doesn't the kernel impose a per-UID limit on the 
number of FDs open?

If so, wouldn't a user with tons of activatable services consume a 
considerable share of this limited resource, simply because the systemd user 
instance holds so many FDs open?
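
For reference, the NOFILE bump mentioned above would presumably look roughly 
like this (a sketch; setrlimit() only raises the per-process limit, and 
system-wide ceilings such as fs.file-max still apply):

/* Raise this process's soft RLIMIT_NOFILE to its hard limit. */
#include <stdio.h>
#include <sys/resource.h>

static int bump_nofile(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) < 0)
        return -1;
    rl.rlim_cur = rl.rlim_max;              /* soft limit up to the hard limit */
    if (setrlimit(RLIMIT_NOFILE, &rl) < 0)
        return -1;
    printf("NOFILE soft limit now %llu\n", (unsigned long long) rl.rlim_cur);
    return 0;
}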

> > > > The docs say that it only succeeds if there are no more messages, at
> > > > which
> > > > point no further messages will be accepted. There doesn't seem to be a
> > > > way
> > > > of doing a shutdown()-equivalent: stop reception of new messages but
> > > > still process the queued ones.
> > > 
> > > What's the precise usecase for this?
> > 
> > "I've been requested to exit, so I am going to exit now" This tells the
> > kernel to stop sending me messages, so I am able to exit. If there are
> > more after this, they'll be queued for the activator again, if there's
> > one, rejected otherwise.
> 
> Well, but you could just process what you want, and not read from the
> fd anymore. Then you exit, leaving the messages in the fd unread. The
> kernel will then activate the process again and pass the new messages
> to it. I am not really sure what the usecase is for telling the kernel
> explicitly that you don't want more messages...

Right, that works for activatable services.

For non-activatable services, any unhandled calls will get 
KDBUS_ITEM_REPLY_DEAD sent back to the sender. The only difference from what I 
was asking is timing: with a KDBUS_CMD_SHUTDOWN, the error condition would 
come as soon as the message was sent, as opposed to when the receiver closed 
the fd. I don't think that is a problem.

> > Then glibc should be fixed to have _POSIX_MONOTONIC_CLOCK set to 200809L.
> > That saves us a sysconf() call to verify whether it's present or not.
> > 
> > http://osxr.org/glibc/source/sysdeps/unix/sysv/linux/bits/posix_opt.h#0093
> > http://osxr.org/glibc/source/nptl/sysdeps/unix/sysv/linux/bits/posix_opt.h#0161
> > 
> > if you know someone influential there to make it happen, it would be most
> > welcome.
> 
> File a bug to glibc.

Right. I was just wondering if you knew someone influential so we can get this 
simple change in quickly.
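
For context, this is the check in question: if glibc defined 
_POSIX_MONOTONIC_CLOCK to a positive value such as 200809L, the runtime probe 
becomes dead code, whereas with the current value of 0 it has to run every 
time (a sketch, not code from any particular library):

/* Compile-time vs. runtime detection of the monotonic clock, per POSIX. */
#include <unistd.h>
#include <stdbool.h>

static bool have_monotonic_clock(void)
{
#if defined(_POSIX_MONOTONIC_CLOCK) && _POSIX_MONOTONIC_CLOCK > 0
    return true;                              /* guaranteed at compile time */
#else
    return sysconf(_SC_MONOTONIC_CLOCK) > 0;  /* must probe at runtime */
#endif
}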

> > > > === Wildcards ===
> > > > 
> > > > Are you sure that * not matching a dot is a good idea? What is the
> > > > rationale behind it?
> > > 
> > > Hmm, what precisely is this about? wildcards about?
> > 
> > Just wondering why the * does not match the dot. I'd assume the more
> > common case is to match a full prefix, and that includes matching dots.
> 
> Hmm? * in what precisely? missing the context here...

11.2 Wildcard names

"Policy holder connections may upload names that contain the wildcard suffix 
(".*"). That way, a policy can be uploaded that is effective for every
well-known name that extends the provided name by exactly one more level."

As I said, I kind of expect that the normal case is that the policy applies to 
the entire tree, not just one more level. For example, I'd expect a policy 
name like "org.example.app.*" to also cover "org.example.app.Viewer.Instance1", 
not only "org.example.app.Viewer".
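
To make the difference concrete, here's a small sketch of my reading of the 
quoted rule versus the whole-tree matching I'd have expected (illustration 
only, not code from kdbus; the names are made up):

#include <stdbool.h>
#include <string.h>

/* Documented rule: does `name` extend `prefix` ("org.example.app") by
 * exactly one more label? */
static bool matches_one_level(const char *prefix, const char *name)
{
    size_t plen = strlen(prefix);

    if (strncmp(name, prefix, plen) != 0 || name[plen] != '.')
        return false;
    return strchr(name + plen + 1, '.') == NULL;   /* no further dots allowed */
}

/* The behaviour I'd have expected instead: match the whole subtree. */
static bool matches_subtree(const char *prefix, const char *name)
{
    size_t plen = strlen(prefix);

    return strncmp(name, prefix, plen) == 0 && name[plen] == '.';
}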

-- 
Thiago Macieira - thiago (AT) macieira.info - thiago (AT) kde.org
   Software Architect - Intel Open Source Technology Center
      PGP/GPG: 0x6EF45358; fingerprint:
      E067 918B B660 DBD1 105C  966C 33F5 F005 6EF4 5358


