[systemd-devel] systemd and varlink

Lennart Poettering lennart at poettering.net
Thu Nov 28 09:39:39 UTC 2024


On Wed, 27.11.24 19:01, Umut Tezduyar Lindskog (Umut.Tezduyar at axis.com) wrote:

> Hello systemd,
>
> We are closely observing the varlink development and excited about
> it. Our guys who were at the all systems go conference mentioned
> that the Resolver is not part of the systemd family. Ref: Resolver
> (https://varlink.org/) . What is the upstream’s thoughts regarding
> service discovery?

So right now in systemd if you want to talk to a service, you just
need to know the socket you have to talk to. Sockets are typically
AF_UNIX sockets in the file system, hence you have pretty expressive
identifiers: file system paths. For example you need to know that
hostnamed's socket is /run/systemd/io.systemd.Hostname, and then you
can for example issue a command like the following from the command
line:

   varlinkctl introspect /run/systemd/io.systemd.Hostname

or call a method:

   varlinkctl call /run/systemd/io.systemd.Hostname io.systemd.Hostname.Describe '{}'
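Under the hood the framing of such calls is simple: each message is a
UTF-8 JSON object terminated by a single NUL byte. A minimal sketch of
just that framing (illustrative, not a full client):

```python
import json

# Minimal sketch of varlink message framing: one UTF-8 JSON object
# per message, terminated by a single NUL byte. Illustrative only,
# not a full varlink client.
def encode_call(method, parameters=None):
    msg = {"method": method}
    if parameters is not None:
        msg["parameters"] = parameters
    return json.dumps(msg).encode("utf-8") + b"\0"

def decode_message(buf):
    body, _, rest = buf.partition(b"\0")
    return json.loads(body), rest

wire = encode_call("io.systemd.Hostname.Describe", {})
msg, rest = decode_message(wire)
print(msg["method"])  # io.systemd.Hostname.Describe
```

Conceptually, a client just writes such NUL-terminated objects to the
AF_UNIX socket and reads replies framed the same way.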

If we implemented Varlink service discovery, it would allow two
things:

1. It would allow you to fire off varlink calls without specifying a
   socket at all, you could then do something like this:

   varlinkctl call - io.systemd.Hostname.Describe '{}'

   or so (this is purely hypothetical; it is not implemented after
   all). varlinkctl would then use service discovery to find the
   right socket to talk to. Which is great, because it means
   everything can be figured out from the fully qualified method name
   already. That would make varlink even nicer to use than D-Bus, I
   guess, because you'd have a single identifier for a method instead
   of the complexity D-Bus brings here: to invoke a method on dbus
   you need to know which bus, which service name, which object path,
   which interface, and which member name, i.e. a quintuplet of
   information instead of just a single identifier.

2. In theory, you could get a somewhat comprehensive list of all
   relevant sockets. So in a way it would be a bit like "cat
   /proc/net/unix" but only showing sockets that you can actually talk
   Varlink on. Kinda. More or less.
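The hypothetical resolution step from point 1 is essentially a lookup
from interface name to socket address. A toy sketch (the mapping table
is illustrative; nothing like this is implemented):

```python
# Toy sketch of the hypothetical service discovery from point 1:
# derive the interface from a fully qualified method name, then map
# the interface to a socket path. The table is illustrative only.
REGISTRY = {
    "io.systemd.Hostname": "/run/systemd/io.systemd.Hostname",
    "io.systemd.Machine": "/run/systemd/machine/io.systemd.Machine",
}

def resolve(method):
    # "io.systemd.Hostname.Describe" -> ("io.systemd.Hostname", "Describe")
    interface, _, member = method.rpartition(".")
    return REGISTRY[interface], member

sock, member = resolve("io.systemd.Hostname.Describe")
print(sock, member)  # /run/systemd/io.systemd.Hostname Describe
```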

Now, in systemd we currently haven't bothered with implementing the
service discovery part of Varlink, for a variety of reasons:

1. We want to use Varlink during earliest boot already, that's one of
   the fundamental goals we had with all this, i.e. before we start
   any services. But the service discovery daemon would of course
   itself be a service, hence we'd be back in the world we are in
   with dbus, where dbus must be up for us to do IPC, which is not an
   improvement.

2. One of the first places we used Varlink was the userdb stuff,
   i.e. this stuff: https://systemd.io/USER_GROUP_API/ – But for that
   the concept implies you have a multitude of services all
   implementing the very same interfaces, so that multiple subsystems
   can provide user records to the system. But that conceptually
   doesn't really fit into the varlink service discovery model, which
   assumes services are singletons: each interface has exactly one
   service that implements it. It after all translates one interface
   name into exactly one socket address.

3. In the Linux world things are often split up between per-system and
   per-user services, and we'd thus need two resolvers. Hence you'd
   probably not really get away with specifying only a single fully
   qualified name to invoke a method, you'd also need to specify a
   scope after all, so the gain is not as big as one might think.

4. The service discovery part is a bit underspecified, i.e. the
   address format is not part of the spec. Neither is the address the
   resolver listens on specified.
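To illustrate the mismatch from point 2: the userdb model fans one
query out to many providers of the same interface, which a
one-interface-to-one-socket resolver cannot express. A toy sketch with
made-up provider names and records:

```python
# Toy sketch of the userdb fan-out from point 2: several services
# implement the very same interface, and a client queries all of
# them and merges the answers. Names and records are made up.
providers = {
    "provider-a": {"alice": 1000},
    "provider-b": {"vu-demo": 65536},
    "provider-c": {"bob": 1001, "alice": 1000},
}

def lookup_user(name):
    # A resolver mapping one interface to exactly one socket could
    # not express this: every provider answers the same interface.
    for service, users in sorted(providers.items()):
        if name in users:
            yield service, users[name]

print(list(lookup_user("alice")))  # [('provider-a', 1000), ('provider-c', 1000)]
```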

Or to summarize: I think the service discovery is not really necessary
for our use case, and it probably needs more love and spec'ing out to
be truly useful.

I am not ruling out implementing this eventually. But I think for now
we are fine with expecting specification of both a socket path and a
fully qualified method name for doing method calls.

Something I'd really love to see is if we some day could teach
varlinkctl to actually enumerate /proc/net/unix and then filter out
sockets that aren't Varlink. We could for example use xattrs on the
entrypoint inode for that: i.e. unix sockets in the fs that have the
"user.varlink" xattr set to "1" would be discoverable as Varlink
sockets. That would be super nice, because we'd use basic OS concepts
only, it would just work, and it would be extensible to other
protocols too. Alas – the Linux kernel VFS currently hard refuses
setting user.* xattrs on socket inodes. It's an artificial limitation
afaics, so I have hopes it is eventually lifted in the kernel, but
right now, it's there.
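The kernel refusal is easy to demonstrate; on current kernels the VFS
rejects the call before even asking the filesystem (a quick sketch):

```python
import errno
import os
import socket
import tempfile

# Demonstrate the VFS limitation: user.* xattrs are refused on
# socket inodes (EPERM on current kernels), while they are allowed
# in principle on regular files and directories.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "demo.sock")
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.bind(path)
    try:
        os.setxattr(path, "user.varlink", b"1")
        result = "xattr set"  # not reached on current kernels
    except OSError as e:
        result = errno.errorcode[e.errno]
    finally:
        s.close()

print(result)  # EPERM on current kernels
```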

> From the NEWS:
>
> “systemd-machined gained a pretty complete set of Varlink APIs
> exposing its functionality. This is an alternative to the
> pre-existing D-Bus interface.”
>
> For example, how can systemd-machined be discovered?

machined's varlink socket is /run/systemd/machine/io.systemd.Machine,
and this is even documented in its man page. Use "varlinkctl
introspect /run/systemd/machine/io.systemd.Machine" to see what it offers.

> If the varlink interface is an alternative to the D-Bus, how is the
> authorization handled?

Authorization in Varlink is two-fold:

1. There's file ownership + ACLs on the socket entrypoint inode.

2. And there's polkit, just like for dbus. systemd authenticates
   varlink clients with polkit in various places. Right now this means
   talking D-Bus for this, but hopefully polkit one day learns varlink
   too, so that we can avoid that. But this should be invisible to
   clients, thankfully. (This implies that in systemd during early
   boot, until D-Bus + polkit are accessible, no polkit authentication
   can take place, which hence means systemd-provided varlink services
   only allow root to access them then – which should be fine however,
   given that regular users should not be able to log in that early
   anyway.)
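Layer 1 can be sketched in a few lines: gate connect() via the inode's
permission bits, and identify the connecting peer with SO_PEERCRED
(illustrative and Linux-only):

```python
import os
import socket
import struct
import tempfile

# Sketch of authorization layer 1: restrict access via the socket
# inode's permission bits, and identify the connecting peer with
# SO_PEERCRED. Illustrative and Linux-only.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "svc.sock")
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    os.chmod(path, 0o660)  # only owner and group may connect
    srv.listen(1)

    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(path)
    conn, _ = srv.accept()

    # struct ucred { pid_t pid; uid_t uid; gid_t gid; }
    data = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                           struct.calcsize("3i"))
    pid, uid, gid = struct.unpack("3i", data)
    print(uid == os.getuid())  # True: the peer is ourselves

    for sk in (conn, cli, srv):
        sk.close()
```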

> The JSON transcoding is great for debugging but it is consuming CPU
> cycles in production, especially important for constraint
> devices. Has there been any discussion on using alternative
> transcoding but switching to JSON with monitor connections.

Uh. That's made up, sorry. For large data, yes, it can be slower than
binary marshallings, but for smaller data (which varlink traffic is)
this is barely measurable.
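A rough illustration of "barely measurable": round-tripping a
varlink-sized message through JSON costs on the order of microseconds
even in Python (a quick measurement, not a rigorous benchmark):

```python
import json
import timeit

# Rough illustration: JSON encode+decode of a small, varlink-sized
# message costs on the order of microseconds, even in Python. Not a
# rigorous benchmark.
msg = {"method": "io.systemd.Hostname.Describe", "parameters": {}}

n = 100_000
per_call = timeit.timeit(lambda: json.loads(json.dumps(msg)), number=n) / n
print(f"{per_call * 1e6:.1f} microseconds per encode+decode round trip")
```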

The marshalling thing is often brought up by D-Bus apologists, but
it's a bogus argument. The fact that JSON demarshalling is a bit
slower than binary encodings is absolutely dwarfed by the slowness
that D-Bus brings with it because it forces so many roundtrips and a
single data pipe through the broker process. Roundtrips kill your
performance, JSON marshalling is entirely irrelevant for that.

People have done profiling on this. For example, zbus' Zeeshan Ali
posted about this on Mastodon some time ago, where he compared zbus
and varlink demarshalling and indeed noticed that for small messages
it's impossible to measure a marshalling difference between varlink
and dbus, while the roundtrip difference very much is measurable.

Lennart

--
Lennart Poettering, Berlin
