D-Bus policies, system bus and bus names

David Sommerseth dbus at lists.topphemmelig.net
Thu Jun 29 23:09:15 UTC 2017


Sorry for the long delay; I've been busy with a few parallel projects
and with getting the OpenVPN pieces to fit together.  And it is
beginning to look like it will work out.

On 31/05/17 13:04, Simon McVittie wrote:
> On Tue, 30 May 2017 at 23:07:08 +0200, David Sommerseth wrote:
>>     <allow own_prefix="net.openvpn.v3.backends"/>
> 
> This is fine. This is what own_prefix is intended for.
> 
>>     <allow send_interface="net.openvpn.v3.backends"/>
> [...]
>> the man page explicitly tells me NOT to only use send_interface in
>> an allow or deny rule.
>>
>> The trouble is that it seems to be lacking a send_destination variant
>> similar to own_prefix.
> 
> I'd be happy to review a spec and implementation for
> send_destination_prefix if you want to contribute one, although that
> won't help you until it's widely deployed.

Right, I'll try to find some time to dig into that.  That sounds like
a good idea.

> However, you can emulate it on existing dbus-daemons like this:
> 
> * In an appropriate <policy> to describe legitimate OpenVPN backends
>   (probably <policy user="root">), have
>   <allow own="net.openvpn.v3.any_backend"/> in addition to the own_prefix
>   rule you already have

When you say "any_backend", do you mean that literally ... or is it
just an example of "some reasonable value"?

> * In each backend, request the name net.openvpn.v3.any_backend without
>   using the DBUS_NAME_FLAG_DO_NOT_QUEUE flag (you may specify
>   ALLOW_REPLACEMENT and/or REPLACE_EXISTING, or not, whichever you prefer).
>   Do not consider DBUS_REQUEST_NAME_REPLY_IN_QUEUE to be an error.

Hmm ... so multiple backends can run in parallel?  I think I'm
beginning to grasp what you mean by "any_backend" now, though.

But does this mean that each backend process needs to own both its
proper backend name and the any_backend variant?

Which means, in code, something like this pseudo code:

    #include <stdio.h>
    #include <unistd.h>
    #include <gio/gio.h>

    /* Connect to the system bus */
    GDBusConnection *conn = g_bus_get_sync(G_BUS_TYPE_SYSTEM, NULL, NULL);

    /* Own a dedicated backend name, unique per process */
    gchar be_name[256];
    snprintf(be_name, sizeof(be_name),
             "net.openvpn.v3.backends.be%ld", (long) getpid());
    guint dedicated = g_bus_own_name_on_connection(conn, be_name,
                                G_BUS_NAME_OWNER_FLAGS_NONE,
                                callback_acq, callback_lost,
                                user_data, NULL);

    /* Own the shared "any_backend" opt-in name in addition;
     * callbacks and user_data are defined elsewhere */
    guint any = g_bus_own_name_on_connection(conn,
                                "net.openvpn.v3.backends.any_backend",
                                G_BUS_NAME_OWNER_FLAGS_NONE,
                                callback_acq, callback_lost,
                                user_data, NULL);

What I do not quite understand is in which header file
DBUS_NAME_FLAG_DO_NOT_QUEUE and DBUS_REQUEST_NAME_REPLY_IN_QUEUE are
defined.  I don't see them defined in any of the publicly available
header files; but I might have overlooked something.  And which of
these g_bus_*() functions needs the DO_NOT_QUEUE flag?
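
If those flags simply aren't exposed through GDBus, I suppose I could
fall back to calling RequestName on org.freedesktop.DBus directly and
check the reply code myself.  A rough sketch of what I have in mind,
reusing the "conn" from above (the numeric reply values are the ones
from the D-Bus specification):

    /* RequestName reply codes, as defined in the D-Bus specification */
    #define REQ_NAME_REPLY_PRIMARY_OWNER  1
    #define REQ_NAME_REPLY_IN_QUEUE      2

    GError *err = NULL;
    GVariant *reply = g_dbus_connection_call_sync(conn,
                                "org.freedesktop.DBus",
                                "/org/freedesktop/DBus",
                                "org.freedesktop.DBus",
                                "RequestName",
                                g_variant_new("(su)",
                                    "net.openvpn.v3.backends.any_backend",
                                    (guint32) 0),  /* flags: 0 => queueing allowed */
                                G_VARIANT_TYPE("(u)"),
                                G_DBUS_CALL_FLAGS_NONE,
                                -1, NULL, &err);
    if (reply)
    {
        guint32 ret = 0;
        g_variant_get(reply, "(u)", &ret);
        /* Per your advice: PRIMARY_OWNER and IN_QUEUE are both fine here */
        g_variant_unref(reply);
    }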


> * In clients, do not send any messages with destination
>   net.openvpn.v3.any_backend, because that would be useless (the client
>   cannot know which member of the queue will get the message).

Makes sense.

> 
> * In an appropriate <policy> to describe legitimate clients that will
>   communicate with the backends (probably
>   <policy context="default"> or <policy user="root">?), have
>   <allow send_destination="net.openvpn.v3.any_backend"/>.
>   If you want to do finer-grained access-control than "only root",
>   "only this daemon user" or "all users", you should use probably polkit
>   instead of adjusting the <policy>: see
>   <http://smcv.pseudorandom.co.uk/2015/why_polkit/> for background.
> 
> In effect, requesting net.openvpn.v3.any_backend is acting as an opt-in
> mechanism: by requesting that name, a backend opts in to allowing
> legitimate clients of OpenVPN backends to communicate with it.
> 
> You might think that the last allow rule I mentioned just allows sending
> messages whose destination field is literally net.openvpn.v3.any_backend,
> but that is not the case. Instead, it allows sending messages that will be
> delivered to *a connection that has requested* net.openvpn.v3.any_backend,
> either its primary owner or any potential owner in the queue - the
> destination field in the message may be the unique name, or any other
> well-known name owned by the same connection.

So that means the policy can be:

   <allow send_destination="net.openvpn.v3.backends.any_backend"
          send_interface="net.openvpn.v3.backends"
          send_type="method_call"/>
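
And on the owning side, if I have understood you correctly, it would
be along these lines (assuming I keep the opt-in name under my own
prefix, in which case the own_prefix rule should already cover the
extra name):

   <policy user="root">
     <allow own_prefix="net.openvpn.v3.backends"/>
     <allow own="net.openvpn.v3.backends.any_backend"/>
   </policy>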

While the proxy client does something like:

   guint be_pid = 1234;
   gchar be_name[256];
   snprintf(be_name, sizeof(be_name),
            "net.openvpn.v3.backends.be%u", be_pid);
   GDBusProxy *prx = g_dbus_proxy_new_for_bus_sync(G_BUS_TYPE_SYSTEM,
                                        G_DBUS_PROXY_FLAGS_NONE,
                                        NULL,        /* interface info */
                                        be_name,
                                        be_object_path,
                                        "net.openvpn.v3.backends",
                                        NULL,        /* cancellable */
                                        NULL);       /* error */

   GVariant *res = g_dbus_proxy_call_sync(prx,
                                        "Start",
                                        NULL,        /* parameters */
                                        G_DBUS_CALL_FLAGS_NO_AUTO_START,
                                        -1,          /* default timeout */
                                        NULL,        /* cancellable */
                                        NULL);       /* error */

What is unclear to me is whether the client also needs to call
g_dbus_proxy_get_connection() and then g_bus_own_name_on_connection()
against "net.openvpn.v3.backends.any_backend" in addition.

>> If I try this on the session bus, the policies don't seem to cause any
>> challenges at all.
> 
> The session bus is not a security boundary. session.conf allows sending
> and receiving all messages and owning all names, so further <allow> rules
> are pointless (unless a domain-specific <deny> rule has been added
> and the <allow> rules are opening holes in it).
> 
> The system bus is a security boundary between uids. system.conf allows
> all processes to send signals and receive any message that was validly
> sent, but does not allow sending method calls (or unrequested replies,
> but you should never send those) without further configuration.
> 
> The system bus should perhaps only allow sending *broadcast* signals,
> treating unicast signals as something that must be allowed explicitly, but
> the XML policy language can't currently express that rule
> (<https://bugs.freedesktop.org/show_bug.cgi?id=92853>).
> 
>> The master plan is that an "openvpn management daemon" is first started.
>>  A front-end client (which the user interacts with) passes a
>> configuration to this daemon over D-Bus, and tells it to start a tunnel
>> with that configuration.  This management daemon forks out and
>> "daemonizes" before and then starts the tunnel operations.
> 
> Using fork() without exec(), other than during very early process startup
> as part of BSD-style daemonization, is usually a bad idea: it has a
> tendency to leave global state in a weird mixture of what was correct
> for the parent, and what is correct for the child, unless every library
> in the process is extremely careful to use non-portable facilities
> like pthread_atfork().

Okay, I'll be wary of that.  The backend process is an independent
binary which is started via fork() + execve().  But I have three other
processes (configuration manager, session manager and log service) which
are fork()ed out before they receive any D-Bus calls at all.  But
this can be changed.
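
For reference, the backend start is roughly this (simplified; the
binary path and argument list are placeholders, not the real names):

    #include <unistd.h>

    extern char **environ;

    pid_t pid = fork();
    if (pid == 0)
    {
        /* Child: replace the process image with the backend binary */
        char *args[] = {"openvpn3-service-client", NULL};
        execve("/usr/libexec/openvpn3/openvpn3-service-client",
               args, environ);
        _exit(1);  /* Only reached if execve() failed */
    }
    /* Parent: pid > 0 is the child PID; pid < 0 means fork() failed */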

> If you are using any non-trivial libraries (for example the GLib main
> loop commonly used with D-Bus) then you should probably prefer to run
> a subprocess that is a separate executable, via fork()-and-exec(),
> or some wrapper API like posix_spawn() or GLib's GSubprocess.
> A side benefit of this arrangement is that it gives you better
> portability to Windows, which has APIs analogous to posix_spawn()
> but does not have a direct equivalent of fork().

Speaking of Windows ... is D-Bus on Windows a viable approach?  Right
now, Linux is the primary target.  But I need to bear in mind that we
might want to look into an open source Windows client too.

> In your plan, do user processes (the user's GNOME or KDE GUI or whatever
> other front-end client is relevant) communicate only with the centralized
> management daemon, or do they also communicate with the individual backends
> bypassing the centralized daemon? Either way is valid, but I need to give
> slightly different advice depending on which one you have chosen.

It will be a few different processes:

 - System services
   Log service (running as openvpn:openvpn)
   Configuration manager (running as openvpn:openvpn)
   Session manager (running as openvpn:openvpn)

The Session manager will start the "client session", which will run as
"root:root".

What happens on the end-user's side:

 - The logged-in user runs the client UI as $uid:$gid.  The client UI
   uploads the configuration files to the configuration manager and
   receives a unique D-Bus object path to that config.

 - Then the client requests a new tunnel from the Session manager,
   providing the configuration path for the tunnel.  In return it gets a
   D-Bus object path for the session.  This forks out the "client
   session" process, which runs as root (currently).

 - The client UI starts/stops/pauses/requests reconnects by
   communicating with the session manager, which in turn proxies these
   calls to the "client session" process.

 - Whenever the "client session" process needs some user interaction
   (username/passwords, smart card operations, etc.), it signals the
   session manager, which proxies the signal on to the client UI, which
   then interacts with the user.

All these processes may produce "Log signals", which the Log service
picks up and writes to a log file or displays on the console.
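
On the Log service side, I am thinking of a plain signal subscription
along these lines (the interface name, signal name and handler are just
my working names, nothing final):

    /* Listen for Log signals from any sender on the system bus;
     * log_signal_handler is a GDBusSignalCallback defined elsewhere */
    guint sub_id = g_dbus_connection_signal_subscribe(conn,
                                NULL,                  /* any sender */
                                "net.openvpn.v3.log",  /* interface (working name) */
                                "Log",                 /* signal member */
                                NULL,                  /* any object path */
                                NULL,                  /* no arg0 filter */
                                G_DBUS_SIGNAL_FLAGS_NONE,
                                log_signal_handler,
                                NULL, NULL);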


Another related topic ... I'm pondering better ways to kick off the
backend "client session" processes.  Would it be better to let D-Bus
start these processes through D-Bus activation?  Or should I aim for
polkit with some pkexec approach?
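
If I go the activation route, I gather it would mean installing a
service file under /usr/share/dbus-1/system-services/ for a well-known
name, so it could only cover a fixed entry point, not my dynamic
per-PID names.  Something like this, where all the names and paths are
only my guesses at this point:

   [D-BUS Service]
   Name=net.openvpn.v3.backends
   Exec=/usr/libexec/openvpn3/openvpn3-service-client
   User=root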

>> All this said, the "tunnel/session daemon" doesn't need to be on the
>> system bus.  But I'm concerned what happens if the "management daemon"
>> dies or gets restarted if the session bus is used.  Who does that
>> session bus belong to?  I do want to be able to have a possibility to
>> recover properly while avoiding tunnel interruptions.
> 
> The session bus belongs to either a (uid, machine) pair or a
> (uid, X11 display) pair, depending how D-Bus was configured. It is not
> appropriate for system-level processes, and processes with a different
> uid are not allowed to connect to it.
>
> Because the OpenVPN tunnels are providing networking facilities to the
> whole system, and because they presumably need to run with root
> privileges to be able to manipulate the routing table etc., I would say
> that they need to be on the system bus.

Makes sense.  Then it will be the system bus.


Thanks a lot for very valuable input!


-- 
kind regards,

David Sommerseth

