[systemd-devel] Linking containers

Lennart Poettering lennart at poettering.net
Sat Feb 28 09:33:36 PST 2015


On Fri, 27.02.15 07:42, Peter Paule (systemd-devel at fedux.org) wrote:

> > networkd is now shipped by default in a way that it does DHCP
> > server stuff on the host for the network interfaces exposed by
> > nspawn, and DHCP client stuff in the container for the interface
> > it sees there.
> 
> I'm not quite sure if I understand you correctly. To make this clear for
> me. These are TWO separate daemons: One running in "server mode" on the
> host. And one running inside the container in "client
> mode". Correct?

Well, there's no "mode" concept in networkd, and hence no "server
mode" and no "client mode". Instead, we simply ship with two network
configuration snippets by default that match against the veth ifaces
inside and outside of a container. In particular, it's this snippet
that makes sure the container side is configured:

http://cgit.freedesktop.org/systemd/systemd/tree/network/80-container-host0.network

And this one that ensures the host side is configured:

http://cgit.freedesktop.org/systemd/systemd/tree/network/80-container-ve.network

Both files are always shipped, but due to the [Match] section the
former will only do something in the container, the latter on the
host...
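
For illustration, the two snippets look roughly like this (a sketch
from memory; the exact directives may differ between versions, the
URLs above have the real thing):

    # 80-container-host0.network (container side): matches the host0
    # veth that nspawn creates inside the container
    [Match]
    Virtualization=container
    Name=host0

    [Network]
    DHCP=yes

    # 80-container-ve.network (host side): matches the ve-* links
    # that nspawn creates on the host
    [Match]
    Name=ve-*
    Driver=veth

    [Network]
    DHCPServer=yes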

> > This stuff works now, and is really what I recommend people
> > use on the host and in the container if they use nspawn. (Note that
> > networkd on the host only takes possession of these nspawn network
> > interfaces by default, hence it's completely safe to run this in
> > parallel with NM or some other network configuration scheme, which is
> > used for the other interfaces).
> 
> I'm not sure what you mean by "now". Does this work in 219
> already?

Yes. If you build systemd with networkd, then this just works. It's
the default config. The two snippets apply, and all is good.
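
For example, assuming both host and container run systemd with
networkd enabled (the container name and path below are just
examples), this is all it takes:

    # on the host: networkd picks up the ve-* side of the veth link
    # via 80-container-ve.network
    systemctl enable systemd-networkd.service
    systemctl start systemd-networkd.service

    # boot the container with a veth pair; inside the container,
    # networkd (and resolved) should likewise be enabled, so that
    # host0 is configured via DHCP
    systemd-nspawn -D /var/lib/machines/mycontainer --network-veth -b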

> > Then, for all containers that shall be linked one would specify the
> > same --network-bridge-make= parameter, so that they are all added to
> > the same bridge, and can thus communicate with each other. networkd
> > would be configured out-of-the-box to do dhcp server magic on the host
> > side for these kinds of bridges, so that IP configuration works out of
> > the box again, fully automatically, as long as the containers run
> > networkd or some other DHCP client.
> 
> Great! Maybe one would also use virtual switches like
> "vswitch" (http://openvswitch.org/,
> http://openvswitch.org/support/config-cookbooks/vlan-configuration-cookbook/).
> 
> It might be desirable to do something like this as well:
> http://blog.scottlowe.org/2014/01/23/automatically-connecting-lxc-to-open-vswitch/.
> What do you think?

Sounds like an OK idea, but to do this we'd need proper local
C/dbus/AF_UNIX APIs and I am not sure openvswitch provides those
currently. networkd/nspawn is not supposed to be something that is
glued together via shell scripts and calling external binaries, but
instead is supposed to do things properly, by using the appropriate
APIs.

> > With that in place you could easily create arbitrary groups of
> > containers that can communicate with each other via IP. Now, the
> > question is how they would find each other. For that I'd simply use
> > LLMNR, and each container should just pick a descriptive host name, and
> > that's it. LLMNR support is built into systemd-resolved. As long as each
> > container runs resolved (or any other LLMNR implementation) it will be 
> 
> Great! Does LLMNR work in a routed environment as well? Or does it make
> sense to extend resolved to use "ordinary" DNS for that?

The "LL" in LLMNR stands for "link-local", meaning it only works on 
broadcast domains. If we bind the containers together via a bridge
this creates a broadcast domain, and thus makes sure that everything
connected to the bridge is part of the broadcast domain.
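
As a rough example, assuming each container runs resolved, has a
descriptive hostname, and the resolving side uses nss-resolve (or
another LLMNR-aware resolver) in /etc/nsswitch.conf, names resolve
without any extra configuration (the hostnames below are made up):

    # inside one container: pick a descriptive name
    hostnamectl set-hostname webapp1

    # from another container on the same bridge: resolve and reach
    # it by that name via LLMNR
    getent hosts webapp1
    ping webapp1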

> Would it be possible / Does it make sense to extend the veth-logic, so
> that one could add the created interface to a specified bridge? This
> would make some more complex infrastructures with dual-homed machines
> possible. Or is there another way to do this?
> 
> [ Client ] -veth1->  [ Web Server 1 ] -veth2+-> [ Web App 1 ] -+-> [ Database 1 ]
>                                             |                  |
>                                             +-> [ Web App 2 ] -+
>                                             |
>                                             +-> [ Web Server 2 ]
> 
> Example
> 
>   systemd-nspawn --network-veth veth0,connected-to=br0 --network-veth veth1,connected-to=br1

Not sure I understand, but there's already --network-bridge= where you
can configure a bridge that the container's veth link should be added to?
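
I.e. something like this (the bridge and image names are made up; the
bridge itself could be created with a networkd .netdev file):

    # /etc/systemd/network/br0.netdev: have networkd create the bridge
    [NetDev]
    Name=br0
    Kind=bridge

    # add each container's host-side veth to that bridge
    systemd-nspawn -D /var/lib/machines/webapp1 --network-bridge=br0 -b
    systemd-nspawn -D /var/lib/machines/webapp2 --network-bridge=br0 -b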

Lennart

-- 
Lennart Poettering, Red Hat

