[systemd-devel] Linking containers

Peter Paule systemd-devel at fedux.org
Thu Feb 26 22:42:52 PST 2015


Excerpts from Lennart Poettering's message of 2015-02-25 19:57:10 +0100:
> dhcp client you mean?
Yes.
 
> In general, I am not really keen on doing IP configuration in
> nspawn. We have one solution for doing IP configuration already in
> systemd, and that's networkd, and it's a ton more powerful than
> anything we could add to nspawn.

Ok. I thought there was something done magically using the "outer"
networkd.
 
> networkd is by default shipped in a way now that it will do dhcp
> server stuff on the host for the network interfaces exposed by nspawn,
> and dhcp client stuff in the container for the interface it sees
> there. 

I'm not quite sure I understand you correctly, so just to make this
clear for me: these are TWO separate daemons, one running in "server
mode" on the host and one running inside the container in "client
mode". Correct?
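For context, this is roughly how the `.network` files that systemd ships express it (a sketch; exact contents vary by version):

```ini
# Host side, roughly /usr/lib/systemd/network/80-container-ve.network:
# matches the ve-* veth peers that nspawn creates and serves DHCP there.
[Match]
Name=ve-*
Driver=veth

[Network]
DHCPServer=yes

# Container side, a separate file (80-container-host0.network):
# matches the host0 interface and runs the DHCP client.
[Match]
Name=host0
Virtualization=container

[Network]
DHCP=yes
```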

> This stuff works now, and is really what I recommend people
> use on the host and in the container if they use nspawn. (Note that
> networkd on the host only takes possession of these nspawn network
> interfaces by default, hence it's completely safe to run this in
> parallel with NM or some other network configuration scheme, which is
> used for the other interfaces).

I'm not sure what you mean by "now". Does this already work in 219?

> I am quite interested to find ways to make all these things work without
> using too many container-specific technologies. More specifically this
> means I will always prefer a solution that could be made to work for
> KVM VMs the same way as for containers.

Sounds good.

> Then, for all containers that shall be linked one would specify the
> same --network-bridge-make= parameter, so that they are all added to
> the same bridge, and can thus communicate with each other. networkd
> would be configured out-of-the-box to do dhcp server magic on the host
> side for these kind of bridges, so that IP configuration works out of
> the box again, fully automatically, as long as the containers run
> networkd or some other DHCP client.
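Something close to this can already be approximated with the existing `--network-bridge=` option, which expects a pre-created bridge (a sketch; the bridge and machine names are hypothetical, and the commands need root):

```shell
# Create a shared bridge on the host:
ip link add name br-app type bridge
ip link set br-app up

# Start each container attached to that bridge; nspawn creates a veth
# pair per container and enslaves the host side to br-app itself:
systemd-nspawn -D /var/lib/machines/web1 --network-bridge=br-app
systemd-nspawn -D /var/lib/machines/web2 --network-bridge=br-app
```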

Great! Maybe one could also use virtual switches like Open vSwitch
(http://openvswitch.org/,
http://openvswitch.org/support/config-cookbooks/vlan-configuration-cookbook/).

It might also be desirable to do something like this:
http://blog.scottlowe.org/2014/01/23/automatically-connecting-lxc-to-open-vswitch/.
What do you think?
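The Open vSwitch equivalent of the linked LXC approach would look roughly like this (a sketch; the bridge name, the host-side veth name, and the VLAN tag are hypothetical, and the commands need root and a running ovs-vswitchd):

```shell
# Create an OVS bridge and attach the host-side veth endpoint of a
# container to it:
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 ve-mycontainer

# Optionally tag the port with a VLAN, as in the OVS VLAN cookbook:
ovs-vsctl set port ve-mycontainer tag=10
```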

> With that in place you could easily create arbitrary groups of
> containers that can communicate with each other via IP. Now, the
> question is how they would find each other. For that I'd simply use
> LLMNR, and each container should just pick a descriptive host name, and
> that's it. LLMNR support is built into systemd-resolved. As long as each
> container runs resolved (or any other LLMNR implementation) it will be 

Great! Does LLMNR work in a routed environment as well? Or would it
make sense to extend resolved to use "ordinary" DNS for that?
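(For what it's worth, LLMNR as specified in RFC 4795 uses link-local multicast, so by design it does not cross routers.) Resolving a container's name via resolved would look something like this (a sketch; "webapp1" is a hypothetical container hostname):

```shell
# Ask resolved to look up the name; on the same link this is answered
# via LLMNR by the container's own resolved instance:
systemd-resolve webapp1
```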

> I hope this makes some sense...

Absolutely.

Would it be possible, or does it make sense, to extend the veth logic
so that one could add the created interface to a specified bridge? This
would make more complex infrastructures with dual-homed machines
possible. Or is there another way to do this?

[ Client ] -veth1->  [ Web Server 1 ] -veth2+-> [ Web App 1 ] -+-> [ Database 1 ]
                                            |                  |
                                            +-> [ Web App 2 ] -+
                                            |
                                            +-> [ Web Server 2 ]

Example:

  systemd-nspawn --network-veth veth0,connected-to=br0 --network-veth veth1,connected-to=br1

Thanks a lot.
