[systemd-devel] Linking containers

Lennart Poettering lennart at poettering.net
Wed Feb 25 10:57:10 PST 2015


On Tue, 24.02.15 11:00, Peter Paule (systemd-devel at fedux.org) wrote:

> Hi,
> 
> while playing around with "systemd-nspawn" a lot in the last few days, two
> things I'm really missing are links between containers like dkr supports
> (https://docs.docker.com/userguide/dockerlinks/) and getting an IP within
> the container when running a single application like /usr/sbin/nginx with
> no container-internal dhcp server.

DHCP client, you mean?

In general, I am not really keen on doing IP configuration in
nspawn. We already have one solution for IP configuration in systemd,
and that's networkd, and it's a ton more powerful than anything we
could add to nspawn.

The general philosophy I try to follow with nspawn is that we invent
as few new concepts and as little new configuration as possible.
Specifically, for network setup this means we use DHCP and all the
other standard technologies, but avoid inventing any additional IP
configuration propagation protocol beyond that.

networkd now ships by default with configuration that does DHCP
server duty on the host for the network interfaces exposed by nspawn,
and DHCP client duty in the container for the interface it sees
there. This works now, and is really what I recommend people use on
the host and in the container when they use nspawn. (Note that
networkd on the host only takes possession of these nspawn network
interfaces by default, hence it's completely safe to run it in
parallel with NM or some other network configuration scheme that
handles the other interfaces.)
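
For reference, the relevant bits are two .network files that networkd
ships (exact paths and contents may differ a bit between versions),
roughly:

  # on the host: /usr/lib/systemd/network/80-container-ve.network
  [Match]
  Name=ve-*
  Driver=veth

  [Network]
  # 0.0.0.0/28 means: pick a free /28 range from a system-wide pool
  Address=0.0.0.0/28
  DHCPServer=yes
  IPMasquerade=yes

  # in the container: /usr/lib/systemd/network/80-container-host0.network
  [Match]
  Virtualization=container
  Name=host0

  [Network]
  DHCP=yes

So each container-facing veth gets its own little subnet, with NAT to
the outside, without any manual address planning.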

I am quite interested in finding ways to make all these things work
without using too many container-specific technologies. More
specifically, this means I will always prefer a solution that could
be made to work for KVM virtual machines the same way as for
containers. Solutions involving env vars (like docker does it) are
hence less than ideal, since they do not translate nicely to VMs...

> Are there plans to support something like that in future versions? Or are
> there better options to do the same things?
> 
> Example:
> 
>   systemd-nspawn -x -M db1 -D /var/lib/machines/centos-postgresql /usr/bin/postgresql
>   systemd-nspawn -x -M web_app2 -D /var/lib/machines/centos-nginx --link-with db1 /usr/sbin/nginx
> 
> I know that there are some new options introduced with systemd 219,
> but I was not able to make them work for my use case.
> 
> * --port
> * --private-network
> * --network-veth
> 
> Did I understand correctly that I need to install systemd-networkd or
> some other dhcp daemon to get an IP address for now?

Correct. You don't really have to configure it though, as mentioned.
It will just work out of the box.
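
Concretely, a minimal sketch (assuming the container image has
networkd enabled inside it, and that you boot the image rather than
run a single binary):

  # on the host
  systemctl enable systemd-networkd
  systemctl start systemd-networkd

  # boot the container with a virtual Ethernet pair; networkd inside
  # the container then gets an address via DHCP from networkd on the host
  systemd-nspawn -b -M web_app2 -D /var/lib/machines/centos-nginx --network-veth

If you run only a single application in the container instead of
booting it, something in there still has to act as DHCP client.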

> # The use case #
> 
> Here's my use case. I would like to run everything in containers to better
> separate web applications which use different software stacks - ruby,
> python, native code etc. Security is important to me, but it is not my
> main reason for using containers.
> 
> [ Client ] -->  [ Web Server 1 ] -+-> [ Web App 1 ] -+-> [ Database 1 ]
>                                   |                  |
>                                   +-> [ Web App 2 ] -+
>                                   |
>                                   +-> [ Web Server 2 ]
> 
> My idea is to run an nginx webserver as a reverse proxy in front of some
> web applications and other web servers. It should be responsible for
> routing requests to the web applications/servers.

So, my proposal for solving this would be the following: we could
extend nspawn's --network-bridge= setting so that it allows creating
a bridge interface when the first container referencing it is
started, ref-counted by the containers using it. Maybe via a new
switch --network-bridge-make=foobar or so, where "foobar" is the name
of the bridge to create.

Then, for all containers that shall be linked one would specify the
same --network-bridge-make= parameter, so that they are all added to
the same bridge and can thus communicate with each other. networkd
would be configured out of the box to do DHCP server magic on the
host side for this kind of bridge, so that IP configuration again
works fully automatically, as long as the containers run networkd or
some other DHCP client.
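
Until such a switch exists, something close to it can be wired up
manually: create the bridge with networkd and hand it to the existing
--network-bridge= option. A sketch, where the bridge name br-linked
and the address range are arbitrary choices of mine:

  # /etc/systemd/network/br-linked.netdev
  [NetDev]
  Name=br-linked
  Kind=bridge

  # /etc/systemd/network/br-linked.network
  [Match]
  Name=br-linked

  [Network]
  Address=10.23.42.1/24
  DHCPServer=yes
  IPMasquerade=yes

  # then join both containers to the same bridge:
  systemd-nspawn -b -M db1 -D /var/lib/machines/centos-postgresql --network-bridge=br-linked
  systemd-nspawn -b -M web_app2 -D /var/lib/machines/centos-nginx --network-bridge=br-linked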

With that in place you could easily create arbitrary groups of
containers that can communicate with each other via IP. Now, the
question is how they would find each other. For that I'd simply use
LLMNR: each container should just pick a descriptive host name, and
that's it. LLMNR support is built into systemd-resolved. As long as
each container runs resolved (or any other LLMNR implementation) it
will be found by the other containers on the network, and it can find
the other containers. The containers would simply reference each
other by host name, the way the unix gods intended. There would be no
further concept of passing configuration data about who talks to whom.
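
To illustrate with the names from the example above (assuming
nss-resolve is available in the images):

  # inside each container: pick a descriptive name and run resolved
  echo db1 > /etc/hostname
  systemctl enable systemd-resolved

  # let glibc route host lookups through resolved, which speaks LLMNR;
  # i.e. in /etc/nsswitch.conf:
  #   hosts: files resolve dns

  # any other container on the same bridge can then find it by name:
  getent hosts db1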

With this solution we have something that works the same way for
containers and for KVM: the peers that shall operate together join
the bridge, and everything else is figured out via the DHCP and LLMNR
implementations. In fact, you could even put together crazy
combinations here, where you run a Windows KVM guest together with a
Linux container, and since both Windows and Linux (via resolved)
speak DHCP and LLMNR they would be able to talk to each other.

I hope this makes some sense...

Lennart

-- 
Lennart Poettering, Red Hat

