Network Setup With runC Containers
Note: This post is part of the series Beginner’s Guide to runC
So far we went through setting up basic containers with runC, managed them, and even ran them as regular users without root privileges. What we are missing at this point is the big piece of networking. Of course containers can be used without any network connectivity, but many use cases need the containers to talk to the host and/or other containers.
We need to understand that when we start a runC container with the spec config, there is no network of any sort (except for the loopback interface, which is not enough to talk to others). We need to set up the network inside the network namespace the container runs in. For this we can use the ‘ip’ command with its ‘netns’ option. I’ll put the configuration commands in bold, so we can tell them apart from the commands that check/verify the results. Let’s list the network namespaces we have created so far:
$ sudo ip netns ls
$
There is nothing so far, so let’s create one and list again:
$ sudo ip netns add alpine_network
$ sudo ip netns ls
alpine_network
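As an optional check, we can already run commands inside the new namespace with ‘ip netns exec’; a freshly created namespace should contain nothing but a loopback interface, still down:
$ sudo ip netns exec alpine_network ip addr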
Then we add a veth pair, which is basically a virtual network cable between the host and the container: anything that goes in on one side comes out from the other side.
$ sudo ip link add name veth-host type veth peer name veth-alpine
$ sudo ip link ls
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 0a:50:85:5d:6a:b4 brd ff:ff:ff:ff:ff:ff
3: veth-alpine@veth-host: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 1a:da:ab:13:da:e9 brd ff:ff:ff:ff:ff:ff
4: veth-host@veth-alpine: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether aa:4a:f6:10:c6:90 brd ff:ff:ff:ff:ff:ff
So here we now have 2 more network interfaces added to the default namespace (which is used by the host). Let’s move veth-alpine to the network namespace we created.
$ sudo ip link set veth-alpine netns alpine_network
$ sudo ip link ls
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 0a:50:85:5d:6a:b4 brd ff:ff:ff:ff:ff:ff
4: veth-host@if3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether aa:4a:f6:10:c6:90 brd ff:ff:ff:ff:ff:ff link-netnsid 0
Alright. We moved the 3rd interface (veth-alpine) to another network namespace, so we do not see it here. But is it where it is supposed to be?
$ sudo ip -netns alpine_network link ls
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: veth-alpine@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 1a:da:ab:13:da:e9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
As we can see, veth-alpine is now in alpine_network (the ‘-netns alpine_network’ option tells the ip command to use that namespace instead of the default one). Next, let’s give it an IP address and bring it up:
$ sudo ip netns exec alpine_network ip addr add 192.168.10.1/24 dev veth-alpine
$ sudo ip netns exec alpine_network ip link set veth-alpine up
$ sudo ip netns exec alpine_network ip link set lo up
$ sudo ip -netns alpine_network addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: veth-alpine@if4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
link/ether 1a:da:ab:13:da:e9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.10.1/24 scope global veth-alpine
valid_lft forever preferred_lft forever
So we set an IP address on the interface and brought it up (we also brought up the lo loopback interface, which we did not have to do, but it is good to have if the container will talk to itself over loopback). The last command verifies this.
We should also bring up the other side of the veth (veth-host), so they can communicate.
$ sudo ip link set veth-host up
Now let’s set up routes on the host and in the container to finish this setup.
$ sudo ip route add 192.168.10.1/32 dev veth-host
$ sudo ip route
default via 172.31.16.1 dev ens5
169.254.0.0/16 dev ens5 scope link metric 1002
172.31.16.0/20 dev ens5 proto kernel scope link src 172.31.27.197
192.168.10.1 dev veth-host scope link
$ sudo ip -netns alpine_network route
192.168.10.0/24 dev veth-alpine proto kernel scope link src 192.168.10.1
$ sudo ip netns exec alpine_network ip route add default via 192.168.10.1 dev veth-alpine
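If you list the namespace’s routes once more, it should now also show the default route we just added (the same routing table we will see from inside the container shortly):
$ sudo ip -netns alpine_network route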
We have finished this network setup. Now for the most basic network test: can I ping the container IP from the host?
$ ping 192.168.10.1
PING 192.168.10.1 (192.168.10.1) 56(84) bytes of data.
64 bytes from 192.168.10.1: icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from 192.168.10.1: icmp_seq=2 ttl=64 time=0.028 ms
^C
--- 192.168.10.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.028/0.029/0.030/0.001 ms
Yay! We are now able to ping the container IP from the host.
Run our container in the new Network Namespace
So we have a new network namespace with an interface in it, and routes configured so we can talk to it. But where is the container? It’s time to put a container in this new network namespace.
Network namespaces are defined under ‘/var/run/netns’. Let’s check:
$ ls -l /var/run/netns
total 0
-r--r--r--. 1 root root 0 May 20 15:14 alpine_network
$
To set the network namespace of the container, we modify the config.json file as follows:
"namespaces": [
    {
        "type": "pid"
    },
    {
        "type": "network",
        "path": "/var/run/netns/alpine_network"
    },
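For reference, here is a rough sketch of how the complete "namespaces" array might look after this edit; the exact list that ‘runc spec’ generates can vary between runc versions, and only the "network" entry gets the extra "path":
"namespaces": [
    {
        "type": "pid"
    },
    {
        "type": "network",
        "path": "/var/run/netns/alpine_network"
    },
    {
        "type": "ipc"
    },
    {
        "type": "uts"
    },
    {
        "type": "mount"
    }
],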
Let’s start an alpine container with this config and check the interfaces/routes. Note that I’m not using the ‘-netns’ option for the ‘ip’ commands here, since they run inside the container:
# runc run alpine
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: veth-alpine@if4: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether 1a:da:ab:13:da:e9 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.1/24 scope global veth-alpine
valid_lft forever preferred_lft forever
inet6 fe80::18da:abff:fe13:dae9/64 scope link
valid_lft forever preferred_lft forever
/ # ip route
default via 192.168.10.1 dev veth-alpine
192.168.10.0/24 dev veth-alpine scope link src 192.168.10.1
/ #
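Another way to confirm from the host side is to ask which processes are currently in this namespace; it should list the PID of the shell running inside the alpine container:
$ sudo ip netns pids alpine_network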
Tomcat runC container with networking setup
In the Quick start to runC with Tomcat container article, I had said "Of course at this point network is not configured, container is running without any network setup whatsoever, but this gives a good idea of running a simple container with runc". So I’m going to revisit that Tomcat container. Let’s change the config.json file of the Tomcat container to join this network namespace, with no other changes; we’re just adding the ‘path’ to the network namespace instead of leaving it blank. Since the Tomcat container joins the same namespace, it will be reachable at the same IP, 192.168.10.1:
"type": "network",
"path": "/var/run/netns/alpine_network"
Start the container:
# runc run tomcat
Using CATALINA_BASE: /usr/local/tomcat
Using CATALINA_HOME: /usr/local/tomcat
…
So the big test is: can I reach Tomcat from the host machine? In another window, I’m going to use the ‘curl’ command line client to send an HTTP request from the host to the Tomcat running inside the container.
$ curl http://192.168.10.1:8080
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>Apache Tomcat/9.0.19</title>
Yes.
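By the way, once you are done experimenting, the whole setup can be torn down with a single command; deleting the namespace also removes the veth pair, because destroying one end of a veth destroys its peer as well:
$ sudo ip netns delete alpine_network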
This series about runC continues here → runC and Docker Together
Happy containerizing…