Simulate high latency network using Docker containers and “tc” commands

Kazushi Kitaya
Aug 29, 2019


With Docker containers, it is easy to simulate a high-latency network on a single machine. In this post, we will see how.

Preparation

We will make the following four files.

  • docker-compose.yml
  • client/Dockerfile
  • server/Dockerfile
  • server/main.go

docker-compose.yml

We are going to modify network settings inside the containers, so the NET_ADMIN capability is needed.

version: "3.0"
services:
  client:
    container_name: client
    build: ./client
    tty: true
    cap_add:
      - NET_ADMIN
  server:
    container_name: server
    build: ./server
    cap_add:
      - NET_ADMIN

client/Dockerfile

We need to install iproute2 to use the tc command. We will also install the ping command for testing purposes.

FROM ubuntu:18.04
WORKDIR /work
RUN apt-get update && \
    apt-get install iproute2 iputils-ping -y

server/Dockerfile

Next, the server side. You could use an Nginx container or the like, but here we will use a simple HTTP server written in Go. Note that we install iproute2 here, too.

FROM ubuntu:18.04
WORKDIR /work
RUN apt-get update && \
    apt-get install software-properties-common -y && \
    add-apt-repository ppa:longsleep/golang-backports && \
    apt-get update && \
    apt-get install golang-go iproute2 -y
ADD main.go .
CMD ["go", "run", "main.go"]

server/main.go

A very simple HTTP server.

package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("Hello, World!"))
	})
	log.Fatal(http.ListenAndServe(":80", nil))
}

Adding Latencies

Now, we are ready.

First, let’s get the two containers running with the following command.

docker-compose up -d

Then, execute the following two commands. These add a 100 ms delay to the outbound traffic of each container.

$ docker exec client tc qdisc add dev eth0 root netem delay 100ms
$ docker exec server tc qdisc add dev eth0 root netem delay 100ms

Let’s see whether things went well. We expect the RTT between the two to be about 200 ms.

$ docker exec client ping server
PING server (172.19.0.3) 56(84) bytes of data.
64 bytes from server.tc_default (172.19.0.3): icmp_seq=1 ttl=64 time=202 ms

It looks good!
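When you are done experimenting, the netem qdisc can be inspected and removed. The commands below are a sketch, assuming the two containers from this post are still running.

```shell
# Show the qdisc currently attached to eth0 inside the client container
docker exec client tc qdisc show dev eth0

# Remove the netem qdisc, restoring the default queueing behavior
docker exec client tc qdisc del dev eth0 root
docker exec server tc qdisc del dev eth0 root
```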

What is happening

When we create a Docker container, a pair of virtual network interfaces (a veth pair) is created. One end belongs to the host machine, and the other belongs to the container (network namespaces are used here). The host-side end is connected to a virtual bridge, which enables the containers to communicate with each other. For more information on Docker networking, please refer to the official Docker documentation.

On Linux machines, outbound network traffic first enters a queue. Usually, packets are sent out to the network as fast as possible in FIFO order.

The behavior of this queue is configurable. For example, we can add latencies (as we did in this post), or we can randomly discard a portion of the traffic. This mechanism is called a “queueing discipline,” or qdisc. The “tc qdisc … netem …” commands we executed configure netem, a qdisc built for network emulation, to add the latency.
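Besides a fixed delay, netem can emulate several other network impairments. The commands below are a sketch of a few options documented in tc-netem, to be run inside one of the containers (they are not part of the original setup; “change” replaces the settings of the qdisc we already added).

```shell
# Delay of 100 ms with +/- 20 ms of random jitter
tc qdisc change dev eth0 root netem delay 100ms 20ms

# Randomly drop 5% of outgoing packets
tc qdisc change dev eth0 root netem loss 5%

# Duplicate 1% of outgoing packets
tc qdisc change dev eth0 root netem duplicate 1%
```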

Qdiscs are configured per network interface. In our example, we configured the qdisc attached to the eth0 network interface. By default, a Docker container has an eth0 virtual network interface, which is connected to the host machine.

Note that we ran the tc command in each of the two containers. This is because a qdisc only affects outbound traffic. We executed the command on both sides to create an actual 100 ms latency in each direction, which better reflects real-world environments.

This post is also available in Japanese.
