The magic of linking Docker containers while bypassing environment variables…

Do you know anyone who hasn’t yet heard about Docker? You really do?! Ok… A bit of wiki: Docker is an open-source platform for managing containerized virtualization.

Docker is fast… Docker is smart… Docker is swanky!

In effect, Docker changes the rules of the noble game of managing server configuration, building applications, running server code, handling dependencies, and much more. See for yourself…

Docker encourages isolated containers, each of which runs a single command. That is the architecture Docker works best with. The only thing the containers have to know is how to find each other. In other words, we simply need to know a container’s FQDN and port, or its IP and port, no more than we would need for any external service.

The recommended way to pass these coordinates to a process run by Docker is through environment variables. Familiar examples of this approach that have nothing to do with Docker: DATABASE_URL, the convention in the Rails world, and NODE_ENV, the convention in Node.js.

So environment variables let an application inside a container find, say, its database with no extra effort, provided the developer wired it up that way. And although configuring applications through environment variables is exactly the right thing, applications are sometimes poorly written, and we still need to run them.
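For instance, such a variable can be handed to a container right at startup. A minimal sketch, where the image name myapp and the credentials are made up:

$ docker run -d \
    -e DATABASE_URL=postgres://user:secret@db.example.com:5432/mydb \
    myapp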

Docker. Environment variables. Links.

Docker helps when we want to interlink two containers, and it gives us… wait for it… right! Docker links. You can read up on how they work at Docker.com. In a nutshell, it looks like this:

  1. Name a container when you start it: docker run -d --name db training/postgres. From that point on, we can refer to this container as db.
  2. Start the second container, linking it to the first: docker run -d -P --name web --link db:db training/webapp python app.py. The --link name:alias argument has two parts: name is the name of the container being linked, alias is the name under which the launched container will see it.
  3. This has two consequences: first, the web container receives a set of environment variables pointing to the db container; second, an entry for the db alias appears in /etc/hosts of the web container, pointing to the IP address where the database container is running. The set of environment variables available in the web container looks like this:
DB_NAME=/web/db
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_PROTO=tcp
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
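
By the way, the /etc/hosts side of the link is just as easy to see. One way is a throwaway container with the same link (the exact IP will differ on your machine):

$ docker run --rm --link db:db training/webapp cat /etc/hosts
…
172.17.0.5    db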

But what if an application isn’t at all ready to read such variables? Keep calm. socat is coming.

socat

Meet socat, a Unix utility for, among other things, port forwarding. The key idea: with its help, an application inside a container will believe that, say, the database is running on the same host at its standard port, exactly as on the developer’s local machine, even though it actually lives in another container. socat is lightweight, as low-level tools tend to be, and it doesn’t get in the way of the container’s main process.
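To make this concrete, here is the kind of single socat call the script below will end up generating for our database example, taking the address from the link variables shown earlier:

# Listen on localhost:5432 and forward every connection
# to the database container at 172.17.0.5:5432.
socat TCP4-LISTEN:5432,fork,reuseaddr TCP4:172.17.0.5:5432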

Let’s take a closer look at the environment variables that Docker links inject into a container. We are specifically interested in this one: DB_PORT_5432_TCP=tcp://172.17.0.5:5432. The variable carries all the data we need: the port to listen on at localhost (the 5432 in DB_PORT_5432_TCP) and the coordinates of the database itself (172.17.0.5:5432).

A variable like this is injected into the container for every link: a database, Redis, any supporting service.

Let’s write a wrapper script that does the following:

  * scans the list of environment variables and picks out the ones we are interested in;
  * runs socat for each of them;
  * starts the wrapped command and delegates control to it.

As soon as the wrapped command finishes, the script should terminate all the socat processes.

The Script

The standard set -e header tells the shell to abort the script on the first error. In other words, it enforces the fail-fast behaviour a developer has come to expect.

#!/bin/bash
set -e

Since additional socat processes get spawned, we need to keep track of them, so that we can terminate them when the time comes and wait until they exit.

# Remember the PID of a spawned child process.
store_pid() {
  pids=("${pids[@]}" "$1")
}

Next, we need a function that starts a child process and remembers its PID:

# Start a single command in the background and remember its PID.
start_command() {
  echo "Running $1"
  bash -c "$1" &
  pid="$!"
  store_pid "$pid"
}

# Read commands line by line from stdin and start each one.
start_commands() {
  while read cmd; do
    start_command "$cmd"
  done
}
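
A quick sanity check of these helpers, with throwaway sleep commands standing in for socat:

start_command "sleep 10"
start_command "sleep 20"
echo "PIDs collected so far: ${pids[*]}"
wait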

The main idea is to extract (external_port, internal_ip_address, internal_port) tuples out of the environment variables whose names end in _TCP, and to turn them into a set of commands that start socat.

# DB_PORT_5432_TCP=tcp://172.17.0.5:5432  ->  5432,172.17.0.5,5432
to_link_tuple() {
  sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/\1,\2,\3/'
}

# 5432,172.17.0.5,5432  ->  socat -ls TCP4-LISTEN:5432,fork,reuseaddr TCP4:172.17.0.5:5432
to_socat_call() {
  sed 's/\(.*\),\(.*\),\(.*\)/socat -ls TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3/'
}

# Process substitution keeps the while-read loop in the current shell,
# so the pids array survives (a plain pipeline would run it in a subshell).
start_commands < <(env | grep '_TCP=' | to_link_tuple | sort | uniq | to_socat_call)

env prints the list of environment variables, grep keeps only the relevant lines, to_link_tuple extracts the triple we need, sort | uniq makes sure we don’t start two socats for a single service, and to_socat_call builds the command we were after. Feeding the result to start_commands through process substitution, rather than at the end of a plain pipeline, keeps the pids array in the main shell.
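
We can trace the pipeline by hand on the variable we looked at earlier:

$ echo 'DB_PORT_5432_TCP=tcp://172.17.0.5:5432' | to_link_tuple
5432,172.17.0.5,5432
$ echo '5432,172.17.0.5,5432' | to_socat_call
socat -ls TCP4-LISTEN:5432,fork,reuseaddr TCP4:172.17.0.5:5432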

We also want the child socat processes to be terminated as soon as the main process exits. We do that by sending them SIGTERM:

onexit() {
  echo Exiting
  echo sending SIGTERM to all processes
  kill ${pids[*]} &>/dev/null
}
trap onexit EXIT

We start the main process with the exec command. This hands control over to it: the process takes over our STDOUT and STDIN and begins receiving signals directly.

exec "$*"

Feel free to review the entire script assembled in one piece below.
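
Here it is, with the small fixes noted along the way:

#!/bin/bash
set -e

store_pid() {
  pids=("${pids[@]}" "$1")
}

start_command() {
  echo "Running $1"
  bash -c "$1" &
  pid="$!"
  store_pid "$pid"
}

start_commands() {
  while read cmd; do
    start_command "$cmd"
  done
}

to_link_tuple() {
  sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/\1,\2,\3/'
}

to_socat_call() {
  sed 's/\(.*\),\(.*\),\(.*\)/socat -ls TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3/'
}

start_commands < <(env | grep '_TCP=' | to_link_tuple | sort | uniq | to_socat_call)

onexit() {
  echo Exiting
  echo sending SIGTERM to all processes
  kill ${pids[*]} &>/dev/null
}
trap onexit EXIT

exec "$@"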

So What?

Embed the script into the container, for example at /run/links.sh, and for now start the container like this:

$ docker run -d -P --name web --link db:db training/webapp /run/links.sh python app.py

Ta-da! The application inside the container now sees PostgreSQL at 127.0.0.1, port 5432.
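
A quick way to verify this, without assuming a psql client is present in the image, is bash’s built-in /dev/tcp pseudo-device (docker exec requires Docker 1.3 or newer):

$ docker exec web bash -c 'exec 3<>/dev/tcp/127.0.0.1/5432 && echo port is open'
port is open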

Entrypoint

We don’t want to have to remember the script every time, though. We can simply set it as an entry point via the ENTRYPOINT directive in the image’s Dockerfile. As a result, any command started in such an image gets prefixed with the entry point first.

Let your Dockerfile contain the following:

ADD ./links.sh /run/links.sh
ENTRYPOINT ["/run/links.sh"]

and start the container by simply passing commands to it. This way everyone can be sure that the application sees the services of linked containers as if they were running on localhost.
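
For example, assuming the image with this entry point was built and tagged webapp-links (a made-up name), starting it looks just like before, minus the script:

$ docker run -d -P --name web --link db:db webapp-links python app.py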

But what if we don’t have access inside an image?

This raises an interesting puzzle: how do we achieve the same convenient proxying of services if we have no access inside the image? In other words, imagine we have an image and we are assured it has socat inside, but our script isn’t there and we cannot embed it. What we can do is make the launch command as complex as we like. So how do we get the wrapper inside?

Sharing part of the file system with the host comes to the rescue. That is, we can create a /usr/local/docker_bin directory in the host file system, put links.sh there, and run the container as follows:

$ docker run -d -P \
    --name web \
    --link db:db \
    -v /usr/local/docker_bin:/run:ro \
    training/webapp \
    /run/links.sh python app.py

As a result, any script we put into /usr/local/docker_bin becomes available to the container at launch.

Please note the ro option: it deprives the container of the ability to write to the /run directory.
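
The effect is easy to observe; the exact error text may differ, but an attempt to write should fail along these lines:

$ docker exec web touch /run/test
touch: cannot touch '/run/test': Read-only file system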

Alternatively, we could derive a new image from the original one and simply add the files there.
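
A minimal sketch of that approach, with links.sh placed next to the Dockerfile and webapp-links again as a made-up tag:

$ cat > Dockerfile <<'EOF'
FROM training/webapp
ADD ./links.sh /run/links.sh
ENTRYPOINT ["/run/links.sh"]
EOF
$ docker build -t webapp-links .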

Bottom line

With links and a little socat on top, you get a much more convenient way to connect containers than with links alone.

By way of an epilogue

A careful and savvy reader has surely noticed that the ambassadord project does largely the same thing, and that reader has a point. A user who just needs to get a system working will certainly prefer a well-tried turnkey solution. This article, though, was written for the audience that likes doing things once by hand: like a good exercise, it doesn’t just hand you the answer, it also trains you.