A consul, a vault and a docker walk into a bar.

Pierre Carion
8 min read · Apr 30, 2017


When you develop a non-trivial application, you often need to split it into multiple components. I will try to avoid the term micro-service so as not to start any religious war here but, at the bare minimum, you often need to access a database, reach some external services, or maybe use some cloud-based services, like S3, to store files.

In that kind of scenario, you quickly face two boring problems:

  • how to access those services. For example, what are the host and port of your MySQL database?
  • where to store your credentials. In the case of MySQL, where do you store your username and password?

One common way to solve both problems is to store this information in a configuration file. That works, but:

  • it’s not very safe. We have all heard horror stories where passwords were inadvertently committed to a GitHub repo
  • it’s kind of tedious to change service URLs when needed, or to set up different environments like development, staging and production

HashiCorp, an SF-based company, not only has a cool name but also offers two open-source tools to address that kind of problem:

  • consul — Service Discovery and Configuration Made Easy
  • vault — A Tool for Managing Secrets

To test those tools, you can either run them locally on your machine or, as all the cool kids do these days, you can install them in a docker container.

I guess I am cool then because I plan to show you how to do just that.

As a side note, since both consul and vault are written in Go, they don’t require any installation procedure per se: each tool comes as a single binary that you only need to put in a directory in your PATH.
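For instance, a local install is nothing more than downloading the zip from HashiCorp’s releases site and dropping the binary on your PATH (the version number and target directory below are just examples):

# illustrative local install; adjust the version and the target directory to your setup
CONSUL_VERSION=0.8.1
wget "https://releases.hashicorp.com/consul/${CONSUL_VERSION}/consul_${CONSUL_VERSION}_linux_amd64.zip"
unzip "consul_${CONSUL_VERSION}_linux_amd64.zip" -d /usr/local/bin
consul version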

In this article, I will be using docker and docker-compose to set up 3 containers:

  • consul.server: the consul agent
  • vault.server: the vault server
  • bash.test: a simple bash container to test access to consul and vault

A prerequisite for this article is a basic knowledge of docker and docker-compose.

Consul container

Things are easy here as there is already an official docker image for consul on Docker Hub.

The only specific (but not required) configuration I did was to use different external ports for consul, in order not to conflict with a consul agent that might also be running on my host.

I could have let docker pick the external ports for me, but it would then have been a little less convenient to test: I would have had to query docker to get the actual port numbers being used.

The docker-compose configuration for consul is:

consul:
  container_name: consul.server
  command: agent -server -bind 0.0.0.0 -client 0.0.0.0 -bootstrap-expect=1
  image: consul:latest
  volumes:
    - ./etc/consul.server/config:/consul/config
  ports:
    - "9300:9300"
    - "9500:9500"
    - "9600:9600/udp"

The config file that we pass to consul through a docker volume is config.json and contains:

{
  "datacenter": "dc1",
  "log_level": "DEBUG",
  "server": true,
  "ui": true,
  "ports": {
    "dns": 9600,
    "http": 9500,
    "https": -1,
    "serf_lan": 9301,
    "serf_wan": 9302,
    "server": 9300
  }
}

If you need more information about those configuration parameters, the consul agent configuration documentation covers them in detail.

With this setup, the consul server will be reachable from the host over HTTP on port 9500.
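A quick sanity check from the host is to hit the consul HTTP API directly; the status endpoint, for instance, should answer with the address of the current cluster leader:

# from the host: consul's HTTP API is mapped to port 9500
curl http://localhost:9500/v1/status/leader
# a healthy single-node cluster answers with something like "172.17.0.2:9300"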

Vault container

Vault also has an official docker image available on Docker Hub.

The docker-compose configuration section for vault is:

vault:
  container_name: vault.server
  image: vault
  ports:
    - "9200:8200"
  volumes:
    - ./etc/vault.server/config:/mnt/vault/config
    - ./etc/vault.server/data:/mnt/vault/data
    - ./etc/vault.server/logs:/mnt/vault/logs
  cap_add:
    - IPC_LOCK
  environment:
    - VAULT_LOCAL_CONFIG={"backend":{"consul":{"address":"${LOCAL_IP}:9500","advertise_addr":"http://${LOCAL_IP}","path":"vault/"}},"listener":{"tcp":{"address":"0.0.0.0:8200","tls_disable":1}}}
  command: server

I know. The VAULT_LOCAL_CONFIG is a bit messy.

In a more readable form, this variable would read like this:

{
  "backend": {
    "consul": {
      "address": "${LOCAL_IP}:9500",
      "advertise_addr": "http://${LOCAL_IP}",
      "path": "vault/"
    }
  },
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }
}

Better, right?

  • Vault needs a storage mechanism to persist its data in a secure way. You could use the file system, but consul is also an option; here we tell vault to use consul as its backend.
  • We could have used a file to store this configuration but, as you can see in the value of two of those parameters, we need the IP address of the host machine, which we store in the LOCAL_IP environment variable. We could avoid that by using a unix socket for consul, or by creating a shared network, but… I’ll let you explore that yourself.
  • In the docker-compose file, we change the external port of vault to 9200 instead of the standard 8200, to avoid any conflict with a potential vault running locally.
  • hint: the LOCAL_IP variable can be set in a .env file in the directory where you start docker-compose; a minimal example follows. Just saying.
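A minimal .env sketch could simply be (the address below is an example; use your host’s actual LAN IP, e.g. the one reported by ifconfig):

# .env, read automatically by docker-compose from the current directory
# the address is only an example; put your own host IP here
LOCAL_IP=192.168.0.16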

Note:

Based on a github issue, I had to add the advertise_addr parameter to the environment variable, even though this is not really documented.

Without that parameter, I would get this error when starting the vault server:

Error detecting redirect address: Get http://192.168.0.16:9500/v1/agent/self: EOF
Error initializing core: missing redirect address

let’s start docker!

With those containers defined, you can start everything with docker-compose.

docker up!

You can ignore the bash.test for now.

a few docker commands

There are a few docker commands which are helpful to get an idea of what’s going on.

  • docker-compose up — to start the containers described in the docker-compose file
  • docker ps --all — to get a list of all your containers
  • docker ps --all --format "{{.ID}} {{.Status}}: {{.Names}} {{.Command}}" — the same information in a more terse format
  • docker logs <name of container> — to see the logs of a given container, e.g. docker logs consul.server
  • docker-compose down — to shut down all the containers
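Put together, a typical session while experimenting looks roughly like this (-d simply runs the containers in the background):

docker-compose up -d        # start consul.server, vault.server and bash.test in the background
docker ps --all --format "{{.ID}} {{.Status}}: {{.Names}}"
docker logs consul.server   # see what the consul agent is up to
docker-compose down         # tear everything down when you are done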

Both the consul and vault containers are based on the alpine image: a minimal docker image based on Alpine Linux.

It is so minimal that there is no bash, but there is an ash shell that you can connect to in order to explore the container:

docker exec -it vault.server ash

There is not much you can really do from that shell, which is why we created a…

bash test container

A more convenient way to explore consul and vault is to create a docker container, with a full bash, and preconfigured to access consul and vault.

To do that, we can write a simple Dockerfile:

FROM ubuntu:16.04
MAINTAINER Pierre Carion <pcarion@gmail.com>

ENV VAULT_VERSION 0.7.0
ENV CONSUL_VERSION 0.8.1

RUN apt-get update \
  && apt-get install -y \
    build-essential \
    git \
    curl \
    wget \
    vim \
    net-tools \
    iputils-ping \
    dnsutils \
    zip \
    unzip \
  && wget -O /tmp/vault.zip "https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip" \
  && unzip -d /bin /tmp/vault.zip \
  && chmod 755 /bin/vault \
  && rm /tmp/vault.zip \
  && wget -O /tmp/consul.zip "https://releases.hashicorp.com/consul/${CONSUL_VERSION}/consul_${CONSUL_VERSION}_linux_amd64.zip" \
  && unzip -d /bin /tmp/consul.zip \
  && chmod 755 /bin/consul \
  && rm /tmp/consul.zip \
  && apt-get clean \
  && rm -rf /var/lib/apt/lists/*

VOLUME "/mnt/data"

CMD ["/bin/bash"]

Long file, but no rocket science in there:

  • we install a set of development tools we may want to use in that kind of container (compiler, git, network tools)
  • we download the consul and vault binaries and unzip them into /bin — we will have to update VAULT_VERSION and CONSUL_VERSION when new versions of those programs are released, and then rebuild the docker image

The command to build the docker image is:

docker build -t bash.test ./docker_images/bash.test

The last step is to add this container to our docker-compose.yml file.

bash_test:
  container_name: bash.test
  image: bash.test
  environment:
    - CONSUL_HTTP_ADDR=${LOCAL_IP}:9500
    - VAULT_ADDR=http://${LOCAL_IP}:9200
  volumes:
    - ./etc/bash.test/data:/mnt/data
  command: tail -f /dev/null

Pretty straightforward too:

  • we define two environment variables, CONSUL_HTTP_ADDR and VAULT_ADDR, used respectively by the `consul` and `vault` CLIs to reach their servers
  • we use a dummy command (tail -f /dev/null) that blocks forever, preventing the container from exiting

Once we restart all our containers with docker-compose, we can attach a bash shell to our bash.test container:

docker exec -it bash.test bash

In that shell, you can then verify that you have access to consul and vault:

testing access to consul and vault from bash container
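Concretely, the check boils down to running both CLIs against their servers (the exact output depends on the consul and vault versions):

# inside the bash.test container
consul members   # uses CONSUL_HTTP_ADDR; should show one alive server node
vault status     # uses VAULT_ADDR; fails as long as the vault is not initialized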

Access to consul seems to be working, and the error from vault… is actually also a good sign.

When a vault server has just started, it is in an uninitialized state, as described in the vault documentation.

The good news is that we now have a bash shell to complete that initialization.

Initializing and unsealing the Vault

The first step is to initialize the vault.

initializing the vault
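With the vault 0.7 CLI, the initialization is a single command (the output is not reproduced here; your unseal keys and root token will obviously differ):

# inside bash.test; VAULT_ADDR already points at vault.server
vault init
# prints 5 unseal keys and an "Initial Root Token"; store them somewhere safe,
# they cannot be retrieved later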

This step gives you the keys needed to unseal the vault and the root token needed to access it from a client.

Time to unseal:

unseal the vault
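Unsealing means feeding back enough of those keys (three out of five with the default settings), along these lines:

# repeat with three different unseal keys (the default threshold)
vault unseal <unseal-key-1>
vault unseal <unseal-key-2>
vault unseal <unseal-key-3>
vault status   # should now report Sealed: false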

Pretty anticlimactic right?

The only piece of information that should give you joy is: Sealed: false.

For our test, we will be using the initial root token provided during the vault initialization BUT that’s not the proper way to use that token. We’ll see in another article how to properly use tokens in applications.

In order for the client to work, you can set the token in an environment variable:

export VAULT_TOKEN=2ca82ba1-840d-908f-e089-1cd539cb9ace

We can now write to and read from our new secret store:

read and write to the store
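With the generic secret/ backend that vault mounts by default, a write and a read look like this (the path and the key names are just examples):

# the secret/myapp path and the keys are purely illustrative
vault write secret/myapp db_user=app db_password=s3cr3t
vault read secret/myapp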

More to come

In this article, we have only explained how to set up consul and vault inside docker containers and verified that the setup was working properly.

In a real-life solution, things have to be automated a bit more; we’ll see in another article how to manage tokens and how to access this private data from a real application.

Feel free to post comments if you have any questions or email me at pcarion@gmail.com.

