Running a Docker application in Autopilot

Luc Juggery
@lucjuggery
Jan 4, 2017

In a previous article, we explained how Docker containers can be run on Joyent’s Triton platform, and we also started to talk about the Autopilot pattern. We’ll now go deeper to better understand Autopilot and how ContainerPilot helps set it up.

TL;DR

The Autopilot pattern moves all the orchestration responsibilities to the application itself. This approach enables the application to run independently without relying on an external orchestrator.

Our simple test application

To keep things really simple and to understand what is happening under the hood, we’ll build a simple API in Node.js using the expressjs framework. We’ll also use MongoDB as the underlying database. The API will let us create and list todo items through HTTP GET and POST methods (once again a todo list, but I think it’s a simple enough example to clarify things).

Building the API

To make it really simple, the API is defined in a single index.js file in which we connect to the db (using the MONGODB_URL environment variable, or mongodb://db/todos if it is not provided) and define the GET and POST routes on /todos. The API runs on port 1337.

The index.js file is the following one.

const express = require('express'),
  bodyParser = require('body-parser'),
  MongoClient = require('mongodb').MongoClient,
  app = express(),
  url = process.env.MONGODB_URL || 'mongodb://db/todos';

// Configure app to use bodyParser
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

// Connection to db on startup
let db = null;
MongoClient.connect(url, (err, conn) => {
  if (err) {
    process.exit(1);
  } else {
    db = conn;
  }
});

// Define routes
// List all elements in the list
app.get('/todos', (req, res) => {
  db.collection('todo').find().toArray((err, result) => {
    if (err) {
      return res.status(500).json({error: err.message});
    } else {
      return res.json({todos: result});
    }
  });
});

// Create a new element in the list
app.post('/todos', (req, res) => {
  let text = req.body.text;
  db.collection('todo').insert({text: text}, (err, result) => {
    if (err) {
      return res.status(500).json({error: err.message});
    } else {
      return res.sendStatus(201);
    }
  });
});

app.listen(1337);

Note: the reason why the default url is mongodb://db/todos is because db is the name of the service we’ll define later on (hint: in the docker-compose.yml file).

The package.json file (containing the dependencies) is the following one.

{
  "name": "api",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "author": "",
  "license": "",
  "dependencies": {
    "body-parser": "^1.15.2",
    "express": "^4.14.0",
    "mongodb": "^2.2.16"
  }
}

Also, make sure you have a running instance of MongoDB and the MONGODB_URL environment variable correctly set. For instance, if Mongo is running locally on the default port (27017), MONGODB_URL should be set to mongodb://localhost/todos (todos is the name of the db but we can use anything we want).
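If you do not have a local MongoDB instance at hand, a quick option is to start the same mongo:3.4 image we’ll use later (a sketch, assuming Docker is already installed; the container name is arbitrary):

$ docker run -d --name mongo-test -p 27017:27017 mongo:3.4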

In a terminal, go into the folder containing the index.js and package.json files, then install the dependencies and start the API with the following commands.

$ npm install
$ export MONGODB_URL=mongodb://localhost/todos
$ npm start

Let’s test the api by creating a new todo and making sure it can be retrieved.

$ curl localhost:1337/todos
{"todos":[]}
$ curl -XPOST -H 'Content-Type: application/json' -d '{"text":"coding"}' localhost:1337/todos
Created
$ curl localhost:1337/todos
{"todos":[{"_id":"586625ce750f15b01d8f6006","text":"coding"}]}

Everything seems good.

Dockerizing the API

Let’s create a simple Dockerfile based on mhart/alpine-node:6.9.2 image. Basically, the dependencies are compiled in node_modules and the sources are copied over to the /app folder.

FROM mhart/alpine-node:6.9.2

# Copy list of dependencies
COPY package.json /tmp/package.json
# Install dependencies
RUN cd /tmp && npm install
# Copy dependencies libraries
RUN mkdir /app && cp -a /tmp/node_modules /app/
# Copy source code
COPY . /app
# Expose API port to the outside
EXPOSE 1337
# Change working directory
WORKDIR /app
# Launch application
CMD ["npm","start"]

Note: the package.json file is copied to a temporary folder where the dependencies are installed before they are copied to the /app folder. Then the sources are copied to /app. This order is important as it avoids recompiling the dependencies each time a change is made in the code.
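To keep the build context small and make sure a locally installed node_modules folder is not sent to the Docker daemon, a .dockerignore file can be added next to the Dockerfile (a minimal sketch, not part of the original project files):

$ cat > .dockerignore <<'EOF'
node_modules
npm-debug.log
EOF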

Docker Compose file for our application

The api is ready and can be connected to a MongoDB instance. Let’s use the following docker-compose.yml file:

version: '2'
services:
  db:
    image: mongo:3.4
    restart: always
  api:
    build: ./api
    restart: always
    ports:
      - "1337:1337"

This file is very simple as it contains only 2 services, api and db.

  • the api service is built from the code we defined above
  • the db service is based on the mongo:3.4 official image from the Docker Hub

As we do not provide any MONGODB_URL environment variable, the api connects to the db service using db as the hostname.

Let’s run our Docker Compose application.

$ docker-compose up

From another terminal, let’s make sure it’s working fine.

$ curl localhost:1337/todos
{"todos":[]}
$ curl -XPOST -H 'Content-Type: application/json' -d '{"text":"coding"}' localhost:1337/todos
Created
$ curl localhost:1337/todos
{"todos":[{"_id":"58662aef0ebda60010cb5911","text":"coding"}]}

We can create and list todos, so everything is fine.

Adding HashiCorp’s Consul to the picture

Consul is a great piece of software developed by HashiCorp (the company behind Vagrant, Terraform, …). It is defined as a tool made of multiple components, used for discovering and configuring services in an infrastructure. It provides several key features:

  • Service Discovery
  • Health Checking
  • Key/Value Store
  • Multi Datacenter

A Consul cluster is made up of several Consul agents, an agent being a daemon process which runs either as a server or as a client.

  • Consul servers are where data is stored and replicated. They maintain the cluster state and reply to client queries. Each server exposes DNS and HTTP endpoints (a couple of example queries are shown below).
  • A Consul client communicates with the Consul servers. It declares services and discovers services declared by other clients. It also reports regular health checks of services.
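For instance, once the stack described later in this article is running, registered services can be queried through the Consul HTTP API (host and port assume the 8500 mapping used in the docker-compose.yml below):

# List all the services known to Consul
$ curl http://localhost:8500/v1/catalog/services
# Get the health status of the db service
$ curl http://localhost:8500/v1/health/service/db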

The picture below shows, at a very high level, the architecture of a Consul cluster running in HA across 2 data centers.

In our application, to keep things simple, we’ll use:

  • one datacenter
  • one Consul server within this datacenter
  • one Consul client running in each of the application’s services

Of course this configuration is not suitable for a production environment, where several Consul servers (3, 5, …) would be needed to avoid any SPOF (Single Point Of Failure).

Moving our application into Autopilot

Docker Compose is the scheduler of our application. It’s in charge of scheduling the services onto the Docker hosts according to constraints and labels, and it’s also responsible for setting up volumes and networks. In our example, as we only consider one host, the scheduling is quite simple.

When it comes to the orchestration part, the Autopilot pattern moves all the orchestration responsibilities into the application itself rather than relying on an external orchestrator. ContainerPilot (a helper written in Go by Joyent and available on GitHub) is used to help implement the Autopilot pattern.

ContainerPilot will be added to each of the application’s services. It communicates with the application’s service on one side and with Consul on the other side. It is in charge of:

  • the service registration within Consul
  • defining how the health check of the service needs to be done
  • defining the dependencies of the current service
  • the management of the service workflow by calling configuration scripts for preStart / onChange / postStop actions

The picture below gives a high level view of the interaction between the application, ContainerPilot and Consul.

As we’ll see below, ContainerPilot runs as PID 1 and forks/execs the service’s process.
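A quick way to verify this once the application is up is to look at the process tree of a container (a sketch; docker-compose ps -q is used to resolve the container id):

# containerpilot should show up as PID 1, with mongod and the consul agent as its children
$ docker top $(docker-compose ps -q db)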

Adding a Consul server service

Let’s first add a consul service in the docker-compose.yml file:

consul:
  image: consul:0.7.2
  restart: always
  ports:
    - 8500
  dns:
    - 127.0.0.1
  command: agent -server -client=0.0.0.0 -bootstrap -ui

We specify in the command that the agent runs as a server. The -bootstrap flag is used so this agent can be elected as the leader right away.

The Consul management interface is then available on port 8500.

Only Consul is visible as we have not yet defined any other services. We’ll come back to this interface later on.

Note: only one Consul server is used here. In order to set up an HA Consul cluster, the -bootstrap option should be changed to -bootstrap-expect 3.
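For reference, with 3 Consul server containers, the command option of each server would look something like the following (a sketch; -retry-join consul relies on Compose’s DNS so the servers can find each other):

command: agent -server -client=0.0.0.0 -bootstrap-expect 3 -retry-join consul -ui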

Setting ContainerPilot for the db service

In order to use ContainerPilot for our db service, we will create a new Docker image based on the MongoDB one, install the Consul and ContainerPilot binaries, and define a containerpilot.json configuration file.

We start by creating a Dockerfile that extends the mongo:3.4 official image.

FROM mongo:3.4

On top of it, we need to install curl and unzip as they are needed to install the other components.

RUN apt-get update -y && apt-get install -y curl unzip

We then install the Consul binaries.

# Install consul
RUN export CONSUL_VERSION=0.7.2 \
&& export CONSUL_CHECKSUM=aa97f4e5a552d986b2a36d48fdc3a4a909463e7de5f726f3c5a89b8a1be74a58 \
&& curl --retry 7 --fail -vo /tmp/consul.zip "https://releases.hashicorp.com/consul/${CONSUL_VERSION}/consul_${CONSUL_VERSION}_linux_amd64.zip" \
&& echo "${CONSUL_CHECKSUM} /tmp/consul.zip" | sha256sum -c \
&& unzip /tmp/consul -d /usr/local/bin \
&& rm /tmp/consul.zip \
&& mkdir /config

A Consul agent can then be run from within the db container; it will be in charge of declaring the db service and reporting its health to the Consul server. A Consul agent running as a client is a very lightweight process and should not add overhead to the service.

To define what the Consul client needs to report to the Consul server, we’ll use the ContainerPilot binary and the following containerpilot.json configuration file:

{
  "consul": "localhost:8500",
  "services": [
    {
      "name": "db",
      "health": "mongo --eval \"db.status\"",
      "poll": 3,
      "ttl": 10
    }
  ],
  "coprocesses": [
    {
      "command": ["/usr/local/bin/consul", "agent",
        "-data-dir=/data",
        "-config-dir=/config",
        "-rejoin",
        "-retry-join", "{{ if .CONSUL_HOST }}{{ .CONSUL_HOST }}{{ else }}consul{{ end }}",
        "-retry-max", "10",
        "-retry-interval", "10s"],
      "restarts": "unlimited"
    }
  ]
}

3 keys are specified in the above file:

  • consul: specifies the Consul agent to connect to. The local Consul agent is targeted as it is the one that will communicate with the Consul server.
  • services: defines the db service and the health check to report to Consul. In this example we use a simple call to db.status in a mongo shell to make sure the db is up and running.
  • coprocesses: runs the local Consul agent and ensures it can join the Consul server.

For this configuration file to be taken into account, we need to install the ContainerPilot binaries by adding the following piece of code to the Dockerfile, right after the Consul installation (the previous RUN command).

# Install ContainerPilot
ENV CONTAINERPILOT_VERSION 2.6.0
RUN export CP_SHA1=c1bcd137fadd26ca2998eec192d04c08f62beb1f \
&& curl -Lso /tmp/containerpilot.tar.gz \
"https://github.com/joyent/containerpilot/releases/download/${CONTAINERPILOT_VERSION}/containerpilot-${CONTAINERPILOT_VERSION}.tar.gz" \
&& echo "${CP_SHA1} /tmp/containerpilot.tar.gz" | sha1sum -c \
&& tar zxf /tmp/containerpilot.tar.gz -C /bin \
&& rm /tmp/containerpilot.tar.gz
# COPY ContainerPilot configuration
ENV CONTAINERPILOT_PATH=/etc/containerpilot.json
COPY containerpilot.json ${CONTAINERPILOT_PATH}
ENV CONTAINERPILOT=file://${CONTAINERPILOT_PATH}

Note: as the CONSUL_HOST environment variable is used in containerpilot.json, we’ll need to make sure to set it in the environment part of the db service definition within the Docker Compose file.

The last instruction to add to the Dockerfile is the ENTRYPOINT, which specifies that mongod (the process run in the original mongo:3.4 image) is run by the containerpilot binary.

ENTRYPOINT ["/bin/containerpilot", "mongod"]

The whole Dockerfile for the DB service is then:

FROM mongo:3.4

RUN apt-get update -y && apt-get install -y curl unzip

# Install consul
RUN export CONSUL_VERSION=0.7.2 \
&& export CONSUL_CHECKSUM=aa97f4e5a552d986b2a36d48fdc3a4a909463e7de5f726f3c5a89b8a1be74a58 \
&& curl --retry 7 --fail -vo /tmp/consul.zip "https://releases.hashicorp.com/consul/${CONSUL_VERSION}/consul_${CONSUL_VERSION}_linux_amd64.zip" \
&& echo "${CONSUL_CHECKSUM} /tmp/consul.zip" | sha256sum -c \
&& unzip /tmp/consul -d /usr/local/bin \
&& rm /tmp/consul.zip \
&& mkdir /config
# Install ContainerPilot
ENV CONTAINERPILOT_VERSION 2.6.0
RUN export CP_SHA1=c1bcd137fadd26ca2998eec192d04c08f62beb1f \
&& curl -Lso /tmp/containerpilot.tar.gz \
"https://github.com/joyent/containerpilot/releases/download/${CONTAINERPILOT_VERSION}/containerpilot-${CONTAINERPILOT_VERSION}.tar.gz" \
&& echo "${CP_SHA1} /tmp/containerpilot.tar.gz" | sha1sum -c \
&& tar zxf /tmp/containerpilot.tar.gz -C /bin \
&& rm /tmp/containerpilot.tar.gz
# COPY ContainerPilot configuration
ENV CONTAINERPILOT_PATH=/etc/containerpilot.json
COPY containerpilot.json ${CONTAINERPILOT_PATH}
ENV CONTAINERPILOT=file://${CONTAINERPILOT_PATH}
ENTRYPOINT ["/bin/containerpilot", "mongod"]

Note: another solution would be to directly use the autopilotpattern/mongodb image from the Docker Hub. But in order to better understand what is happening under the hood, it’s sometimes good to do things from scratch.

Setting ContainerPilot for the api service

As we did for the db service, we’ll modify the Dockerfile of the api service. We still use the same mhart/alpine-node:6.9.2 base image and install curl and unzip as we did for the db service (not with the same package manager though, as we are using an Alpine-based image).

FROM mhart/alpine-node:6.9.2

RUN apk update && apk add curl unzip

We then install the Consul binaries as we’ve done before.

# Install consul
RUN export CONSUL_VERSION=0.7.2 \
&& export CONSUL_CHECKSUM=aa97f4e5a552d986b2a36d48fdc3a4a909463e7de5f726f3c5a89b8a1be74a58 \
&& curl --retry 7 --fail -vo /tmp/consul.zip "https://releases.hashicorp.com/consul/${CONSUL_VERSION}/consul_${CONSUL_VERSION}_linux_amd64.zip" \
&& echo "${CONSUL_CHECKSUM} /tmp/consul.zip" | sha256sum -c \
&& unzip /tmp/consul -d /usr/local/bin \
&& rm /tmp/consul.zip \
&& mkdir /config

A Consul agent can then be run from within the api container; it will be in charge of declaring the api service and reporting its health to the Consul server.

To define what the Consul client needs to report to the Consul server, we’ll use the ContainerPilot binary and the following containerpilot.json configuration file. This one differs from the file defined for the db service, adding prestart and backends keys.

{
  "consul": "localhost:8500",
  "prestart": "/app/prestart.sh",
  "services": [
    {
      "name": "api",
      "health": "/usr/bin/curl -o /dev/null --fail -s http://localhost:1337/todos",
      "poll": 3,
      "ttl": 10,
      "port": 1337
    }
  ],
  "coprocesses": [
    {
      "command": ["/usr/local/bin/consul", "agent",
        "-data-dir=/data",
        "-config-dir=/config",
        "-rejoin",
        "-retry-join", "{{ if .CONSUL_HOST }}{{ .CONSUL_HOST }}{{ else }}consul{{ end }}",
        "-retry-max", "10",
        "-retry-interval", "10s"],
      "restarts": "unlimited"
    }
  ],
  "backends": [
    {
      "name": "db",
      "poll": 3,
      "onChange": "echo send signal for containerpilot to reload its configuration"
    }
  ]
}

5 keys are specified in the above file:

  • consul: specifies the Consul agent to connect to. As for the db service, the local Consul agent is targeted.
  • prestart: specifies a script to run before the service is started. This script is used to make sure the db service is up and running first. The prestart.sh script is defined below.
  • services: defines the api service and the health check to report to Consul. In this example we assume a successful GET on /todos is enough for the service to be considered healthy.
  • coprocesses: runs the local Consul agent and ensures it can join the Consul server.
  • backends: specifies the list of services the api depends on, only db in our example. We do not specify a real command for the onChange key yet, but what is usually done at this level is to send a SIGHUP signal when a backend changes so ContainerPilot can reload its configuration.

For this configuration file to be taken into account, we need to install the ContainerPilot binaries by adding the following piece of code to the Dockerfile, right after the Consul installation (the previous RUN command).

# Install ContainerPilot
ENV CONTAINERPILOT_VERSION 2.6.0
RUN export CP_SHA1=c1bcd137fadd26ca2998eec192d04c08f62beb1f \
&& curl -Lso /tmp/containerpilot.tar.gz \
"https://github.com/joyent/containerpilot/releases/download/${CONTAINERPILOT_VERSION}/containerpilot-${CONTAINERPILOT_VERSION}.tar.gz" \
&& echo "${CP_SHA1} /tmp/containerpilot.tar.gz" | sha1sum -c \
&& tar zxf /tmp/containerpilot.tar.gz -C /bin \
&& rm /tmp/containerpilot.tar.gz

# COPY ContainerPilot configuration
ENV CONTAINERPILOT_PATH=/etc/containerpilot.json
COPY containerpilot.json ${CONTAINERPILOT_PATH}
ENV CONTAINERPILOT=file://${CONTAINERPILOT_PATH}

Note: as the CONSUL_HOST environment variable is used in containerpilot.json, we’ll need to make sure to set it in the environment part of the api service definition within the Docker Compose file.

The whole Dockerfile for the API service is then:

FROM mhart/alpine-node:6.9.2

RUN apk update && apk add curl unzip

# Install consul
RUN export CONSUL_VERSION=0.7.2 \
&& export CONSUL_CHECKSUM=aa97f4e5a552d986b2a36d48fdc3a4a909463e7de5f726f3c5a89b8a1be74a58 \
&& curl --retry 7 --fail -vo /tmp/consul.zip "https://releases.hashicorp.com/consul/${CONSUL_VERSION}/consul_${CONSUL_VERSION}_linux_amd64.zip" \
&& echo "${CONSUL_CHECKSUM} /tmp/consul.zip" | sha256sum -c \
&& unzip /tmp/consul -d /usr/local/bin \
&& rm /tmp/consul.zip \
&& mkdir /config
# Install ContainerPilot
ENV CONTAINERPILOT_VERSION 2.6.0
RUN export CP_SHA1=c1bcd137fadd26ca2998eec192d04c08f62beb1f \
&& curl -Lso /tmp/containerpilot.tar.gz \
"https://github.com/joyent/containerpilot/releases/download/${CONTAINERPILOT_VERSION}/containerpilot-${CONTAINERPILOT_VERSION}.tar.gz" \
&& echo "${CP_SHA1} /tmp/containerpilot.tar.gz" | sha1sum -c \
&& tar zxf /tmp/containerpilot.tar.gz -C /bin \
&& rm /tmp/containerpilot.tar.gz
# Copy list of dependencies
COPY package.json /tmp/package.json
# Install dependencies
RUN cd /tmp && npm install
# Copy dependencies libraries
RUN mkdir /app && cp -a /tmp/node_modules /app/
# Copy source code
COPY . /app
# COPY ContainerPilot configuration
ENV CONTAINERPILOT_PATH=/etc/containerpilot.json
COPY containerpilot.json ${CONTAINERPILOT_PATH}
ENV CONTAINERPILOT=file://${CONTAINERPILOT_PATH}
# Expose API port to the outside
EXPOSE 1337
# Change working directory
WORKDIR /app
ENTRYPOINT ["/bin/containerpilot"]

# Launch application
CMD ["npm","start"]

The prestart.sh file used to make sure the db is up and running is the following one (it’s created in the root folder of the api and is copied to the /app folder when creating the image).

#!/bin/sh
while [ "$(curl -s http://${CONSUL_HOST}:8500/v1/health/service/db | grep passing)" = "" ]
do
  echo "db is not yet healthy..."
  sleep 5
done
echo "db is healthy, moving on..."

Note: the part relative to CONTAINERPILOT_PATH has been moved after the dependency installation so that changes to the containerpilot.json file do not invalidate the cache.

Docker Compose file for the application

With the consul service added to the picture and the Dockerfiles of the api and db services modified, the docker-compose.yml is the following one:

version: '2'
services:
  consul:
    image: consul:0.7.2
    restart: always
    dns:
      - 127.0.0.1
    ports:
      - "8500:8500"
    command: agent -server -client=0.0.0.0 -bootstrap -ui
  db:
    build: ./db
    restart: always
    environment:
      - CONSUL_HOST=consul
  api:
    build: ./api
    restart: always
    environment:
      - CONSUL_HOST=consul
    ports:
      - "1337:1337"

Notes:

  • we are using the build instruction instead of image for the db and api services as we have not created and published the images beforehand. If we were to run this application on another host, we would first need to build the images and make them available through a registry
  • both db and api use the CONSUL_HOST environment variable. This is needed for the local Consul agent to connect to the Consul server.

Let’s run the application and take a look at the api’s logs.

$ docker-compose up

Basically, when the api started, several actions took place:

  • prestart.sh was run to check if the db was available
  • the Consul agent was launched as a co-process of the api
  • the api was launched

After a couple of seconds, our 3 services appear in the Consul UI.

For both the api and db services, there are 2 checks in the passing state:

  • The first one relates to the Serf library used for the gossip protocol that handles communication between the Consul agents
  • The second one relates to the health check of the service itself
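The same information is available through the Consul HTTP API, which can be handy for debugging (host and port assume the 8500 mapping from the compose file):

# Checks registered for the api and db services
$ curl http://localhost:8500/v1/health/checks/api
$ curl http://localhost:8500/v1/health/checks/db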

Restarting the db service

It’s good to see the services running fine, but the interesting part is to see how the application reacts to events such as the db going down and then coming up again after a couple of minutes.

Let’s stop the db with the following command

$ docker-compose stop db

A couple of seconds later, the db service is no longer listed in the Consul UI. The api shows as unhealthy; this is normal, as the health check command we specified can no longer run successfully (data cannot be retrieved from the db anymore).

Note: the Serf Health Status of the api is still passing, as the Consul agent running as the api’s co-process is not impacted by the db being down.

Let’s restart the db, with the following command, and see how the application reacts.

$ docker-compose start db

After a couple of seconds the db service is registered again in Consul, but the health check of the api is still failing.

In the current configuration this is normal, as the api has lost its connection to the db and does not automatically try to reconnect, since we have not configured anything for this to happen.

Automating the reconnection to the db service

When the db service comes up again, we need a mechanism that makes the api automatically try to reconnect.

The first thing we’ll do is to modify the containerpilot.json file of the api so a SIGHUP signal is sent (triggering a reload of the ContainerPilot configuration) when an event occurs at the db level.

"backends”: [
{
"name": "db",
"poll": 3,
"onChange": "pkill -SIGHUP node"
}
]

Using the piloted npm module, we can catch the reload through the refresh event and trigger the reconnection to the db. Below is the updated index.js file.

const express = require('express'),
  bodyParser = require('body-parser'),
  MongoClient = require('mongodb').MongoClient,
  Piloted = require('piloted'),
  ContainerPilot = require('./containerpilot.json'),
  app = express(),
  url = process.env.MONGODB_URL || 'mongodb://db/todos';

// Configure app to use bodyParser
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

// Connection to database
function connection(url, callback){
  MongoClient.connect(url, (err, conn) => {
    if (err) {
      return callback(err);
    } else {
      return callback(null, conn);
    }
  });
}

// Connection to db on startup
let db = null;
Piloted.config(ContainerPilot, (err) => {
  connection(url, function(err, conn){
    db = conn;
  });

  // Reconnect when ContainerPilot reloads its configuration
  Piloted.on('refresh', () => {
    connection(url, function(err, conn){
      db = conn;
    });
  });
});

// Define routes
...

With those changes, any db event will trigger a reconnection attempt. Below are the screenshots of the Consul UI corresponding to the following steps (the matching commands are shown after the list):

  • Docker Compose application started. The consul, api and db services are up and running
  • db service stopped. Consul is still running fine but the api health check fails
  • db service restarted. The api automatically reconnects
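To reproduce this sequence from the command line (the same compose commands we used before):

# Stop the db service: the api health check starts failing
$ docker-compose stop db
# Start it again: the api reconnects and turns healthy after a few seconds
$ docker-compose start db
# Verify the api works again
$ curl localhost:1337/todos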

Note: a first draft of this article was written using sailsjs to create the api. sailsjs is now quite a big framework and hides a lot of things from us, the connection to the database being one of them, so it’s hard, and probably not such a good idea, to modify it by adding piloted into sails’ code. I have not yet found the best way to handle the reconnection to the db with such a framework.

Conclusion

This article is an introduction to the Autopilot pattern / ContainerPilot approach that enables an application to embed its own orchestration responsibilities. The full code is available on GitHub.

In a following article, we’ll add an Nginx reverse proxy and a Reactjs front-end to the application. We’ll then see how the whole application can be scaled.

I’d love to hear your thoughts on this topic. Please do not hesitate to give some feedback.
