Continuous deployment w/ Docker, AWS and circle-ci

soosap
11 min read · Aug 13, 2016

So there you go, you have worked your ass off to build that awesome web application on your local development machine, but when it was eventually ready to be shared w/ the world by making it publicly available on your domain, you kept running into a wall. Been there, done that.

Where shall you deploy your code, and more importantly, HOW shall you deploy it? At first I got quite frustrated trying to find answers to these questions. The solution I propose might not be the best one out there. That said, I think it is a pretty solid approach to benefit from modern DevOps concepts w/o being a system administrator. Basically, it gives startups and small teams the capability to exploit most benefits of agile software development at a time when most cannot necessarily afford a dedicated IT infrastructure.

There are a couple of platforms out there that make the process of deploying your app in the cloud easy. The tradeoff for less complexity is paying more money and giving up flexibility in the deployment process. I tried a bunch of these platforms, and while I was up and running in no time, I was worried that I needed to pay at least 10 USD/month for every single project that I wanted to host, on virtual machines (VMs) equipped w/ the bare minimum specs. If you want to host a couple of hobby projects, you quickly end up w/ recurring bills of 50 USD/month and upwards. That's annoying from a bootstrapping perspective, so while exploring more viable options I encountered containerization w/ Docker. I will walk you through how I use Docker to host any number of web applications on a single virtual machine, as opposed to deploying one virtual machine per project. Thus, we gain tremendously in terms of cost efficiency while not giving up any flexibility whatsoever in how we like to structure the deployment workflow.

We host our application(s) on an AWS EC2 instance. It is a very robust, scalable and inexpensive virtual machine in the cloud. But hey, there are a number of other cloud computing platforms out there that you can use, and I will point out how exactly you could replace AWS w/ some other provider.

docker-machine

If you are new to Docker, I suggest you acquire the basics w/ courses by Dan Wahlin (Docker for Web Developers) and/or Nigel Poulton (Docker Deep Dive). We use docker-machine to create a virtual machine on our preferred cloud computing platform.

After installing docker-machine, you can issue the following commands to create a virtual machine on AWS which comes w/ Docker pre-installed. I use the AWS credentials file to store the aws_access_key_id and the aws_secret_access_key so that I do not have to specify them in my cli commands. At the very minimum, we need to specify the following:

$ docker-machine create --driver amazonec2 hobbyprojects

Personally, I use an AWS VPC and want to explicitly specify in which subnet to launch the virtual machine, which works like this:

$ docker-machine create hobbyprojects \
    --driver amazonec2 \
    --amazonec2-region eu-central-1 \
    --amazonec2-vpc-id vpc-a3XX6exa \
    --amazonec2-subnet-id subnet-3X1aX6X2 \
    --amazonec2-zone a

If you want to use another provider than AWS, find the available drivers here. Just follow along and continue reading when you are up and running.

Now, have a look at ~/.docker/machine/machines/hobbyprojects. That's the place where all the secret keys required by docker-machine to ssh into the virtual machine are stored. docker-machine targets a particular virtual machine by setting a number of environment variables. You set the environment variables that belong to the hobbyprojects virtual machine by issuing the following command:

$ eval $(docker-machine env hobbyprojects)

Thereafter, you can directly use the AWS-hosted hobbyprojects virtual machine’s docker client to create, load, and manage containers. For example we run the dockercloud/hello-world container and access its content on the hobbyprojects virtual machine’s IP.

$ docker run -d -p 3000:80 dockercloud/hello-world
$ docker-machine ip hobbyprojects
$ curl $(docker-machine ip hobbyprojects):3000
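Under the hood, `docker-machine env` just prints shell export statements; eval-ing them re-points your local docker client at the remote VM. A hypothetical illustration w/ a stand-in function (the values below are made up; the real ones come from your own `docker-machine env` output):

```shell
# Stand-in for `docker-machine env hobbyprojects`: it merely prints
# export statements describing how to reach the remote Docker daemon.
docker_machine_env() {
  echo 'export DOCKER_TLS_VERIFY="1"'
  echo 'export DOCKER_HOST="tcp://52.X.X.X:2376"'
  echo 'export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/hobbyprojects"'
  echo 'export DOCKER_MACHINE_NAME="hobbyprojects"'
}

# eval-ing the output sets the variables in the current shell, so every
# subsequent `docker` command talks to the hobbyprojects VM.
eval "$(docker_machine_env)"
echo "docker client now targets: $DOCKER_HOST"
```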

circle-ci

We use circle-ci, a continuous integration platform, to automate the entire process of deploying our web applications. All we need to do in order to add a new project in circle-ci is to hit the “Build project” button next to the repo. Bammm.

Adding a GitHub repo to circle-ci

Now, by simply pushing changes in our code base to GitHub, we automatically invoke a circle-ci build process. What does that mean?

It means that circle-ci provides us with a virtual machine that we can use to checkout the repo and execute all necessary commands to build, test, package and deliver our application whenever new commits are pushed to the linked GitHub repo.

We are in control of that virtual machine, i.e. the circle-ci build machine, for the duration of the build process only. It is created on the fly and will be torn down once the build process has been completed. Quite a few of the associated build steps are inferred by circle-ci from merely analyzing our code base. For instance, if we have test files in our repo, circle-ci will attempt to run them for us out of the box. By creating a circle.yml configuration file in the root of our project we have the ability to specify the individual build steps more explicitly. We can either complement the automatic inference by defining additional commands to be executed before or after those put together by circle-ci, or we can just override them.
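To make the inference/override mechanics concrete, here is a minimal, hypothetical circle.yml skeleton (the section names are real circle-ci keys; the image and command names are placeholders, not the ones used later in this post):

```yaml
machine:
  services:
    - docker          # make the Docker daemon available during the build

dependencies:
  override:
    - docker build -t myuser/myapp .      # placeholder image name

test:
  override:
    - docker run myuser/myapp npm test    # placeholder test command

deployment:
  production:
    branch: master
    commands:
      - docker push myuser/myapp
```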

Now that we know how to use docker-machine on our physical laptop to run, load and manage containers on our cloud-hosted hobbyprojects VM, the next step is to find a way to achieve the very same thing from another computer… yep, you guessed it, the circle-ci build machine.

Configuring the circle-ci build machine

Whenever the circle-ci build machine is created on the fly it is in a naked state. We first have to configure it such that it is capable of executing Docker commands, and further that it is capable of hooking into our AWS-hosted hobbyprojects VM. The former is necessary to build container images and to push them onto Docker Hub, a registry service for storing Docker images in the cloud. The latter is necessary to stop and run Docker containers and to download Docker images from Docker Hub, not on the circle-ci build machine itself but rather on our hobbyprojects VM, accessed through the circle-ci build machine.

circle-ci build machine configuration steps

Next, we discuss how the individual configuration steps are handled inside the circle.yml file.

Install Docker Engine: Normally, we can simply specify “docker” under services in the machine section. If, however, you require a newer version of Docker than the default one provided by circle-ci, there are workarounds. In my case, I execute the install-circleci-docker.sh shell script in the pre stage to upgrade to version 1.10. Check out the circle-ci discussion board to find such workarounds if you need them.

machine:
  pre:
    - curl -sSL https://s3.amazonaws.com/circle-downloads/install-circleci-docker.sh | bash -s -- 1.10.0
  services:
    - docker

Install Docker Compose: We install docker-compose using pip. Therefore, we need to install python as well. Once the machine commands have been executed, the commands in the dependencies section will follow. Here, you define commands that the test and deployment sections depend upon. That's where we issue the pip command to install docker-compose. I use the override key to fully control the behaviour of that section, not leaving any room for circle-ci's automatic inference to kick in.

machine:
  python:
    version: 2.7.11
dependencies:
  override:
    - pip install docker-compose

Provide Environment Variables: In order to access environment variables in our circle.yml file, we need to provide them to circle-ci somewhere, i.e. Settings > Build Settings > Environment Variables. We will discuss shortly which environment variables we need and why.

Adding environment variables to circle-ci

Provide “hobbyprojects”-docker-machine certs: Recall that a number of cert files were stored under ~/.docker/machine/machines/hobbyprojects when we created the hobbyprojects docker-machine. These cert files are required in order to access the hobbyprojects VM. Thus we also need a way to provide those cert files to the circle-ci build machine. However, that's not so easy to accomplish, as you cannot simply upload cert files onto the machine. The only workaround I found feels somewhat like a hack, but it works. If you have a more elegant solution to achieve this, please let me know in the comments. Ok, the solution is to take the secret sauce out of the cert files and store it in corresponding environment variables such as DOCKER_CAPEM, DOCKER_CERTPEM and DOCKER_IDRSA. Then, in the deployment section of the circle.yml configuration file, you run a number of Linux commands aimed at reconstructing the relevant cert files when you need them to connect to hobbyprojects. I provide an example in the next section.
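The flattening direction of this hack can be rehearsed w/ plain text processing, no real certs needed (the cert body below is made up):

```shell
# Fake cert body standing in for ca.pem's middle section. No real keys here.
printf 'MIIC9TCCAd2g\nAwIBAgIJAJz5\nBgNVBAMMB2Nh\n' > ca_body.txt

# What you would paste into circle-ci's env-var form:
# newlines collapse into spaces, leaving a single-line string.
DOCKER_CAPEM=$(tr '\n' ' ' < ca_body.txt)
echo "$DOCKER_CAPEM"

# Reconstruction, mirroring the sed call used later in circle.yml:
# every run of whitespace becomes a newline again.
echo "$DOCKER_CAPEM" | sed -e 's/\s\+/\n/g' > rebuilt.txt
```

The round trip shows why the whitespace-to-newline substitution is enough to restore the original line structure of the cert body.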

Continuous deployment workflow

Now that we have pimped our circle-ci build machine and have configured it according to our needs, we can finally move on to our continuous deployment workflow or logic.

Continuous deployment workflow

Step 1: Happens automatically when you trigger a new build. Your application code base will be available on the circle-ci build machine via:

cd ~/NAME_OF_GITHUB_REPO

Step 2: We build the docker image in circle.yml’s dependencies section such that it is available to the test and deployment sections when needed. If your build requires environment variables that are not specified in the underlying Dockerfile, you can specify them here using build-args.

dependencies:
  override:
    - docker build --build-arg FACEBOOK_APP_ID=$FACEBOOK_APP_ID -t soosap/me .
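Note that a --build-arg only takes effect if the Dockerfile declares it w/ an ARG instruction; a hypothetical snippet (the base image and the ENV persistence are placeholders, adapt them to your own Dockerfile):

```dockerfile
FROM node:6

# Declare the build argument so that
# `docker build --build-arg FACEBOOK_APP_ID=...` has something to bind to,
# then persist it as an environment variable for runtime if needed.
ARG FACEBOOK_APP_ID
ENV FACEBOOK_APP_ID=$FACEBOOK_APP_ID
```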

Step 3: We override the test section to run our dedicated npm test script to execute our test suite. We set the TERM environment variable to obtain colored output in the circle-ci dashboard. The last bit is important! In case our tests fail, we need to tell circle-ci to stop and not move on w/ deployment. Therefore, we need to make sure that the exit status of the test command is propagated, like this:

test:
  override:
    - docker run -it soosap/me env TERM=xterm bash -c "npm run test" && echo $?
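The exit-status mechanics of that && chain can be checked w/o Docker at all (`false` and `true` below are hypothetical stand-ins for a failing and a passing test command):

```shell
# `A && B` preserves A's failure: when A exits non-zero, B never runs
# and the whole chain exits w/ A's status, which circle-ci treats as
# a failed build step.
sh -c 'false && echo $?'    # stand-in for a failing test run
status=$?
echo "exit status circle-ci would see: $status"

sh -c 'true && echo $?'     # passing tests: echo runs, chain exits 0
```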

Step 4: Log in to Docker Hub so that we can push our image in the next step. Make sure you add the necessary environment variables to circle-ci accordingly.

dependencies:
  override:
    - docker login -e $DOCKER_HUB_EMAIL -u $DOCKER_HUB_USERNAME -p $DOCKER_HUB_PASSWORD

Step 5: You have the opportunity to specify multiple environments, like production and integration, that each watch and check out code from a separate GitHub branch. When we make it to circle.yml's deployment section, circle-ci checks which GitHub branch has triggered the build. When there is a matching deployment environment, the commands under that respective section, and only those, will be executed. That gives us the opportunity to distinguish between a production-level VM and an integration/staging-level VM to separate concerns.

deployment:
  production:
    branch: master
    commands:
      - docker push soosap/me:latest

Step 6: We use Linux commands to reconstruct the cert files by transforming the data that we have stored in environment variables. Once we have reconstructed the cert files we store them inside the $DOCKER_CERT_PATH, which in our case is just the default location:

~/.docker/machine/machines/hobbyprojects

deployment:
  production:
    branch: master
    commands:
      - echo '-----BEGIN CERTIFICATE-----' >> ca.pem
        && echo $DOCKER_PROD_CAPEM | sed -e 's/\s\+/\n/g' >> ca.pem
        && echo '-----END CERTIFICATE-----' >> ca.pem
        && touch $DOCKER_CERT_PATH/ca.pem
        && mv ca.pem $DOCKER_CERT_PATH/ca.pem

What the heck is that? We use the Linux echo command to create text documents on the circle-ci build machine. The >> creates a document if there is none under that name, or else appends the additional content at the end of the existing document. Via && we chain a number of Linux commands to be executed in one go, in order to keep the output on the circle-ci dashboard less cluttered. The pipe | symbol passes the output of the expression located before the pipe as input into the command located after the pipe. The part containing the pipe symbol reads the content of the environment variable $DOCKER_PROD_CAPEM and replaces all whitespace characters w/ a newline. Why is that necessary? The problem is that we can only store single-line string values as environment variables. So when I try to read the contents of a cert file using the cat and pbcopy (macOS) commands…

$ cd ~/.docker/machine/machines/hobbyprojects
$ cat ca.pem | pbcopy
Anatomy of a cert file

…and paste the output into the “New environment variable”-form in circle-ci, all newline characters are converted into whitespace characters. Therefore we need to reverse the process when we try to reconstruct these files. When specifying the environment variables, we further strip the beginning and ending lines to make our circle.yml reconstruction commands simpler. To be honest, I don't know much about Linux, so if there is some guru who can achieve all of that in a one-liner, please let me know in the comments and I'll update the post.
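The whole reconstruction recipe can be rehearsed locally on made-up data before wiring it into circle.yml:

```shell
# Made-up single-line value, as it would come back out of circle-ci's
# env-var form (the real value is your flattened cert body).
DOCKER_PROD_CAPEM='MIIC9TCCAd2g AwIBAgIJAJz5 BgNVBAMMB2Nh'

rm -f ca.pem   # start clean for the demo; the build machine is fresh anyway

# Same command chain as in circle.yml: re-add the stripped header and
# footer lines and turn the whitespace back into newlines in between.
echo '-----BEGIN CERTIFICATE-----' >> ca.pem \
  && echo $DOCKER_PROD_CAPEM | sed -e 's/\s\+/\n/g' >> ca.pem \
  && echo '-----END CERTIFICATE-----' >> ca.pem
```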

Form to add new environment variables in circle-ci

Step 7: We set a bunch of environment variables to assume control of a particular remote Docker host. Note that this is equivalent to issuing the following command on a machine that has docker-machine installed. The benefit is that we do not need to install docker-machine on our circle-ci build machine; it is more of a convenience tool.

$ eval $(docker-machine env hobbyprojects)

deployment:
  production:
    branch: master
    commands:
      - echo export DOCKER_TLS_VERIFY=$(echo $DOCKER_TLS_VERIFY_PROD) >> ~/.circlerc
      - echo export DOCKER_HOST=$(echo $DOCKER_HOST_PROD) >> ~/.circlerc
      - echo export DOCKER_CERT_PATH=$(echo $DOCKER_CERT_PATH_PROD) >> ~/.circlerc

Step 8: Pull the brand-new docker image.

deployment:
  production:
    branch: master
    commands:
      - docker pull soosap/me:latest

Step 9: We simply git clone a docker-compose file from a separately managed git repository because docker-compose is awesome. Alternatively, you can write out individual docker run and stop commands to achieve the same thing.

deployment:
  production:
    branch: master
    commands:
      - git clone git@github.com:soosap/hobbyprojects-compose.git

Step 10: Recreate all docker containers that have been updated. Further, we clean up after ourselves so that our valuable cloud storage is not littered w/ docker images that we no longer need.

deployment:
  production:
    branch: master
    commands:
      - docker-compose -f hobbyprojects-compose/production/docker-compose.yml up -d
      - docker images --no-trunc | grep none | awk '{print $3}' | xargs docker rmi
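The cleanup pipeline can be dry-run on made-up `docker images --no-trunc` output first, to see which column awk extracts before pointing it at real images (the table below is fabricated):

```shell
# Fabricated output: one tagged image and one dangling <none> image.
fake_images='REPOSITORY TAG IMAGE_ID CREATED SIZE
soosap/me latest sha256:aaa111 2_days_ago 200MB
<none> <none> sha256:bbb222 3_days_ago 190MB'

# grep keeps only the <none> rows, awk picks the third column (the id);
# in circle.yml those ids are then fed to `xargs docker rmi`.
dangling=$(printf '%s\n' "$fake_images" | grep none | awk '{print $3}')
echo "$dangling"
```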

Here you go, 1-Click-Deploy, happily ever after!


React, GraphQL and Kubernetes enthusiast. Tech Blogger @ https://soosap.co