Simple continuous deployment with docker compose, docker machine and Gitlab CI

Lars de Ridder
Jun 26, 2017

For local development of microservice-based systems running on Docker, we’ve found that docker compose is probably the best way to go, and its YAML file format is very usable for configuration as well. For some projects there is really no need to scale out to multiple containers per service; you’ll be just fine running all your containers on a single host. For those projects you want the path to production to be as smooth (and simple) as possible.
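As a point of reference, a minimal docker-compose.yml for such a project might look something like this (the service names and images are purely illustrative):

version: "2"
services:
  app:
    build: .            # your application, built from the local Dockerfile
    expose:
      - "8000"
  nginx:
    image: nginx:latest
    ports:
      - "80:80"         # the only port exposed to the outside world
    depends_on:
      - app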

So after spending time learning about Mesos, Kubernetes, Amazon ECS and similar technologies, and picking up a ton of new concepts along the way, I concluded that they’re all awesome, but not really suited to a simple move from local development with docker compose. They all have their own configuration formats (largely for good reasons), and all of them approach orchestration quite differently from docker compose, in order to support more complex deployment environments.

Now, you could of course just set up Rancher and use that. Rancher uses the docker compose file format and is quite powerful. However, my secondary goal was to self-host as little as possible, because I’m lazy and don’t like system administration. So let’s just use docker compose instead!

Prerequisites

To follow this article, make sure you’ve installed:

  • Docker
  • Docker Compose
  • Docker Machine

And that you have:

  • A project set up to use docker-compose on Gitlab.

Enter docker machine

From the docs:

Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands. You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like AWS or Digital Ocean.

So let’s get started setting up a machine. I went for Amazon EC2 (but there are a bunch of drivers):

  • Set up the AWS CLI and run aws configure
  • Provision your new machine: docker-machine create -d amazonec2 --amazonec2-region eu-west-1 --amazonec2-instance-type "t2.micro" docker-compose-test (note: you might have to specify your AWS zone at some point).

This last command takes a little while, but that’s really it. You now have a machine running docker! Let’s immediately do some deploying:

  • Do eval $(docker-machine env docker-compose-test)
  • Test if it works (run docker ps and/or docker version)
  • Go to your project directory containing the docker compose file
  • Run docker-compose up -d

Now that was fast, wasn’t it? You can see the running containers using docker ps, and you can find the remote ip address using docker-machine ip. To stop everything, run docker-compose down.
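Put together, a full cycle against the new machine looks roughly like this:

# Point your local docker client at the remote machine
eval $(docker-machine env docker-compose-test)

# Sanity checks: these now talk to the remote daemon
docker version
docker ps

# Deploy, find the public IP, and tear everything down again
docker-compose up -d
docker-machine ip docker-compose-test
docker-compose down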

You probably won’t be able to access your machine over HTTP yet, as that normally requires some configuration. In the case of Amazon, you need to add a rule to the instance’s security group that opens port 80. I’m sure you’ll manage to set this up.
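If you prefer to script that step, something along these lines should do it with the AWS CLI (this assumes the default security group that the amazonec2 driver creates, named docker-machine; adjust if you used a different one):

# Allow inbound HTTP traffic on port 80 from anywhere
aws ec2 authorize-security-group-ingress \
  --region eu-west-1 \
  --group-name docker-machine \
  --protocol tcp --port 80 \
  --cidr 0.0.0.0/0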

Docker machine on other clients

If you work on your own, you can skip to the next heading. If you work in a team and would like to give someone else access to the machine as well, read on.

Let’s start with the bad news: Docker machine has no docker-machine add, and it doesn’t seem like it will be supported either. The “official” way to add an already running machine is by creating a new machine using the “generic” driver. Follow along now:

  • Add the SSH key of the new client to the instance.
  • Cast this spell in your terminal: docker-machine create --driver generic --generic-ip-address=<INSTANCE_IP> --generic-ssh-key <PATH_TO_PRIVATE_KEY> --generic-ssh-user ubuntu <MACHINE_NAME>

That should do it. Note that this restarts your docker daemon. Why you ask? Because according to the devs, you shouldn’t have this use case anyway.

Continuous deployment

This is great. We can deploy using docker compose and add new clients. Party on and such.

To get continuous deployment working, your Continuous Integration server needs some kind of access to the instance on which your project should be deployed. You could perform the docker machine trick described above, but that is not ideal: it isn’t really officially supported, it restarts the docker daemon, and it requires full SSH access.

The better approach is to set up your CI server to connect to the docker daemon that docker machine set up for you over HTTPS using TLS. So that’s what we’re going to do.
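For context: a docker client decides which daemon to talk to, and how, purely through a few environment variables. Locally, eval $(docker-machine env ...) sets exactly these for you, and at the end of this article the CI job will set them explicitly (the values below are examples):

export DOCKER_HOST=tcp://<INSTANCE_IP>:2376   # 2376 is the TLS port docker machine configures
export DOCKER_TLS_VERIFY=1                    # refuse to talk to the daemon without a valid certificate
export DOCKER_CERT_PATH=/path/to/certs        # directory containing ca.pem, cert.pem and key.pem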

We need some certificates to allow CI to connect to the daemon. If you’re a TLS certificate wizard, you’ll whip out your openssl wand and make secure dreams come true. If you’re a mere mortal like me, I’ve tried to save you a few hours (by wasting a few): https://github.com/XIThing/generate-docker-client-certs/. Clone that repo and run:

  • ./generate-client-certs.sh ~/.docker/machine/certs/ca.pem ~/.docker/machine/certs/ca-key.pem

This generates a key and a self-signed certificate which, together with the certificate authority certificate that docker machine generated for you (ca.pem), form the triforce that allows you to connect to the docker daemon.
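Before wiring this into CI, you can check the new client certificates by hand against the remote daemon (file names as used in the CI setup below; the IP is the one docker-machine ip gave you):

docker --tlsverify \
  --tlscacert "$HOME/.docker/machine/certs/ca.pem" \
  --tlscert client-cert.pem \
  --tlskey client-key.pem \
  -H tcp://<INSTANCE_IP>:2376 version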

Note: TLS is quite an advanced topic, and I’m kind of glossing over it here. I liked this article about Docker Swarm for a little more detail.

Integration with Gitlab CI

We use the hosted version of Gitlab with their shared runners for CI because, as stated above, I don’t like self-hosting. But feel free to use your own Gitlab CI runner or another CI platform, as the approach should work with pretty much anything.

Now to set things up in Gitlab CI:

  • Make sure shared runners are enabled. You can also use your own runner of course.
  • Add the client-key.pem, client-cert.pem and ca.pem as secret variables to Gitlab. In the below example, I’ve used the names CA, CLIENT_CERT and CLIENT_KEY.
  • In the root of your project (hosted on Gitlab), create a .gitlab-ci.yml file, with content like this:
stages:
  - deploy

deploy:
  image: jonaskello/docker-and-compose:1.12.1-1.8.0
  stage: deploy
  script:
    - mkdir $DOCKER_CERT_PATH
    - echo "$CA" > $DOCKER_CERT_PATH/ca.pem
    - echo "$CLIENT_CERT" > $DOCKER_CERT_PATH/cert.pem
    - echo "$CLIENT_KEY" > $DOCKER_CERT_PATH/key.pem
    - docker-compose build
    - docker-compose down
    - docker-compose up -d --force-recreate
    - rm -rf $DOCKER_CERT_PATH
  only:
    - master
  variables:
    DOCKER_TLS_VERIFY: "1"
    DOCKER_HOST: "tcp://[YOUR_INSTANCE_IP]:2376"
    DOCKER_CERT_PATH: "certs"
  tags:
    - docker
  • Push to your repo’s master branch
  • Observe continuous deployment.

You will probably want to split up the build and deploy jobs, and push the images to a registry after the build. As container registry you can use Gitlab’s own; just keep in mind that it is currently limited to a single image name per project, such as xithing/awesome-app, while with docker compose you probably have multiple apps to build. A workaround is to use the tag to specify the app, such as xithing/awesome-app:nginx.
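A rough, non-prescriptive sketch of what that split could look like, keeping the same image and TLS setup as above (the image names, tags and login step are illustrative; which daemon runs the build depends on your runner setup):

stages:
  - build
  - deploy

build:
  stage: build
  image: jonaskello/docker-and-compose:1.12.1-1.8.0
  script:
    # log in to Gitlab's registry; $CI_JOB_TOKEN is provided by Gitlab CI
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" registry.gitlab.com
    - docker-compose build
    # one push per app, using the tag to distinguish them
    - docker push registry.gitlab.com/xithing/awesome-app:nginx
    - docker push registry.gitlab.com/xithing/awesome-app:web
  only:
    - master

deploy:
  stage: deploy
  image: jonaskello/docker-and-compose:1.12.1-1.8.0
  # plus the same certificate handling, variables and tags as the single deploy job above
  script:
    - docker-compose pull
    - docker-compose down
    - docker-compose up -d --force-recreate
  only:
    - master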

Conclusion

So there you have it. The above workflow works quite well for relatively simple applications. When things get more complicated, however, you should have a look at Kubernetes (which runs on GCE), as it is quite nice. And if you’re not scared of self-hosting your orchestration, Rancher and Docker Swarm are pretty cool as well.

Links

If you’re going to set up the above, these links will probably be useful for you.

Edit: Docker-compose image

I was setting up a new project using the same method as described above, and I ran into the issue that the image I recommended in this article isn’t really kept up to date, which is inconvenient if you want to use some of the newer features of docker-compose. Instead of having to depend on a third party image, I now simply use the official docker image and install docker-compose myself through pip, as follows:

stages:
  - deploy

services:
  - docker:dind

image: docker

before_script:
  - apk add --update py-pip
  - pip install docker-compose

deploy:
  <Same stuff as before, minus the image>

Admittedly, this means you are now stuck with the latest docker-compose version available in pip (which you can of course pin), but at least that one is kept relatively up to date.
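Pinning is then a one-line change in the before_script (the version number here is just an example):

- pip install docker-compose==1.14.0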
