CI/CD automation using only Docker Cloud

Full automation with Docker Cloud

Witei Engineering Team
Our developer stories
8 min read · Apr 5, 2017

Disclaimer

We no longer use Docker Cloud for cluster management, node provisioning, or applications, since those services were shut down on May 21, 2018. Docker Cloud now only offers an image registry and automated builds (pretty much what Docker Hub already does).

Update

Read about how we moved our Kubernetes cluster after our managed K8S provider no longer supported our cloud provider.

TL;DR

We managed to automate our project 100% in less than 24h. Wanna know how? You’re gonna have to read the long version 8)

Before Docker Cloud

We started out with a Bitbucket repo containing a Django project, and a Digital Ocean Droplet running that project through Docker Compose. We built our Docker images locally and pushed them to Docker Hub. Then, from the server, we pulled the new images and restarted the docker-compose services.
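
For reference, here’s a minimal sketch of what such a docker-compose.yml could look like (the witei/* image names and ports are illustrative assumptions, not our actual files):

    # docker-compose.yml: illustrative sketch of the original manual setup.
    # The witei/* image names and port numbers are assumptions, not our real repos.
    version: "2"
    services:
      wsgi:
        image: witei/wsgi:latest    # Django app behind a WSGI server
        expose:
          - "8000"
      nginx:
        image: witei/nginx:latest   # reverse proxy in front of wsgi
        ports:
          - "80:80"
        links:
          - wsgi

The manual cycle then boiled down to rebuilding and pushing both images locally, and roughly a docker-compose pull followed by docker-compose up -d on the Droplet.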

Final Workflow

Compared to that starting point, the difference is that we’ve placed Docker Cloud between our code repo and our server to automate (and improve) the manual work we were doing.

We’ve ended up with two possible triggers for continuous integration and deployment. The first is the creation of a new pull request, which runs our tests to check whether it’s mergeable. The second is a git push to our master branch, which runs the tests and, if they pass, builds the images and deploys the service.

Automated Workflow

Services Used

We’ve managed to build this structure with just 4 services! But it can easily be reduced to 3.

A whale shark, because Docker, and Digital Ocean

What / Why Docker Cloud

So, what’s this Docker Cloud thing? Straight up from the docs:

Docker Cloud provides a hosted registry service with build and testing facilities for Dockerized application images; tools to help you set up and manage host infrastructure; and application lifecycle features to automate deploying (and redeploying) services created from images.

Which is exactly the “everything else” we were looking for.

While we went all-in with Docker Cloud, you can choose to only use part of its services. For instance, you can choose to just do the first step, and simply build and push your images when receiving a trigger, then use a different service to run your tests and deploy.

There are alternatives that offer similar services; if you’re comparing them, make sure the information is up to date. At the time of writing, parts of Docker Cloud are in beta (e.g. Docker Cloud Build), and several comparison tables are outdated (many don’t show image caching as supported).

All in all, Docker Cloud seemed the most straightforward to us, letting us deploy new Digital Ocean Droplets directly without complex configuration, and showing test status and results right in Bitbucket.

Connecting Docker Cloud to Bitbucket and Digital Ocean

Just to get this out of the way, let’s talk about setting up the external services (code repo and server). Feel free to skip ahead to the cool stuff; this will be mentioned below anyway.

Yes really, it’s fine

From the Docker Cloud dashboard, go to Cloud Settings on the left sidebar.

Source Provider

Link the source provider where your code lives; it doesn’t matter whether it’s a public or private repo. At the time of writing, you can link either Bitbucket or GitHub.

Cloud Providers

Optionally, choose the service that will host your server. You can link an existing machine to Docker Cloud, but I strongly recommend linking a cloud provider and creating a new machine instead. There are several services available; here’s how to link Digital Ocean to Docker Cloud.

How do I set this thing up?

We’ll be looking at three steps:

  1. Build: define our build trigger, specify our image sources, and set up autotests
  2. Applications: create a stack of services to deploy, and enable autoredeploy
  3. Infrastructure: create a node cluster

Build

Navigate to Build -> Repositories in Docker Cloud. From here we’ll configure the automatic build of the images and the automatic testing that occurs before pushing to the image registry.

Container Image Repository

To start, you must have your Docker image repos hosted somewhere; in our case, that’s Docker Hub. If you don’t, you can import them or create new ones here. Make sure you have an image pushed to each repository when you’re done.

Docker image repositories for wsgi and nginx

Automated Repository Build

For each repository, enter its detail view and navigate to Builds, then Configure Automatic Builds. For this step, you must have linked a Source Provider in Cloud Settings (go back up if you skipped this).

  • Source Repository: select the source code repository and project.
  • Build Location: choose the node used to build the image. Right now only Docker Cloud nodes are available.
  • Autotest: choose whether or not to autotest. We’ll cover this in the next point.
  • Build Rules: define the trigger and the Docker image. We’re using a push to master.
  • Build Environment Variables: environment variables needed at build time, not to be confused with the environment variables defined in the Dockerfile.

Build configuration for wsgi repository

Automated Tests on Build

In the build configuration, you’re given three automated testing options:

  • Don’t do autotests
  • Only do autotests for internal PRs
  • Do autotests for internal and external PRs

Whenever one of the last two options is selected, the defined tests will run before a build is validated. The tests are triggered when a PR is created on the linked source project, or when the build rule is met (a push to the master branch in our example).

For tests to work, we need to define a docker-compose.test.yml (it must be named exactly that).

This file has a structure similar to the docker-cloud.yml (we’ll cover that below) and docker-compose.yml. What’s really necessary is that you define a sut (System Under Test) service, which will run the tests. In our case, the sut service builds a test Dockerfile that runs a bash script checking that our linked wsgi service returns an HTTP 200 status code.
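
As a rough sketch of such a file (our actual test script differs; the alpine image, port, and wait time here are illustrative assumptions):

    # docker-compose.test.yml: a minimal sketch, not our exact file.
    # Docker Cloud runs this stack and uses the exit code of `sut` as the verdict.
    wsgi:
      build: .                 # the Django image under test
      expose:
        - "8000"
    sut:
      image: alpine:3.5        # any small image with a shell will do
      links:
        - wsgi
      command: >
        sh -c 'apk add --no-cache curl && sleep 5 &&
               test "$(curl -s -o /dev/null -w "%{http_code}" http://wsgi:8000)" = 200'

Docker Cloud takes the sut container’s exit code as the result: 0 passes the build, anything else fails it.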

Seeing it in action

Up to this point, we’ve set up the automatic building and testing of our Docker images. To test it, simply trigger the build by pushing a commit to master (or whatever other trigger you’ve defined). You’ll be able to follow the build from the repository’s Builds tab; if all goes well, you’ll see a nice green SUCCESS. If not, make sure you check the build logs.

Applications

Navigate to Applications -> Stacks in Docker Cloud. We’re going to make a stack defining our services.

Automated Deployment

To automate deployment, we first need to create a stack (a collection of services) that we actually want to deploy.

This docker-cloud.yml is practically identical to our docker-compose.yml, including just the service definitions. There are some incompatibilities though, so make sure you take a look at the docs. We’re defining two services: nginx as a reverse proxy, and wsgi with our Django project.
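
As a hedged sketch (the witei/* image names are hypothetical, and our real stack file carries more settings), the docker-cloud.yml could look something like:

    # docker-cloud.yml: minimal sketch of the two-service stack (names illustrative).
    wsgi:
      image: witei/wsgi:latest    # built and pushed by Docker Cloud as set up above
      autoredeploy: true          # same switch as the per-service UI toggle below
      expose:
        - "8000"
    nginx:
      image: witei/nginx:latest   # reverse proxy in front of wsgi
      ports:
        - "80:80"
      links:
        - wsgi

Note that autoredeploy can be declared in the stack file as well as toggled per service in the UI (shown below).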

Once you’ve created this stack, each service defined will have been created too (you can check in Services).

Docker Magic
Created services from stack file

Now, for each of the services, enter its detail view and edit the general settings. You’ll see options for Autorestart, Autodestroy, and Autoredeploy; make sure Autoredeploy is enabled.

Docker Cloud service options

Infrastructure

Navigate to Infrastructure -> Node Clusters in Docker Cloud. We’re going to define a node cluster to be able to deploy the stack we’ve just defined.

If you haven’t linked Docker Cloud to a cloud service provider, you should do that now. Alternatively, you can bring your own node, which means you can use an existing host. This option requires you to install the Docker Cloud Agent on your node and is a bit limited at the time of writing. We initially attempted to use our own host, which was already running docker-compose, but after breaking it a couple of times we opted for deploying a shiny new machine.

Nodes

Create a new Node Cluster with the wizard. This step is pretty straightforward; simply fill in the fields available for your provider. For Digital Ocean this means choosing the region and type/size; the disk size is directly tied to the type and size of the node in this case. (We’re going with the smallest droplet in this example, but there are many others available.)

Docker Cloud Node Cluster Wizard

Once created in Docker Cloud, make sure the node exists in your selected provider (getting it up might take a few minutes).

There’s our droplet!

You can also follow up from the Node Cluster Timeline.

tusitio Node Cluster Timeline

Deployment

Now that it’s all properly configured, all you have to do is deploy the stack!

Don’t be afraid, just hit it, we’ve got automated tests to stop it if anything goes wrong ;)

Downtime

Using autoredeploy with our current stack, we have virtually no downtime. There are various approaches you can take if you’re looking for zero downtime though, like using blue-green deployment.

More to Docker Cloud

We’ve covered a simple case in this article, but there are options to handle more complex structures. For instance, you can set up a hook to run a custom command between build phases, to collect a component that is not a dependency of your application.

And now you can even use Swarms in Docker Cloud (still in beta) to manage orchestration. You also get autoscaling, load balancing (haproxy is a popular choice), and so on.
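
As a taste of the last one, load balancing with the dockercloud/haproxy image is mostly declarative. Here’s a sketch of what adding it to a stack file might look like (the wsgi image name and container count are illustrative assumptions):

    # Sketch: Docker Cloud's haproxy image in front of a scaled wsgi service.
    # Illustrative only; see the dockercloud/haproxy docs for the full option list.
    lb:
      image: dockercloud/haproxy
      links:
        - wsgi
      ports:
        - "80:80"
      roles:
        - global   # lets haproxy query the Docker Cloud API to follow scaling events
    wsgi:
      image: witei/wsgi:latest    # hypothetical app image
      target_num_containers: 2    # haproxy balances across both containers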

— — — — —

Author: Guiomar Valderrama (check me out)

Also check out Witei on Twitter, Facebook, or LinkedIn.
