Patterns for Continuous Integration with Docker on Travis CI
Part 2 of 3: The “Docker repo” pattern
Update: part 3 of this series has been published.
In Part 1 we set up a Git repository with a Dockerfile and Travis file. At the end of it we had something like this:
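The code from part 1 isn’t reproduced here; roughly, the repository contained a Dockerfile and a Travis file along these lines (the image name is a placeholder, and pushing assumes a prior docker login):

```yaml
# .travis.yml (sketch of the part 1 setup; image name is illustrative)
sudo: required
services:
  - docker
script:
  # Build the image from the Dockerfile in the repository root
  - docker build -t acme-corp/hello-world .
deploy:
  provider: script
  # Push the built image to Docker Hub
  script: docker push acme-corp/hello-world
  on:
    branch: master
```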
This was very simple: we took a Dockerfile and built it into a Docker image using Travis CI. But where does the source code for our project fit in? Next, we’ll look at a pattern for building Docker images that integrate with existing source code repositories.
Before we start, let’s make one thing clear:
Docker is not a replacement for proper packaging of your software. Using a Docker image should not be the only way to get ahold of your software.
Let’s sketch out what a typical setup looks like that uses GitHub and Travis CI for building, testing, packaging, and releasing software:
- We want to deploy a piece of software; let’s call it cake-service.
- The source for this software is hosted in a GitHub repository called acme-corp/cake-service.
- The source of the GitHub repository is built and tested by Travis CI.
- When a new version of the software is released, the code is tagged with its new version in Git.
- When the tag is pushed to GitHub, Travis CI builds and packages the software.
- Travis CI does a release of the software by uploading the package to some kind of package repository or index.
Packaging the software could mean different things depending on the kind of software involved. For a Golang project, this could mean compiling the software into a binary. For some Python software, this could involve zipping the source code into a Wheel. The software is compiled or collected into a known package format.
Similarly, the way that the packaged software is released could vary from platform to platform. Some Ruby package could be uploaded to RubyGems. A Rust project could be uploaded to Cargo’s crates.io. The software package is uploaded to a package store or repository.
The way that a project is packaged and released will generally vary with the programming language in use. Before we even start using Docker we should already be using the packaging and release system best suited to the software at hand. Docker images can be adapted to leverage these systems, rather than replace them.
The “Docker repo” pattern
In this pattern we create a separate Git repository specifically for the release Dockerfile. This keeps the Docker-specific code, which could be considered a deployment detail, isolated from the actual software. Developers can continue working on the source software as usual, while the production Docker image is developed separately.
For our hypothetical acme-corp/cake-service repository, let’s create another hypothetical GitHub repository: acme-corp/docker-cake-service. This repository won’t have much in it to start with. It’ll be a bit like in part 1 of this series: just a Dockerfile and a Travis file.
For this example we’re going to pretend that cake-service is some Python software:
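The Dockerfile itself isn’t reproduced here; based on the description that follows, it would look something like this (the version number is a placeholder):

```dockerfile
# Production Dockerfile for cake-service (sketch reconstructed from this post)
# Start from a slim Python 3.6 base image to keep the production image small
FROM python:3.6-slim

# The released version of cake-service to install (placeholder value)
ENV CAKE_SERVICE_VERSION 1.0.0

# Install the released package from PyPI; --no-cache-dir keeps the image smaller
RUN pip install --no-cache-dir cake-service==$CAKE_SERVICE_VERSION

# Run the service when the container starts
CMD ["cake", "serve"]
```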
- This time we start with quite a specific image: python:3.6-slim. This means we get Python 3.6, and we use an image that has been trimmed to a smaller size (that’s the -slim). This is our production image, so it’s potentially going to need to be shipped all over the place; let’s make it as small and portable as possible.*
- We set the latest version of the software in an environment variable called CAKE_SERVICE_VERSION.
- Next, we install the cake-service package from the Python Package Index (PyPI). We do this using the canonical Python package manager, pip. (We install with --no-cache-dir in an effort to keep the image smaller.)
- We set cake serve as the command to be run when the container starts.
* This isn’t actually the smallest possible base image we could use. Some people use more esoteric base images such as Alpine Linux to achieve very small images. In most cases we don’t believe it’s worth heavily optimising for image size, particularly since any difference in the size of base images can be offset by sharing a single base image across multiple images.
Next, let’s see the Travis file:
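The Travis file isn’t reproduced here; based on the description that follows, it might look something like this (the awk expression and repository names are illustrative):

```yaml
# .travis.yml for acme-corp/docker-cake-service (sketch; names illustrative)
sudo: required
services:
  - docker

before_script:
  # Parse the version out of the Dockerfile's ENV line,
  # e.g. "ENV CAKE_SERVICE_VERSION 1.0.0" -> "1.0.0"
  - export VERSION=$(awk '/^ENV CAKE_SERVICE_VERSION/ {print $3}' Dockerfile)

script:
  - docker build -t acme-corp/cake-service .

before_deploy:
  # Tag the image with both "latest" and the parsed version
  - docker tag acme-corp/cake-service acme-corp/cake-service:latest
  - docker tag acme-corp/cake-service "acme-corp/cake-service:$VERSION"

deploy:
  provider: script
  # The deploy script runs a single command, so the two pushes are chained
  script: docker push acme-corp/cake-service:latest && docker push "acme-corp/cake-service:$VERSION"
  on:
    branch: master
```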
This looks a lot like the final example in part 1. The biggest change is the addition of versioning. In before_script we run an awk command to parse the value of the CAKE_SERVICE_VERSION environment variable to get the version of cake-service in the image.
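The exact awk invocation isn’t shown in the post; assuming the Dockerfile declares the version with a line like ENV CAKE_SERVICE_VERSION 1.0.0, it might be as simple as:

```shell
# Illustrative: extract the version from a Dockerfile ENV line with awk.
# Assumes a line of the form "ENV CAKE_SERVICE_VERSION 1.0.0".
echo 'ENV CAKE_SERVICE_VERSION 1.0.0' | awk '/CAKE_SERVICE_VERSION/ {print $3}'
# prints: 1.0.0
```

In a real build this would read the Dockerfile itself, e.g. `awk '/CAKE_SERVICE_VERSION/ {print $3}' Dockerfile`.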
Then, in the before_deploy section, we tag the image with both the latest tag and the version. Finally, in the deploy section we push both tags to Docker Hub.
Note that, for the script field in the deploy section, we can only execute one command without using an external script. As you can see, tagging and pushing take a few steps. We have developed a tool to get around this, which will be explained in part 3 of this series.
Let’s try to piece together the whole release workflow:
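The workflow diagram isn’t reproduced here; as we read it, the steps are roughly:

```
1. A new version of cake-service is tagged in the acme-corp/cake-service repository.
2. The tag is pushed to GitHub, triggering a Travis CI build.
3. Travis CI builds, tests and packages the software, then uploads it to PyPI.
4. (dashed) CAKE_SERVICE_VERSION is updated in the acme-corp/docker-cake-service Dockerfile.
5. Travis CI builds the production Docker image from the updated Dockerfile.
6. The image is tagged with both latest and the new version.
7. Both tags are pushed to Docker Hub.
```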
Two things to note here:
- That dashed line for step 4. This could be a manual step where somebody goes and updates the value of CAKE_SERVICE_VERSION in the production Dockerfile.
- The final step, step 7. At this point we have a production, versioned Docker image stored in Docker Hub. We can tell Docker Hub to fire a webhook when an image is pushed to a repository. Or we could hook this up to infrastructure that runs the container and deploy the new image automatically.
For these two points, we don’t have a turnkey solution for all projects. These kinds of links between the phases of a continuous integration workflow are often the most difficult to set up, but also the most important. Unfortunately, this is a limitation of the mix of free services we are using here. A more integrated CI service might make these steps easier, but it is likely to be less flexible, and probably not free.
Development Docker images
One thing to notice with the above workflow is that we’re only releasing a production Docker image. What about all the pre-release, development code that isn’t in a specific versioned release? Well, we can get Travis to build that too.
Let’s move back to the GitHub repository for the project source, acme-corp/cake-service. Here we’ll add a development Dockerfile. This will be a bit different from the production Dockerfile, as we won’t be installing the software from a versioned release. Here’s an example development Dockerfile:
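The development Dockerfile isn’t reproduced here; following the description, it might look something like this (the paths are illustrative):

```dockerfile
# Development Dockerfile for cake-service (sketch; paths are illustrative)
FROM python:3.6-slim

# Copy in the wheel built locally with `python setup.py bdist_wheel`
COPY dist/*.whl /tmp/

# Install the locally built package instead of downloading it from PyPI
RUN pip install --no-cache-dir /tmp/*.whl

# Run the service when the container starts, as in the production image
CMD ["cake", "serve"]
```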
We keep this Dockerfile very similar to the release Dockerfile, to increase the chances that any issues will show up in development before the software is released. The main difference is that we copy in a package that we have built locally, instead of downloading from PyPI. (We’ll get to building that package when we look at the Travis file.)
Let’s add one extra file this time around: a .dockerignore file. The .dockerignore documentation explains why this is a good idea and how the syntax differs from the conceptually similar .gitignore file. Here’s what ours looks like:
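The file isn’t reproduced here; based on the description that follows, it would be along these lines:

```
# Ignore everything by default...
*
# ...then selectively re-include the built wheel packages
!dist/*.whl
```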
We use the wildcard * to ignore all files by default. We then selectively include just the files we need: in this case, the built packages (.whl files) in the dist directory.

A .dockerignore file is particularly important in cases where you might be copying in the entire working directory, i.e. doing COPY . /myproject. Copying in the project’s .git directory is generally a bad idea. This directory is often very big, and performing almost any Git operation will cause files inside it to change, which will invalidate the Docker image layer cache.
Finally, let’s look at the Travis file. Remember, we’re looking at the Git repository for the source code, acme-corp/cake-service, not the repository for the production Dockerfile, acme-corp/docker-cake-service. So in this case we still need the standard build process for our Python software, and on top of that we add a separate task specifically to build the development Docker image.
The Docker-specific steps are added under the matrix: include: field and look like this:
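The snippet isn’t reproduced here; based on the description that follows, the added job might look something like this (names are illustrative):

```yaml
# Extra job added to the source repository's .travis.yml (sketch)
matrix:
  include:
    - sudo: required
      services:
        - docker
      install:
        # Build the source into a wheel so the Dockerfile can COPY it in
        - python setup.py bdist_wheel
      script:
        - docker build -t acme-corp/cake-service-dev .
      before_deploy:
        # No release version here: tag with the short SHA of the latest commit
        - docker tag acme-corp/cake-service-dev "acme-corp/cake-service-dev:$(git rev-parse --short HEAD)"
        # Tag with the branch name rather than "latest"
        - docker tag acme-corp/cake-service-dev "acme-corp/cake-service-dev:$TRAVIS_BRANCH"
      deploy:
        provider: script
        script: docker push "acme-corp/cake-service-dev:$(git rev-parse --short HEAD)" && docker push "acme-corp/cake-service-dev:$TRAVIS_BRANCH"
        on:
          # Development images are built for every branch, not just master
          all_branches: true
```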
Again, these steps should look familiar. There are a few differences:
- We need to build the source into a package. This is the python setup.py bdist_wheel command.
- We don’t have a version number, so we can’t tag with the version; instead we use the short SHA of the latest Git commit.
- We don’t push the latest tag; instead we tag the image with the name of the branch.
Here is the entire workflow:
For a full working example, see these links:
- GitHub repository with project source and development Dockerfile
- Travis CI builds for the source repository
- PyPI page for the Python package
- GitHub repository with production Dockerfile
- Travis CI builds for the production Docker image
- Docker Hub repository
- pyup.io page for the production Docker image
That’s it for part 2 of the series. This should provide you with enough information to set up a production-level workflow for building Docker images for your projects using Travis CI.
Check out part 3 of this series: Python tools for tagging & testing