Continuous Delivery with Drone CI

Sergey Kolodyazhnyy
9 min read · Mar 23, 2018

A few years ago I was looking for a simple CI system for one of my projects and stumbled on Drone. It was new and had a few rough edges, but I was won over by an open-source CI system that could be deployed as a single container of under 10 MB and was well integrated with Docker and GitHub.

In this article, I would like to explain a little bit how Drone works and share some tips I’ve learned over these few years. While I’m very excited about Drone, its ideas, and its implementation, it’s really not the only CI system out there with such capabilities. And even though you can build quite complicated pipelines in Drone, I still find myself using Jenkins sometimes. My goal is really to get you to consider container-based CI systems, and Drone in particular.

Drone

Drone consists of two main parts: the Server and one or more Agents. The Drone Server is the master part, a central piece which serves the user interface and exposes the API. A Drone Agent is a worker: it pulls jobs from the Drone Server, executes them, and pushes the results back. You can scale the system by adding more agents, so it can handle more jobs.

It’s interesting to note that a Drone Agent only runs a single job at a time. So if you want to run multiple jobs simultaneously, you should set up more than one agent. This approach helps keep things simple and improves fault tolerance: if an agent fails, it only affects a single job.

Another interesting thing to know: a Drone Agent is completely stateless. It’s designed to fetch everything it needs for a build from somewhere else: a Docker registry, a git repository, remote storage, etc. This means spinning up a new agent is very fast and does not require any special provisioning or preparation.
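Because agents are stateless, adding capacity is just a matter of starting another agent container. As a rough sketch (the exact environment variables depend on your Drone version — this follows the 0.8-era setup, and the server address and secret are placeholders, so check the installation docs for your version):

```shell
# start one more agent; it registers with the server and begins pulling jobs
docker run -d --name=drone-agent-2 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e DRONE_SERVER=drone.example.com:9000 \
  -e DRONE_SECRET=<shared-secret> \
  drone/agent:0.8
```

Tearing an agent down is equally uneventful: since it holds no state, stopping the container loses nothing.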

There is also drone-cli, which provides a command-line interface to the Drone Server API, as well as some other useful commands.

Build Process

The idea behind Drone’s build process is very simple, yet incredibly flexible. Every step in the build runs inside a container, using an image with the tools required to execute that step. These containers share a workspace volume, so things you build in one step are available in the next.

Very simplified, the build process looks something like this:

mkdir -p /workspace
for step in $steps; do
  docker run --rm -v /workspace:/workspace \
    --workdir /workspace $step_image $step_cmd
done
rm -rf /workspace

For example, a typical pipeline for a Go project would be:

  • Create a git container and execute git clone to check out the source code into the workspace
  • Create a golang container and execute go test and go build to run tests and build the executable
  • Create a docker container and execute docker build and docker push to build and publish the Docker image
  • Create a kubectl container and execute kubectl apply to deploy the project to the Kubernetes cluster

Normally, every step uses a container image with just the tools required to run that step’s commands. Small, single-purpose images help keep things reusable and maintainable.

You are free to use any Docker image in build steps, including images from third-party Docker registries. Anything you can put in a container can be used during a build, whether it’s an existing application or something you write yourself.

It’s important to mention that there is no shared storage between builds: the workspace is destroyed after the build is complete, and the container for each step is destroyed once the step finishes. If you want to share files between builds, you can instead use remote storage (e.g. Google Cloud Storage or AWS S3) and additional steps in your pipeline to fetch or push data. It may look like a burden, but it makes your agents completely stateless, so you can scale them up and down as much as you want and not lose a single bit of data.
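As an illustration, a pair of cache steps built on remote storage might look like the sketch below. Everything here is an assumption rather than a Drone convention: the bucket name is a placeholder, mesosphere/aws-cli is just one of many images that ship the AWS CLI, and your real pipeline would put its build steps between the two.

```yaml
pipeline:
  restore-cache:
    image: mesosphere/aws-cli
    secrets: [ aws_access_key_id, aws_secret_access_key ]
    commands:
      # pull cached dependencies; tolerate a cold cache on the first build
      - "aws s3 cp s3://my-bucket/myapp/vendor.tar.gz . && tar -xzf vendor.tar.gz || true"

  # ... build and test steps go here ...

  save-cache:
    image: mesosphere/aws-cli
    secrets: [ aws_access_key_id, aws_secret_access_key ]
    commands:
      # push the refreshed cache for the next build to pick up
      - "tar -czf vendor.tar.gz vendor"
      - "aws s3 cp vendor.tar.gz s3://my-bucket/myapp/vendor.tar.gz"
```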

Getting started

First, you are going to need a Drone instance running somewhere on your infrastructure. Fortunately, the installation process is extremely simple and well described in the documentation. There is also a SaaS version, but at the moment it’s in closed beta.

Then you need to describe your build pipeline in a .drone.yml file in the root of your repository. Here is an example of the pipeline described above:

# pipeline describes build steps
pipeline:
  # every entry in the pipeline describes a single build step
  # you can use any name for the step
  test:
    # image used to create a container
    image: golang:1.8-alpine
    # commands executed in the container
    commands:
      - "go test"

  build:
    image: golang:1.8-alpine
    # you can additionally specify environment variables
    environment:
      CGO_ENABLED: "0"
      VERSION: "${DRONE_TAG}"
      COMMIT: "${DRONE_COMMIT_SHA:0:7}"
    commands:
      - "go build -o ./myapp"

  publish:
    image: plugins/docker
    repo: skolodyazhnyy/myapp

You may notice there is no step for checkout. This step is automatically added by Drone to simplify configuration, but it still runs in the same fashion as the rest of the steps.

Once you draft your pipeline, you need to enable the build in the Drone interface, and that’s it. Drone will automatically run a build on every push, tag, or pull request.

Enable Build in the Repository list

Check out official documentation to get more detailed information about .drone.yml parameters.

Tips and Tricks

Here are just a few tips I learned while using drone.

Drone Exec

First, there is an amazing feature of drone-cli: a command which allows you to execute a build using your local Docker instance.

Simply run drone exec on your local machine in the folder where .drone.yml is located, and drone-cli will create a batch of containers and run the build steps in them. There are a number of command-line arguments to simulate different build conditions and parameters. It’s a perfect tool to debug your pipeline or test changes before committing them to the repository.

Custom Workspace

As mentioned before, the only thing shared between steps is the workspace. Drone creates an empty folder before every build and then mounts it into every step’s container.

There is a configuration option which allows specifying a custom location where the workspace folder is mounted, and which working directory should be used. By default, the workspace is mounted into /workspace and the working directory is set to /workspace as well. You can change these parameters in the workspace section of .drone.yml. Here is an example:

workspace:
  # this is the location where workspace will be mounted
  base: /app
  # this is a work dir path, relative to the base path above
  path: src

The configuration above will mount the workspace into /app and use /app/src as the working directory. Having the working directory as a subfolder gives you a bit more shared space between build steps, because everything you create in /app will be available in all containers.

This is especially handy for Go projects: using a custom workspace configuration, you can clone your source code to the proper location in GOPATH.

workspace:
  base: /go
  path: src/github.com/skolodyazhnyy/myapp

This way all imports will pick up proper code.

Run Additional Services

Many applications have integration tests which require an external system, for example a database server or a message queue. Drone allows you to run additional containers (services) alongside your build steps. These containers start at the beginning of the build and are destroyed after the build is complete. You can describe them in the services section of .drone.yml.

services:
  rabbitmq:
    image: rabbitmq:3
  redis:
    image: redis

These containers can be accessed using their names, just like when you link Docker containers using the --link option. For example, to connect to the RabbitMQ server above, you would use the hostname rabbitmq.
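For instance, a test step might pass those hostnames to the application under test. The variable names below are whatever your own test suite happens to read — they are assumptions for illustration, not Drone conventions:

```yaml
integration-test:
  image: golang:1.8-alpine
  environment:
    # service containers are reachable by their service name
    AMQP_URL: "amqp://guest:guest@rabbitmq:5672/"
    REDIS_ADDR: "redis:6379"
  commands:
    - "go test -tags integration ./..."
```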

Publish to GitHub

This one is useful for CLI applications published using GitHub releases. You can set up a pipeline to create a GitHub release and attach compiled binaries to it every time somebody tags a commit in git.

First, you need to create build steps to compile your binaries. These steps will be different for every language, but for Go they may look something like this:

build-linux:
  image: golang:1.8-alpine
  environment: { GOOS: "linux" }
  commands: [ "go build -o myapp-linux" ]
  when:
    event: [ tag ]

build-darwin:
  image: golang:1.8-alpine
  environment: { GOOS: "darwin" }
  commands: [ "go build -o myapp-darwin" ]
  when:
    event: [ tag ]

I’m using step conditions here, so the binaries are built only when somebody creates a tag.

Then, I use github-release to create the release and publish my binaries.

publish:
  image: socialengine/github-release
  environment:
    GITHUB_RELEASE_VERSION: v0.7.2
  commands:
    - "github-release release --user skolodyazhnyy --repo myapp --tag ${DRONE_TAG} --name ${DRONE_TAG}"
    - "github-release upload --user skolodyazhnyy --repo myapp --tag ${DRONE_TAG} --name myapp-linux --file ./myapp-linux"
    - "github-release upload --user skolodyazhnyy --repo myapp --tag ${DRONE_TAG} --name myapp-darwin --file ./myapp-darwin"
  secrets: [ GITHUB_TOKEN ]
  when:
    event: [ tag ]

It’s quite straightforward: you create a release for a given tag and attach binaries to it. You can also automatically generate a changelog and attach it to the release description.

Now you can release using git tag 1.0.0 && git push upstream 1.0.0.

Publish to Docker

Publishing to Docker is quite straightforward too. You can use Drone Docker plugin which will run docker build and docker push.

publish:
  image: plugins/docker
  repo: skolodyazhnyy/myapp
  tags: [ "${DRONE_COMMIT_SHA:0:7}" ]
  secrets: [ docker_username, docker_password ]
  when:
    event: [ push ]
    branch: master

This step will run every time somebody pushes to master; the Docker plugin will create a Docker image and tag it with the first 7 characters of the commit hash.

To get it working with a private registry, you will need to create the secrets docker_username and docker_password with your credentials.
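Secrets can be created through the Drone UI or with drone-cli. In the 0.8-era CLI the commands look roughly like this — flags may differ between versions, so treat this as a sketch and check `drone secret add --help`:

```shell
drone secret add --repository skolodyazhnyy/myapp \
  --name docker_username --value skolodyazhnyy
drone secret add --repository skolodyazhnyy/myapp \
  --name docker_password --value <password>
```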

Publish an RPM

A few projects I have been working on were deployed using RPMs. Surprisingly, it turns out to be very easy to automate this release process with Drone. All that’s needed is an image with CentOS and the RPM build tools, some RPM spec files, and an image with an SSH client (used to upload the RPM to the yum repository).

First, we need to add the RPM spec files and other related files somewhere in the repository.

Then, we add a step which builds the RPM package.

build-rpm:
  image: .../rpmbuilder:6
  environment:
    VERSION: "${DRONE_TAG}"
    COMMIT: "${DRONE_COMMIT_SHA:0:7}"
    CENTOS_VERSION: el7
  commands:
    - "./rpm/build.sh"
  when:
    event: [ tag ]

The script used to build the RPM, ./rpm/build.sh, just moves a few files around to prepare the proper file structure for RPM and then runs rpmlint and rpmbuild.

Finally, we add a step to upload the RPM to the yum repository.

upload-sftp:
  image: .../openssh
  secrets: [ private_key ]
  commands:
    - "scp .../myapp.el7.x86_64.rpm yum@repo.com:.../RPMS/"
    - "scp .../myapp.el7.src.rpm yum@repo.com:.../SRPMS/"
  when:
    event: [ tag ]

The image used here is just Alpine Linux with openssh and a little script which creates a private key from an environment variable when the container starts.

Build Promotion

This feature allows you to promote certain builds to a further deployment environment. For example, after every push your application gets deployed to a staging environment, but at some point you want to deploy a particular build to production. This can be easily achieved using build promotion.

This feature is available through drone-cli. You add step conditions to the steps required to deploy something to production.

deployment-production:
  image: .../kubectl
  secrets: [ kube_credentials ]
  commands:
    - "kubectl apply -f k8s/deployment.yml"
  when:
    event: [ deployment ]
    environment: production

Then use drone-cli to trigger the production deployment. You need to specify the repository name, the number of the build you want to promote, and the environment to promote it to.

drone deploy skolodyazhnyy/myapp 151 production

Running steps in parallel

You can speed up your build process significantly by running some steps in parallel. To do so, you just need to put the steps that can run in parallel in the same group.

build-linux:
  group: build
  image: gobuilder
build-darwin:
  group: build
  image: gobuilder
upload-github:
  group: upload
  image: github-release
upload-ftp:
  group: upload
  image: openssh

This pipeline will first run the two steps that belong to the build group in parallel. Then, once both have finished, Drone will run the two steps from the upload group.

Secrets Interpolation

Secrets are meant to be used for sensitive information that needs to be injected into certain build steps. For security reasons, you need to explicitly specify which secrets are available to which steps. For example, you can expose the secret my_private_key to a deploy step like this:

deploy:
  image: ...
  secrets: [ my_private_key ]

This means that during step execution you can access an environment variable MY_PRIVATE_KEY holding the value of the secret. But here is a little catch: unlike other variables, secrets cannot be interpolated into the build step definition:

deploy:
  image: ...
  environment:
    KEY: "${MY_PRIVATE_KEY}" # this does NOT work
  secrets: [ my_private_key ]
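Since the secret does exist as an environment variable while the step runs, one workaround is to let the shell expand it inside commands rather than have Drone interpolate it. This is a sketch: deploy.sh is a hypothetical script, and it relies on $$ being Drone's escape for its own interpolation:

```yaml
deploy:
  image: ...
  secrets: [ my_private_key ]
  commands:
    # $$ stops Drone from interpolating; the shell expands the variable at run time
    - "./deploy.sh $${MY_PRIVATE_KEY}"
```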

Hope it saves you a bit of time.

Git Submodule Override

Cloning is executed first and automatically checks out the proper commit for every push or pull request, along with any submodules. The default configuration works pretty much always, but in rare cases you may need to change a few things about how cloning works.

One of these is submodule overrides. Drone uses .netrc to authorize with GitHub, which means it clones repositories using the HTTPS protocol. So if your submodules are configured to use SSH, you will probably see a “Permission denied” error when Drone attempts to clone them. You can easily fix this by defining a submodule override, like this:

clone:
  git:
    submodule_override:
      "puppet": https://github.com/skolodyazhnyy/myapp-puppet.git

This overrides the git clone URL for the submodule puppet to use the HTTPS protocol instead of SSH.

Check out documentation for more cloning options.


Software Engineer at Adobe, Golang and Kubernetes enthusiast and evangelist.