Developer’s review of CircleCI 2.0

Lessons learnt while creating a complex delivery pipeline with CircleCI 2.0.

Marta Tatiana
Jan 19, 2020
CircleCI Wallpaper by Ricardo Feliciano, https://www.flickr.com/photos/felicianotech/33219284208

Recently I had an opportunity to create a pretty complex delivery pipeline with the CircleCI 2.0 service. CircleCI 2.0 is a CI/CD platform which comes in two flavors: cloud and server. I worked with the cloud version, which means that I got access from the account admin, logged in to the service and was ready to go.

The task at hand was to build and deploy a serverless application consisting of a few AWS Lambda functions, and then run a bunch of integration tests. I needed to deal with practically every AWS Lambda runtime there is (well, except for Go) and with AWS Serverless Application Model templates, so there was a pretty wide and wild range of tools I needed. Luckily, there was no preexisting automation or tooling, so I could come up with any solution I wanted.

Here are some lessons learned from my struggle with the CircleCI service and what I liked vs. what I didn’t like:

1. CircleCI CLI — Verdict: so so

CircleCI provides a CLI which, supposedly, can be used to run pipelines locally (provided all your jobs use Docker executors). I tried that once, but the tool hung, so I lost interest in running the workflows locally. I did use it to validate CircleCI's config, and it spat out informative error messages, proving itself quite useful in that case. That alone saved me many runs in the cloud.
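For reference, the two CLI subcommands in question look roughly like this (a sketch; exact flags may differ between CLI versions, and the job name is just an example):

```shell
# Check the config for syntax and semantic errors without pushing a commit
circleci config validate

# Try to run a single job locally in a Docker container; this is the mode
# that hung for me, so your mileage may vary
circleci local execute --job build_and_test
```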

2. Docker executors — Verdict: nice!

I needed to deal with .NET, Java, Ruby, Node.js and Python, so Docker executors seemed to be the only thing that could rescue me. As opposed to machine executors, which run jobs in virtual machines, Docker executors run jobs in Docker containers. Images are specified in the configuration file using the image key. Docker executors spin up with impressive speed (CircleCI docs say it is "Instant").
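For illustration, a job using a Docker executor is declared per job, roughly like this (the job name, image tag and steps are my own examples, not from the original pipeline):

```yaml
jobs:
  build_python_lambda:
    docker:
      - image: circleci/python:3.7  # the primary container; all steps run in it
    steps:
      - checkout
      - run: pip install -r requirements.txt
```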

I did not have a Docker registry I could use, so even though I had some concerns about performance, I decided to build the images “on the fly”. That is, one step in the job built the Docker image, and the next one ran it, without explicitly storing the intermediate image anywhere.
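Sketched as config, the "on the fly" approach amounts to two consecutive steps in one job (the Dockerfile path, image tag and script name are illustrative):

```yaml
steps:
  - checkout
  - setup_remote_docker  # required before using the docker CLI in a Docker executor
  - run:
      name: Build the image on the fly
      command: docker build -t build-image -f dockerfiles/build.Dockerfile .
  - run:
      name: Run the build inside the freshly built image
      command: docker run build-image ./build.sh
```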

What I really like in this approach is having the Dockerfiles in the same repo as .circleci/config.yml (the one and only file holding CircleCI's workflow config), which means that the next person who needs to update things (for example, to build a version for a new Lambda runtime provided by AWS) can do it all in the same project. Even better, chances are they will not actually need to, which brings me to the next point…

3. Pre-built Docker images — Verdict: a killer feature!

CircleCI maintains a nice set of pre-built Docker images. In addition to the main language-dependent runtime, such as Ruby or Java, they install a well-calibrated list of tools, such as git, curl, zip and wget. I believe the images that CircleCI provides are sufficient for many use cases, which lightens the burden of maintaining images yourself. A very nice feature!

4. Isolated Docker Engine — Verdict: very convenient for some use cases

Did you get anxious reading the previous points about Docker? I used Docker executors (which themselves run jobs inside Docker containers) to build and run Docker images, and running Docker in Docker is not a good idea, for reasons explained here. To address this, CircleCI has a setup_remote_docker feature, which spins up an isolated environment in which you can safely build and run Docker images, even when using Docker executors. It also comes with a docker_layer_caching switch, which makes unchanged layers accessible between jobs in the remote environment. That makes subsequent jobs using the same image really fast!
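A reusable command that enables the remote environment with layer caching can be defined like this (wrapping it as a command is optional; the command name is one I chose myself):

```yaml
commands:
  setup_remote_docker_with_caching:
    steps:
      - setup_remote_docker:
          docker_layer_caching: true
```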

This solution, however, does not suit one use case: when you need to retrieve a file produced by your container in the remote environment. There is no good solution, so I came up with a dirty trick. The container printed the output file to its standard output as the last step of its execution. Since the standard output is captured by the primary Docker container (the one used by the executor), the captured output can be redirected to a file. See details here.
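A sketch of the trick (the image name and file path are illustrative): the container ends its run by printing the file with something like cat, and the job step redirects the whole docker run output:

```yaml
- run:
    name: Capture the container's output file via stdout
    command: docker run app-image cat /out/report.json > report.json
```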

5. One file to rule them all — Verdict: how come you haven’t solved this yet?!

My build pipeline proved to be pretty complicated. The config.yml holding CircleCI's config kept growing and growing, despite me taking advantage of every yaml and CircleCI syntax feature I could find. I really pushed for elegant and concise syntax:

  • I used yaml anchors and aliases to define common data once and then just reference it in other parts of the yaml file (that works well for build settings and credentials). A nice explanation of using yaml anchors and aliases in the similar case of a Docker Compose yaml file can be found in this article.

```yaml
app_image_settings: &app_image_settings
  image_name: app-image
  env_file: prod/app.env
```
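The anchor can then be pulled into another mapping with an alias and the yaml merge key; for example (the deploy_settings key and deploy.sh script are just for illustration):

```yaml
deploy_settings:
  <<: *app_image_settings  # expands to image_name and env_file from the anchor
  script: deploy.sh        # plus any keys specific to this mapping
```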
  • I defined many parameterized commands, in order to avoid defining multiple jobs which would differ only in the name of the Docker image and the script to be run in it. I tried to keep all the variety inside my Docker images and said .sh scripts. The snippet below shows a sample definition of a run_docker_with_prod_credentials command, which has 3 parameters: variables_file, image_name, and script. The command has a single step, which runs a script in a Docker image, both of which have to be passed as command parameters. The << parameters.script >> bit is a reference to a parameter named script; a parameter which does not have a default value becomes a required one.

```yaml
run_docker_with_prod_credentials:
  parameters:
    variables_file:
      type: string
    image_name:
      type: string
    script:
      type: string
  steps:
    - run:
        name: << parameters.script >>
        command: >
          docker run --env-file << parameters.variables_file >>
          --env PASSWORD=${PASSWORD}
          << parameters.image_name >> ./<< parameters.script >>
```

This approach allowed most of my jobs to look quite generic, like the snippet below. Here we have a complete job named publish_app, executed in a Docker image provided by CircleCI: node:10.16.3. The job first checks out the repository for which the whole build is defined. The setup_remote_docker_with_caching command is a custom command I defined elsewhere in the configuration; it is simply a setup_remote_docker step with docker_layer_caching enabled (I defined it as a command just for the sake of brevity). Then there are two more custom commands, which take predefined settings stored in the app_build_settings and app_settings references. The run_docker_with_prod_credentials step needs one more parameter, script, which is merged with the app_settings map.

```yaml
publish_app:
  docker:
    - image: circleci/node:10.16.3
  steps:
    - checkout
    - setup_remote_docker_with_caching
    - attach_workspace:
        at: app/data
    - build_docker_image:
        <<: *app_build_settings
    - run_docker_with_prod_credentials:
        <<: *app_settings
        script: publish_applications.sh
```
  • Another thing which added to the length of the config.yml file was that I wanted some of my jobs to require manual approval. With this being a pretty common requirement for CI/CD pipelines, I expected it to be expressible as a simple job setting, like this:

```yaml
workflows:
  jobs:
    - run_this_cool_job:
        requires_manual_approval: true
```

It turned out, however, that in CircleCI's syntax, a manual approval is a special type of job, which needs to be listed as a separate step. The subsequent job must then require it to be completed, in the following way:

```yaml
workflows:
  jobs:
    - manual_approval:
        type: approval
    - run_this_cool_job:
        requires:
          - manual_approval
```

With many manual steps in a pipeline, this gets pretty verbose: one additional "artificial" job for every actual job that you want to gate. It may not be much, but in an already too-long yaml file, every inefficiency of this kind annoys.

I am really disappointed that all CircleCI config needs to live in a single yaml file; the possibility to split it into smaller parts seems like a must-have feature. One could imagine support for a directory layout which would allow defining constants, commands, jobs and perhaps even workflows in separate files. CircleCI does have a concept of Orbs, essentially libraries for sharing common steps between pipelines, and using them could mitigate my problem. I decided not to give them a try, as that would require investing time in a task very specific to the CircleCI service, which I am not guaranteed to use in the future. I relied heavily on Docker containers and shell scripts executed inside them in order to avoid vendor lock-in as much as possible; using Orbs would go against this philosophy.

6. The UI — Verdict: keep working!

CircleCI’s UI is a work in progress, being gradually switched to a new experience. It was difficult for me to navigate back and forth between some screens, especially when they switched from the old to the new version. I am waiting for the final result and hoping the inconveniences of this intermediate stage go away.

In summary…

Altogether, I had a lot of fun working with the CircleCI service. I encountered some quirks and inconveniences, but, on the other hand, I was trying to accomplish a pretty complex and specific task. My guess is that had I been working on more standard things, it would have gone much smoother, as the predefined images and the library of existing Orbs could already cover most of my needs. I keep my fingers crossed for CircleCI's growth and development, for the sake of fast and easily configurable builds for us all.


Marta Tatiana

programmer. I write to learn. All opinions are private and do not reflect views of my employer, past or present.