Continuous integration and delivery at Aircall

Damien Duhamel · Inside Aircall · Mar 13, 2019

Even though most of our processes follow common software practices, we wanted to share a few things we've learned at Aircall about continuous integration. Before I proceed, let's clarify that our stack is powered by CircleCI, GitHub, and Docker Hub. All examples in this post are based on these platforms, but you can easily adapt them to other tools such as Jenkins, Travis CI, GitLab or Bitbucket.

How we improved the way we ship code by using workflows

Workflows orchestrate a set of jobs and support several rules, as sketched below:

  • Job requirements, so a job only starts once the jobs it depends on have succeeded.
  • Fan-out to run multiple jobs in parallel.
  • Fan-in to wait for several parallel jobs before moving on.
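
As an illustration, a workflow combining these rules might look like the following sketch (the job names are just examples):

workflows:
  build-test-deploy:
    jobs:
      - install-dependencies
      # Fan-out: lint and unit tests run in parallel once dependencies are installed.
      - lint:
          requires:
            - install-dependencies
      - unit-tests:
          requires:
            - install-dependencies
      # Fan-in: build only starts once both parallel jobs have succeeded.
      - build:
          requires:
            - lint
            - unit-tests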

The most interesting part is the triggers you can register to start your workflows; we mainly use three kinds of triggers. Let's see how each of them has improved our code delivery.

Push to a branch: Code review and sharing with non-technical members

Each commit pushed to a branch automatically launches jobs. This has considerably improved our code reviews:

As you can see, all of our pull requests are opened with a list of checks (linter, unit tests, …). We also have a Pull Request Checker to make sure we respect commit conventions and other team guidelines. All those checks help us stay focused on what really matters: code quality.

Designers are included in each pull request validation process thanks to a direct link to Storybook, which lets them check that the implementation is in line with the mockups. A branch preview is also deployed with Netlify for each pull request to give our product team a way to test the functional scope.
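
One way to wire this up, purely as an illustration (the build and deploy commands depend on the project and on how Netlify is set up), is a dedicated job in the push-triggered workflow:

jobs:
  deploy-preview:
    docker:
      - image: cimg/node:lts  # any Node image works here
    steps:
      - checkout
      - run: yarn install
      # Build the static Storybook used by designers for review.
      - run: yarn build-storybook
      # Publish a preview deploy on Netlify so the product team can test the branch.
      - run: npx netlify-cli deploy --dir=storybook-static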

Create a git tag: The way we release our code

Like most VCSs, Git has the ability to tag specific commits in a repository's history, and this functionality is commonly used to mark release points. CircleCI triggers can be registered to run when a git tag is pushed to the origin server. This is especially useful with GitHub releases, which are built on top of git tags and add some metadata around them. Thanks to that, it's possible to have a clean changelog, following semantic versioning, automatically generated by the CI and centralized in GitHub releases.

A changelog automatically generated by our CI during the release process.
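
In the CircleCI configuration, registering such a tag trigger boils down to filters on the workflow's jobs. A minimal sketch (the job name is invented for the example):

workflows:
  release:
    jobs:
      - build-and-publish:
          # Only run for tags like v1.2.3, never for regular branch pushes.
          filters:
            tags:
              only: /^v.*/
            branches:
              ignore: /.*/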

It also helps us avoid relying on a single person. For example, without a CI, building our desktop application for a release can be quite painful: you have to install the stack locally, build the project, not to mention the keys and other certificates that might be needed, and then upload it… Now, the only thing that needs to be done is to go to the GitHub releases of the project and click on “Draft a release”.

Scheduled jobs: Test our entire stack periodically

Some specific jobs don't need to run on every single commit. Instead, you can schedule jobs to run at a certain time for specific branches. For example, we have integration tests running across our entire stack that can't be run when we push a commit, because they span several repositories and third-party applications. So we chose scheduled workflows that run every 30 minutes to test the core features of our stack.
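
A scheduled workflow is declared with a cron expression. Here is a rough sketch of what this can look like (workflow, job and branch names are examples):

workflows:
  stack-integration-tests:
    triggers:
      - schedule:
          # Every 30 minutes, on master only.
          cron: "0,30 * * * *"
          filters:
            branches:
              only:
                - master
    jobs:
      - integration-tests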

In the same way, we have performance tests powered by Puppeteer that run periodically in our front-end projects. We first thought about running them before each release or in the project's CI. However, performance may be degraded by elements outside the project: a new browser version, a third-party script, our back-end response times or payload sizes… So the best solution was to run them on a regular basis.

Improve, speed up and standardize

At Aircall, we have tens of projects running their own CI, so we were facing several issues:

  • A lot of jobs, with growing execution time and cost.
  • Different practices and a lack of standards.
  • Security, especially around credentials.

Here are a few tips & tricks we found to address these issues:

Executors & dockerization

Executors define the environment (a Docker image, for instance) in which the steps of a job will run, allowing you to reuse a single executor definition across multiple jobs. There are a lot of benefits to using executors.

First, it cleans up your CI configuration by removing repetitive tasks from workflows. All your dependencies (the AWS CLI, for instance) can come directly with your Docker image instead of having to be installed in each job.

Secondly, using a pre-built Docker image significantly speeds up the setup of your jobs. To build Aircall's Windows application on CircleCI, a lot of dependencies need to be installed (like WineHQ or Mono). Instead of installing them in every run, we simply build a Docker image with all the needed dependencies inside.

Before on top / after on bottom: ~75% faster and no more errors during the dependency setup
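
For illustration, here is a minimal sketch of what such an executor can look like, backed by a pre-built image (the image name and the build command are hypothetical):

version: 2.1

executors:
  windows-builder:
    docker:
      # Hypothetical image with WineHQ, Mono and the other build dependencies
      # baked in and published to Docker Hub ahead of time.
      - image: aircall/windows-builder:latest

jobs:
  build-windows-app:
    executor: windows-builder
    steps:
      - checkout
      - run: yarn build:windows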

Cache & workspaces

A unique workspace is created for each workflow. It is used to transfer files to downstream jobs as the workflow progresses. Using workspaces allows us to clearly split a workflow into distinct jobs:

  1. Checkout codebase and install the dependencies.
  2. Run unit tests.
  3. Build / compile application.
  4. Compute the changelog.
  5. Deploy the built assets from step 3, upload the coverage report from step 2 and create a GitHub release with the changelog from step 4.

Having each job scoped to a single task makes it reusable in different workflows and adds modularity. CircleCI also lets you add parameters to jobs, which makes them even more customizable: the exact same job can deploy your project to staging, beta and production, as sketched below.
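
Here is a rough sketch combining both ideas, a workspace passed from a build job to a parameterized deploy job (image, script and parameter names are examples):

version: 2.1

jobs:
  build:
    docker:
      - image: cimg/node:lts  # any Node image works here
    steps:
      - checkout
      - run: yarn build
      # Persist the built assets so downstream jobs can reuse them.
      - persist_to_workspace:
          root: .
          paths:
            - build
  deploy:
    # One parameterized job covers staging, beta and production.
    parameters:
      environment:
        type: string
    docker:
      - image: cimg/node:lts
    steps:
      - checkout
      # Reattach the files persisted by the build job.
      - attach_workspace:
          at: .
      # Hypothetical deploy script taking the target environment as argument.
      - run: ./scripts/deploy.sh << parameters.environment >>

workflows:
  build-and-deploy:
    jobs:
      - build
      - deploy:
          name: deploy-staging
          environment: staging
          requires:
            - build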

Unlike workspaces, which persist data vertically within a single workflow, the cache shares data horizontally between workflows. The perfect use case is caching project dependencies: once a project is set up, its dependencies shouldn't change often. The idea is simply to cache the gems folder for a Ruby project or the node_modules folder for a JavaScript one, using a checksum of the `Gemfile` or `package.json` as the cache key:

jobs:
  install-dependencies:
    docker:
      # Any Node image works here; this one is just an example.
      - image: cimg/node:lts
    steps:
      - checkout
      # Restore a potential cache.
      - restore_cache:
          key: node-modules-{{ checksum "package.json" }}
      # Install dependencies.
      - run: yarn
      # Cache the dependencies folder.
      - save_cache:
          key: node-modules-{{ checksum "package.json" }}
          paths:
            - node_modules

We saved about 3 minutes on every job of a front-end project by caching the node_modules folder.

Orbs

Orbs are packages of config containing the following elements: commands, jobs and executors. You can find a lot of orbs published by well-known platforms: Slack has an orb to push notifications, for example, and AWS has one for working with their aws-cli. They are perfect when you are doing something standard without too much configuration and don't want to reinvent the wheel:

version: 2.1

# Declare used orbs
orbs:
  aws-cli: circleci/aws-cli@0.1.11

jobs:
  invoke-lambda:
    # Use the orb's executor
    executor: aws-cli/default
    steps:
      # Use orb commands
      - aws-cli/install
      - aws-cli/configure
      - run: aws lambda invoke --function-name my-lambda ...

Shared scripts

A lot of our repos each had their own scripts to do similar tasks (compute a changelog, upload a release to GitHub…). These scripts were similar yet subtly different, so we had to know every little variation from one project to another, and when a script was improved or fixed in one project, the others didn't benefit from it. To deal with this issue, we gathered JavaScript scripts into an npm package that can be executed with npx. Here is an example showing how convenient it is to create a GitHub release with an automatically generated changelog:

jobs:
  release:
    steps:
      - run: npx ci-scripts release

Security

One key point about security is to avoid having credentials in the codebase. By using workflows with scoped jobs, executors with Docker images, orbs and shared scripts, you force the building blocks of your continuous integration to be customizable and operational in every environment. For this to work properly, you have to put almost everything in environment variables.
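
For instance, secrets can be provided through a CircleCI context attached to a job in the workflow (the context name below is just an example):

workflows:
  release:
    jobs:
      - release:
          # Credentials (GitHub token, AWS keys, …) come from a shared
          # CircleCI context as environment variables; "org-secrets" is
          # only an example name, nothing is committed to the repository.
          context: org-secrets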

So now you know how our continuous integration works, heavily powered by GitHub, CircleCI, and Docker. It took some time to set up, but we are now saving a lot of time daily, and we are always trying to find ways to make the entire flow a bit more seamless. However, continuous integration isn't just a way to save time and reduce the repetitive tasks we all hate. It allowed us to improve our code reviews by staying focused on code quality and by providing tools that let non-technical people test developers' work. Continuous integration is also a good way to reduce human error thanks to task automation. Using continuous feedback mechanisms, we can quickly iterate and bring our product to market faster. Integration bugs are detected early and are easy to track down thanks to small change sets. This saves both time and money and creates a better product for our customers, which is the real promise of agility.

PS: Want to join our great team? We're hiring! Apply now on https://aircall.io/jobs
