Why Your Startup Should Begin With Continuous Delivery

I started my company with the luxury of fresh projects and a newfound desire for DevOps utopia. Now I want to share my thoughts on why you should absolutely start with Continuous Delivery from day one, rather than hoping to get there once you have a product.

I’m not coming to this from a world of experience in DevOps, or even in Continuous Integration, but over the last few years I’ve been educated in CI (Continuous Integration) and CD (Continuous Delivery), and I’ve become steadily hungrier to understand them more deeply. If you are beginning your journey as a software startup, I strongly recommend you adopt these practices at the very beginning and ingrain them in your culture. You’ll thank yourself for it later.

In my previous article, I wrote about my journey to a modern stack and spoke a little about the tools we use at Relative. In this article I’d like to look specifically at our development process, and how we made Continuous Delivery a cultural norm from day one.

Relative, at this stage, is two full-time developers. That makes this easy. But the process I’m about to explain will work even better if you have many more than that. It’s when our team is up to 5 or 6 developers that we will fully realise the benefits of our Continuous Delivery culture, and it is precisely then that we won’t have time to turn the bus around and begin Continuous Delivery afresh. I foresaw this earlier this year while laying the groundwork for the company, because on my last project, as the CTO of a software startup, I had ambitions to move to Continuous Delivery, and the migration was forecast to take around twelve months alongside our daily development workload. That was with one product and a team of three engineers.

Let’s cut to the good stuff. What the hell is Continuous Delivery?

Well, I tried to explain it — but I’ll leave it to continuousdelivery.com to put it as succinctly as I’ve ever seen:

Continuous Delivery is the ability to get changes of all types — including new features, configuration changes, bug fixes and experiments — into production, or into the hands of users, safely and quickly in a sustainable way.

Now for some software organisations, putting changes into production usually looks like this:

  • Freeze the code (perhaps create a release branch)
  • Devise a testing script for all of the use cases being affected by the release
  • Manually run the testing script
  • Revise the code according to the output of testing
  • Release to a staging or beta platform and have some candidate users play around with it
  • Revise the code some more having spotted some bugs
  • Manually run the testing script again
  • Back up the production environment (hopefully?)
  • Create a deployment build
  • Deploy to production
  • Run manual “Smoke” tests in production
  • Hastily patch any apparent failings in configuration or functionality (or roll back if possible)

The entire process can take anything from hours to weeks. I know, for me, it used to require an entire day.

Our current process looks like this (in human terms):

  • Merge feature branch to master
  • Make coffee
  • Run manual smoke tests
  • Press “roll-back” button if anything is broken

This currently takes 3 minutes to release to production and around 5 minutes to run smoke tests, and the last two steps are a few weeks away from being toast.

Now, the reason it looks so scant is that all of the testing and validation is done up-front by the developers at the time of coding. We practice TDD (Test Driven Development), so you cannot write a line of production code without first writing a failing test. You also cannot write any more code than is required to make that failing test pass. This forces us to have lots of unit tests that, at the very least, cover the happy path, where everything works as intended. We also aim to envisage and test the sad paths, where someone does something they shouldn’t.

We see the tests as the design phase of our work. The tests tell us what the inputs and outputs of each method ought to be. We “design” these as we write the test, and then implement the code once that design is known. When we write the test, we may or may not fully know what the implementation will look like.
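To make this concrete, here’s a minimal sketch of the red-green rhythm in Python with pytest; the module and function names are illustrative rather than lifted from a real codebase. The test comes first and captures the design, then just enough code is written to make it pass.

```python
# test_pricing.py -- written first; it fails until apply_discount exists.
import pytest

from pricing import apply_discount


def test_discount_reduces_price_by_percentage():
    # Happy path: a 10% discount on 100.00 gives 90.00.
    assert apply_discount(100.00, 10) == pytest.approx(90.00)


def test_negative_discount_is_rejected():
    # Sad path: callers shouldn't be able to "discount" a price upwards.
    with pytest.raises(ValueError):
        apply_discount(100.00, -5)
```

```python
# pricing.py -- only as much code as the failing tests demand.
def apply_discount(price: float, percent: float) -> float:
    if percent < 0:
        raise ValueError("discount percentage cannot be negative")
    return price * (1 - percent / 100)
```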


A quick aside while we are on the subject of building things in up-front to save you pain later: we are meticulous about code style. We lint all of our code automatically, and it is impossible for a developer to push code that has either failing unit tests or linter errors. This means our code is always in a “working” state, which is a critical factor in Continuous Delivery. CD is great, but if you’re continuously delivering broken code, you won’t get anywhere. Sometimes code works but needs additional work to make a complete feature. In these instances we use feature toggles to ensure that the feature is not visible to users in production, even if the code that makes it work is “live”. This requires careful design.
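A toggle doesn’t need to be clever to do its job. Purely as an illustration (this isn’t our actual toggle code), a minimal version can read flags from the environment and gate the new behaviour at the edge:

```python
# feature_toggles.py -- a hypothetical, minimal toggle helper. Real systems
# often back this with a config service so toggles can flip without a deploy.
import os


def is_enabled(feature_name: str) -> bool:
    """A feature is on only when FEATURE_<NAME> is set to "on"."""
    return os.environ.get(f"FEATURE_{feature_name.upper()}", "off") == "on"


# At the point where the new feature would become visible:
if is_enabled("new_checkout"):
    ...  # route to the new, still-incomplete feature
else:
    ...  # keep the existing behaviour for everyone
```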


Overall, for me, the biggest factor in making Continuous Delivery a reality is practicing Infrastructure-as-Code. Changes to configuration, networking, resources, security, storage, anything that relates to the application, are all handled using configuration files that are checked in alongside the code. These files, along with a few helper scripts and our build pipelines (see below), mean that we can create a complete replica of the production stack, at any scale, on anything from a developer’s local MacBook to a separate development AWS account, and everything will be identical to production (except the URL of course!).
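To sketch the idea (this is an illustration using troposphere and CloudFormation, not a description of our exact setup), a short Python module can render the template that gets checked in, reviewed and versioned like any other code:

```python
# stack.py -- an illustrative Infrastructure-as-Code definition (not our real
# stack). troposphere renders plain Python objects into a CloudFormation
# template, which the build pipeline then deploys.
from troposphere import Output, Ref, Template
from troposphere.s3 import Bucket

template = Template()

# A single bucket stands in for the real networking, compute and storage.
assets = template.add_resource(Bucket("AssetsBucket"))
template.add_output(Output("AssetsBucketName", Value=Ref(assets)))

# Writing the rendered template to disk keeps the generated file reviewable.
with open("stack.template.json", "w") as handle:
    handle.write(template.to_json())
```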

Why do this? Well besides the obvious reason that you can create unlimited replicas of the production stack, and be certain they are the same, it means you can version your infrastructure. Track changes to it, monitor failures that can be attributed to those changes, and even perform A/B testing on it. Most of all — as you scale, you can move quickly, and there are none of those awkward moments when you go to create a new stack and the person who knew the magic sauce for routing traffic in a certain way has emigrated to another continent. Everything is recorded in black and white, or whatever your chosen text editor colour scheme is.

It’s tempting — with AWS and other cloud providers — to just click around in the management console and spin up and configure the resources you need. It’s fast, but it isn’t reproducible. This is another key tenet of CD. Reproducibility. Everything should be reproducible. Everything.

Now, I see companies advertising jobs for DevOps engineers, and I’ve often wondered how this works. To me, DevOps is a thing you do, not a thing you are. The nature of DevOps, to me anyway, is that Developers and Operations Engineers work together to form a DevOps culture. It’s a practice that requires multiple disciplines, but ultimately it requires Developers to develop with Operations in mind, and Operations to feed back to Developers continuously. If you have a “DevOps Team” that works in isolation, or even just slightly in collaboration with the “Development Team”, then what you have is an “Ops Team”. At Relative, designing the resources required to run your code is the responsibility of the developer. Even as we grow there will be specialists in one thing or another, but nothing is passed “over the wall”.

To pull all of this together, we aren’t (yet) using any fancy orchestration tools. We host all of our git repositories in BitBucket, and use BitBucket’s excellent Pipelines tool to build, test and deploy our code.

A typical build pipeline involves:

  • Check out the branch
  • Unpack or install dependencies
  • Run the test suite
  • Validate any infrastructure changes
  • Deploy the infrastructure changes and validate success, or roll back and fail the build (a sketch of this step follows the list)
  • Deploy the code changes (fail the build on error)

Build pipelines run (minus the deploy step) every time code is pushed to a remote branch. When code is merged (or pushed) to master, or any other branch that has an associated long-lived environment, the pipeline automatically deploys the code if the build is successful.

None of this was particularly difficult to set up. With no previous experience of this complete stack, the Infrastructure-as-Code and build pipelines realistically took a day or two, and now we create them in minutes as a matter of course for every project. Production environments live inside their own AWS accounts, and the credentials for these accounts are the only truly manual setup required. If you don’t have a robust suite of tests, there’s not much point in attempting a CD process like this, as you can never be sure that a release won’t break something. This is the primary reason why I’d recommend you aim to bake CD in from the start.

My top takeaways from this are:

  • Build a Continuous Delivery culture from the start. It’s harder to retrofit.
  • Continuous Delivery is everyone’s responsibility. Make them own it by forcing them to push working, tested, linted code (Git pre-push hooks! There’s a sketch of one after this list).
  • We use a combination of Pair Programming and Peer Code Review to add QA inline with development. Definitely do this, but don’t allow Pull Requests to linger. They must be small, bite-sized and, at the very most, daily.
  • Write your infrastructure as code, if you can (of course you can!). It will change your life, and it becomes part of your build.
  • Practice TDD to make CD work. It’s the only way to be sure that, yes, you have tested that.
  • Above all, you must merge to master, and deploy to production quickly. Code must not be allowed to fester on a branch somewhere. This is the biggest barrier to CD, and it’s really only possible if you work in an agile way, and use feature toggles to control visibility.
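On the pre-push hook point above: the hook itself can be a small script in .git/hooks/pre-push that runs the tests and the linter and refuses the push if either fails. A minimal sketch, with placeholder commands standing in for whatever your project actually runs:

```python
#!/usr/bin/env python3
# .git/hooks/pre-push -- a hypothetical hook: block the push unless the test
# suite and the linter both pass. The commands are placeholders.
import subprocess
import sys

CHECKS = [
    ["pytest", "--quiet"],  # unit tests
    ["flake8", "."],        # linter
]

for command in CHECKS:
    if subprocess.run(command).returncode != 0:
        print(f"pre-push: '{' '.join(command)}' failed; push rejected.")
        sys.exit(1)  # a non-zero exit makes git abort the push
```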

Next on our agenda is to automate smoke testing and rollback in the event of a build failure, and to create an artifact repository to archive every build. Look out for more articles coming up, including handling production failures with CD and building insight and monitoring into your workflow.