Building continuous delivery from the start
A lot of projects start with a big “build” phase.
When you start, the engineering and product folk come up with a big plan of what they’re going to build, and that becomes “V1”. Until that first launch, the business isn’t running, everybody’s waiting on the engineering team to finish, and, crucially, you’re not learning.
Our team were clear from the start that we would take the opposite approach — build the product in small iterations and release those pieces as soon as they were done to enable learning straight away. The pieces that aren’t online yet are handled offline via email, internal communication and spreadsheets — it’s not pretty but it means we can learn earlier, talk to our customers earlier and validate our assumptions earlier. It also means we make money earlier.
This isn’t a new idea — we’ve been practising agile development in our previous roles for years — but running continuously from the very beginning of a startup journey, skipping that “V1” phase entirely, still seems to be rare.
Right from the outset, we have followed a continuous delivery pattern across our architecture, which takes a broadly microservices approach: small services built around functional capabilities (book a viewing, sign a contract). Every component is independently deployable and has full support for rollbacks.
We make changes using TDD/BDD in a branch, which is pushed and merged once the build is green and any code review is finished. Once merged, master is built and, if tests pass, the staging infrastructure for that component is updated. When we’re happy to go live (currently after some manual acceptance testing, which we plan to automate), we promote the artefact from staging to production. This means we’re already deploying to production several times a day, with zero downtime.
To support this mindset, here are some of the tools we’re using. In previous roles I’ve been used to a pretty standard stack of GitHub, Chef, Amazon Web Services and an external CI system — it’s been fun to use some other tools that, although extremely established and well-known, have moved on a lot since I last used them.
For years it felt like GitHub was the only option, but we were keen to try GitLab and we’ve been pleasantly surprised. The interface takes some getting used to, but it’s constantly improving, and the Runners feature is fantastic, allowing us to deploy automatically on every successful build. It’s nice not to need an external CI system.
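To give a flavour of the deploy-on-every-green-build setup, here’s a minimal sketch of what a `.gitlab-ci.yml` for one of our components might look like — the stage names, commands and deploy script are illustrative assumptions, not our actual configuration:

```yaml
stages:
  - test
  - deploy_staging

test:
  stage: test
  script:
    - bundle install --jobs 4
    - bundle exec rspec

deploy_staging:
  stage: deploy_staging
  script:
    # hypothetical script that ships the built artefact to this
    # component's staging environment
    - ./scripts/deploy_staging.sh
  only:
    - master
```

Because the runner executes the deploy step itself, a green build on master flows straight to staging with no hand-off to a separate CI or deployment tool.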
The free hosting on GitLab.com is quite slow and you’ll often wait a while for a build at busy times (i.e. afternoon in the UK, morning in the US), so we’ve built our own setup on DigitalOcean — they have a pre-built droplet for the latest version of GitLab, so this was super quick. We tried Amazon Lightsail for this, but the performance was very poor and the setup was difficult. DigitalOcean has been much better, and scaling runners horizontally is really easy.
We use the GitLab API to display a very basic status page, which we then Chromecast to a big (but cheap!) TV monitor.
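For the curious, the status page boils down to very little code. Here’s a hedged sketch of the idea: pull recent pipelines for a project from the GitLab v4 API and turn them into display rows. The host, project id and token are placeholders, and `render_row` is our own formatting helper, not part of any GitLab client library:

```ruby
require "net/http"
require "json"
require "uri"

# Placeholder host; in practice this comes from the environment.
GITLAB_HOST = ENV.fetch("GITLAB_HOST", "https://gitlab.example.com")

# Fetch recent pipelines for a project via the GitLab v4 REST API.
def fetch_pipelines(project_id, token)
  uri = URI("#{GITLAB_HOST}/api/v4/projects/#{project_id}/pipelines")
  req = Net::HTTP::Get.new(uri)
  req["PRIVATE-TOKEN"] = token
  res = Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == "https") do |http|
    http.request(req)
  end
  JSON.parse(res.body)
end

# Pure formatting helper, kept separate so the display logic
# can be tested without touching the network.
def render_row(pipeline)
  icon = pipeline["status"] == "success" ? "PASS" : "FAIL"
  "#{icon} #{pipeline['ref']} (##{pipeline['id']}): #{pipeline['status']}"
end
```

A tiny loop over `fetch_pipelines(...).map { |p| render_row(p) }` is enough to fill a full-screen page for the TV.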
We’re deployed entirely on Heroku. I hadn’t used Heroku for a microservices architecture for a few years, and last time I did it was clunky. Fast forward to now and it’s been really slick. The pipelines feature is great and lets us promote builds easily — although we don’t use the Heroku UI for this, because it lacks support for things like database migrations.
A conscious early decision was to use ENV vars for config throughout our system, rather than going the usual Ruby route of baking lots of settings into YAML, which requires a deployment for every config change. This mindset is very helpful in putting together an effective, pipelined Heroku setup, and support from simple gems like dotenv makes it pretty painless.
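In practice this pattern is just a thin wrapper around `ENV.fetch`. A minimal sketch, with invented variable names: required settings fail fast at boot, optional ones carry an explicit default, and in development the dotenv gem populates `ENV` from a local `.env` file while Heroku supplies config vars in production:

```ruby
# Twelve-factor-style config: everything comes from the environment.
# In development, `require "dotenv/load"` would fill ENV from a .env
# file; in production, Heroku config vars take its place.
module Config
  # Required setting: ENV.fetch raises at boot if it's missing,
  # rather than failing mysteriously at first use.
  def self.database_url
    ENV.fetch("DATABASE_URL")
  end

  # Optional setting with a visible default.
  def self.per_page
    Integer(ENV.fetch("PER_PAGE", "25"))
  end
end
```

Changing `PER_PAGE` on Heroku is then a `heroku config:set` away — no deployment, and the same artefact runs unchanged in staging and production.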
We kick off our day with a standup meeting around a Trello board that represents our entire value stream, from “Idea” through “Analysis”, “Design”, “Development”, “Staging”, “Ready” and “Live”. During standup we talk through each card starting from the right-hand side, because that work is the closest to being completed and contains the most value. The aim is to keep cards moving, get them live and in front of customers as soon as possible, and to align the team on everything that’s going on.
We’ve adopted the JSON API standard throughout our platform — the client libraries for both Python and Ruby make it easy to work with, and the spec has given us a reference to lean on when designing APIs, which has saved a lot of debate and research.
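The win is that every endpoint shares one document shape, so there’s nothing to argue about per API. A hedged sketch of a JSON API response for one of our capabilities — the “viewings” resource type and its attributes are invented for illustration:

```ruby
require "json"

# Build a JSON API (jsonapi.org) document for a single resource:
# a top-level "data" member holding type, id (always a string)
# and an attributes object.
def viewing_document(viewing)
  {
    data: {
      type: "viewings",
      id: viewing.fetch(:id).to_s,
      attributes: {
        property_ref: viewing.fetch(:property_ref),
        starts_at: viewing.fetch(:starts_at)
      }
    }
  }
end

doc = viewing_document(id: 7, property_ref: "FLAT-12",
                       starts_at: "2017-03-01T10:00:00Z")
puts JSON.generate(doc)
```

Both the Python and Ruby clients consume and produce this shape directly, so services can talk to each other without bespoke serialisation code on either side.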
It’s been great getting stuff live early and learning quickly… especially when we’ve made mistakes and incorrect assumptions, because we’ve found them out now, rather than six months from now.