Speed is all you need! Moving towards continuous deployment.

Jože Rožanec
3 min read · Jul 8, 2020


In a previous post, we described our journey towards continuous delivery (CD). Implementing the practice brought us great benefits, but deploys were still handled manually from a checklist, consuming precious time from our development team and always risking human error. Time is an important factor since it represents an opportunity cost: if a task can be automated, your developers could instead be developing new features or fixing bugs. In the end, this translates not only into money but also into developer happiness: they get to do creative work rather than walk through a checklist :)

CD is based on five principles: build quality in, work in small batches, have computers perform repetitive tasks, relentlessly pursue continuous improvement, and make everyone responsible. It seemed to us that the journey would not be complete unless we also enhanced the deployment process with at least some degree of automation. The final goal was to implement a limited version (a single site, not facing our clients) to showcase how continuous deployment could eliminate the manual work and time invested in deployments, and help us always stay on the latest version while minimizing downtime.

The first step we took was to translate some of the manual tasks into scripts, to ease and accelerate the deployment procedure while reducing the possibility of human error. In previous development cycles we had already put a process in place to upgrade the database, when required, before starting the services. In our CD pipeline, we added smoke tests that made sure only working Docker images got published: broken images never reached our Docker Hub repository, so any rollback would land on a working version. We still had to make sure that configurations tied to specific environments stayed up to date, so we versioned them and ensured they were applied when deploying a new version.
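A smoke-test gate of this kind can be sketched in shell. The image name, port, and health endpoint below are illustrative assumptions, not our actual pipeline; the idea is simply that a push to the registry only happens after the container answers a probe:

```shell
#!/usr/bin/env sh
# Hypothetical smoke-test gate: start the freshly built image, probe it,
# and push only if the probe succeeds. DOCKER is overridable for dry runs.
DOCKER=${DOCKER:-docker}

# wait_healthy CMD...: retry a probe command up to 5 times, 1 second apart.
wait_healthy() {
    tries=0
    until "$@"; do
        tries=$((tries + 1))
        [ "$tries" -ge 5 ] && return 1
        sleep 1
    done
}

# smoke_and_push IMAGE: publish IMAGE only if the running container
# answers its health endpoint (port and path are assumptions).
smoke_and_push() {
    image=$1
    $DOCKER run -d --rm --name smoke -p 8080:8080 "$image"
    if wait_healthy curl -fsS http://localhost:8080/health; then
        $DOCKER push "$image"             # only working images get published
        result=0
    else
        echo "smoke test failed for $image" >&2
        result=1
    fi
    $DOCKER stop smoke
    return $result
}
```

Running this as the last pipeline step is what guarantees that a broken image never reaches the registry, so a rollback always lands on a version that once passed the probe.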

Continuous deployment requires solid monitoring in order to detect any issues when changes take place. We configured monitoring and alerts through Uptime Robot, ensuring we only get alerted when something actually goes wrong, to avoid alert fatigue. We also registered our services with the operating system's service manager, to make sure they would restart if they crashed or the server was rebooted.
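One common way to register such a restart policy on a Unix system is a systemd unit. The post does not name the exact mechanism we used, so the fragment below is an assumption with illustrative paths and names; it captures both behaviors mentioned above: restart on crash and start after a server reboot.

```ini
# /etc/systemd/system/service.service -- hypothetical unit, names illustrative
[Unit]
Description=Our containerized service
After=docker.service
Requires=docker.service

[Service]
Restart=always                 ; restart the service if it crashes
RestartSec=5                   ; wait 5 seconds between restart attempts
ExecStartPre=-/usr/bin/docker stop service
ExecStart=/usr/bin/docker run --rm --name service ourteam/service:latest
ExecStop=/usr/bin/docker stop service

[Install]
WantedBy=multi-user.target     ; start again after a server reboot
```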

Our final setup is simple: when a new image is published to our Docker Hub repository, we get notified, download the image and its associated settings, restart the service, and notify the development team. At first we released a new image per working commit, built from the master branch, so the latest changes flowed straight into this environment and we could get early feedback from the deployed version. Later we adapted to the general development policy and only retrieve images built from a “frozen” branch, where more in-depth verification is done before each release.
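The redeploy step in that flow can be sketched as a small shell helper. The image name, settings path, and notification hook below are assumptions for illustration; `DOCKER` is overridable so the sketch can be dry-run without a Docker daemon:

```shell
#!/usr/bin/env sh
# Minimal sketch of the redeploy step: pull the newly published image,
# swap the running container, then tell the team.
DOCKER=${DOCKER:-docker}

notify() {
    # Stand-in for the real notification hook (e.g. a chat or e-mail webhook).
    echo "deploy: $*"
}

# redeploy IMAGE: replace the running container with IMAGE.
redeploy() {
    image=$1
    $DOCKER pull "$image"                     # fetch the freshly published image
    $DOCKER stop service 2>/dev/null || true  # stop the old container, if any
    $DOCKER run -d --rm --name service \
        --env-file /etc/service/env "$image"  # versioned, environment-specific settings
    notify "now running $image"
}
```

In a setup like ours this function would be triggered by the registry's publish notification, so the gap between an image being published and it running in the environment stays small.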

Since we implemented continuous deployment, we have had almost no deployment issues. Only minor maintenance interventions were required, while many hours were saved on deployments, and we stayed on the latest released versions with almost no delay after the images were published.

Did you have the chance to work in a similar setting? We would be glad to hear about and learn from your experience as well!

