Say yes to auto-deploy, it can be only one click away!

Berry de Witte · Published in wehkamp-techblog · 5 min read · Sep 27, 2019

At Wehkamp we have a fast, well-optimized process in place for getting our code deployed automatically to the development and production environments in AWS. But somehow we were always a little hesitant to turn on full auto-deployment to production. Why?

My philosophy is that the master branch should always be correctly tested and production ready. It should never block you from going to production!

I think the fear of giving up control, or the lack of confidence that you might actually break something in production, made us a bit cautious. But if your code is approved and properly tested, there should be nothing to fear, right? Right! That's why we decided to enable auto-deploy: it makes the life of a developer just a little bit easier. Let me explain how it works.

GitHub

First of all, where does our code live? In our case that's GitHub, a git repository hosting service with a web-based interface that makes it easy for developers to collaborate.

In our day-to-day workflow we create branches off master, do our magic in the code and then open a pull request in GitHub so one of our co-workers can review the proposed changes. At the same time, a tester can manually deploy the branch to a development environment and check whether the changes meet the acceptance criteria.

Everything’s approved, let’s squash and merge!

Once the pull request is approved and the acceptance criteria are met, we can click the big green button in GitHub and merge our branch into master. This is where the automation starts, but how does our code get shipped to production automatically?

Automation with Jenkins

For the automation part of our process we use Jenkins, our continuous integration (CI) and continuous delivery (CD) server and the center of all our deployment automation. On the CI server, a seed job collects all our repositories and branches from GitHub and creates build definitions for new repos and branches. When it detects a change in a branch or repo, a new build of the site is triggered. Our sites run on NodeJS; when the installation succeeds and the unit and integration tests pass, the site is packaged into a Docker image and pushed to Docker Hub, after which the CD server is triggered to start the deployment pipeline.
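To give an impression of the CI side, the build steps above could look roughly like the declarative Jenkinsfile below. This is a simplified sketch, not our actual pipeline: the stage names, the image name `ourorg/our-site` and the downstream job `deploy-our-site` are made up for illustration.

```groovy
pipeline {
  agent any
  stages {
    stage('Install') {
      steps { sh 'npm ci' }   // install NodeJS dependencies
    }
    stage('Unit & integration tests') {
      steps { sh 'npm test' } // the build fails here if the tests fail
    }
    stage('Build & push image') {
      steps {
        // package the site into a Docker image and push it to Docker Hub
        sh 'docker build -t ourorg/our-site:${BUILD_NUMBER} .'
        sh 'docker push ourorg/our-site:${BUILD_NUMBER}'
      }
    }
    stage('Trigger deployment') {
      steps {
        // hand over to the CD server, which runs the deployment pipeline
        build job: 'deploy-our-site',
              parameters: [string(name: 'IMAGE_TAG', value: env.BUILD_NUMBER)]
      }
    }
  }
}
```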

The pipeline, Jekyll or Hyde?!

Within the CD server we've created pipelines for our sites. After a Docker image is deployed to the development environment, it can automatically start a set of functional tests. In our case we use Cypress, a JavaScript-based end-to-end testing framework. Only after those tests pass will it try to deploy the image to the production environment.
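For illustration, a functional test in Cypress might look like the sketch below. The URL and selector are hypothetical, not taken from our actual test suite, and a spec like this only runs inside the Cypress runner.

```javascript
// cypress/integration/homepage.spec.js — a hypothetical spec
describe('homepage', () => {
  it('shows products after the page loads', () => {
    cy.visit('https://dev.example.com');   // assumed development URL
    cy.get('[data-test=product-card]')     // assumed selector
      .should('have.length.gt', 0);        // at least one product rendered
  });
});
```

Specs like this can be started locally with `npx cypress open` (interactive) or `npx cypress run` (headless), which is what makes running them during development so easy.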

All greens, that’s what we like! \O/

In the happy flow, the only thing we had to do was a single click on the green merge button in GitHub, and our brilliant code was shipped to production. Thanks, Dr. Jekyll!

Unfortunately, there’s also the unhappy flow where Mr. Hyde shows up; always at the wrong time, of course. A failing ‘red’ pipeline usually means we broke something on the site or changed something in the code that made our tests fail. In some cases there’s just a hiccup in the process, in which case we simply restart the pipeline. Otherwise it’s back to the drawing board: fix the failing tests and try again.

Oh no, why?!

Getting notified

After you’ve clicked the button in GitHub, you don’t want to manually check Jenkins every time to see whether a deployment has started. That’s where the Slack integration for Jenkins comes into play: it notifies us in a channel about what’s happening with the deployments. Below are examples of a successful and a failed deployment.

All good, we shipped to production!
Caramba! The pipeline failed..
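To sketch the idea, a minimal builder for such notification messages could look like the snippet below. The channel name and message texts are made up for illustration; in reality the messages come from the Jenkins Slack plugin, not from our own code.

```javascript
// Hypothetical sketch of the kind of message posted to Slack after a pipeline run.
function buildDeployMessage({ job, build, status }) {
  const ok = status === 'SUCCESS';
  return {
    channel: '#deployments',        // assumed channel name
    color: ok ? 'good' : 'danger',  // green or red attachment bar in Slack
    text: ok
      ? `All good, ${job} #${build} shipped to production!`
      : `Caramba! The pipeline for ${job} #${build} failed..`,
  };
}

console.log(buildDeployMessage({ job: 'our-site', build: 42, status: 'SUCCESS' }).text);
// → All good, our-site #42 shipped to production!
```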

Who’s responsible for maintaining the pipelines?

Well, it’s not all about the pipelines. If you want to make a success of auto-deploy and break less in production, you also want good coverage from unit and integration tests. The developer who wrote the code should be responsible for this. Try to be sharp when reviewing someone else’s pull request as well.

Then the pipelines themselves: if you have no testers on your team, I would say the developer is responsible for these as well. Otherwise it should be a joint effort with your tester; work on them together, if possible, so the pipeline is always green. If you know as a developer that you’re going to break an existing test while developing, update it in your branch or ask the tester to do so. The nice thing about Cypress is that you can also run it locally very easily.

Tests can be added in the same branch, but also at a later stage. A side effect of adding tests to master later is that this will also trigger a deploy to production if the tests succeed, even though there’s no real code change.

Viva la auto-deploy!

Enabling auto-deploy freed up time for us to focus on other things, as we no longer have to check CI/CD manually. It also made us more involved in the testing process and more aware of the importance of keeping our pipelines green. Another benefit is the extra confidence it brings: seeing a green pipeline after developing something new and knowing you didn’t break anything is a great feeling!

But as nice as this might be, don’t forget to always check production after deploying :)

Thank you for taking the time to read this story! If you enjoyed reading this story, clap for me by clicking the 👏🏻 below so other people will see this here on Medium.

I work at Wehkamp.nl, one of the biggest e-commerce companies of 🇳🇱.
We have a tech blog; check it out and subscribe if you want to read more stories like this one. Or look at our job offers if you are looking for a great job!
