Let’s dive into the world of CI/CD pipelines!
In this article we are going to explore what CI/CD means and why it matters. There are a lot of different options for incorporating CI/CD into your development workflow, so this article will focus on the concepts and benefits rather than presenting a single CI/CD service.
That said, if you’re interested in a CI/CD service built for modern web development, check out our articles on Google Cloud Build:
- Google Cloud Build — Create/Store Docker Images via GitHub Trigger
- Google Cloud Build — Custom Scripts
What we will cover:
- What is a CI/CD?
- Why should you use a CI/CD?
Awesome, let’s roll up our sleeves and get into the thick of it.
What is a CI/CD?
Well, to be fair, CI/CD is actually the combination of two different concepts.
The first is called CI (Continuous Integration), a powerful way to continuously build and test your code. Usually, the process is triggered off an update to an SCM (Source Code Manager). After the trigger fires, your CI will begin to build your application and then run tests and other operations against the entirety of your application.
- The code is cloned from SCM (e.g. GitHub)
- Build scripts create a new version of your application
- The new version is put through the wringer in the form of a series of automated tests and other application-specific operations

The benefits:
- Automated execution of your test suite off every SCM trigger
- Fewer production rollbacks, because issues are caught early
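The CI steps above can be sketched as a small Python script. This is a minimal illustration, not a prescription: the repository URL, image name, and build/test commands below are placeholders, and a real CI service wires these steps up for you.

```python
import subprocess
import sys


def run(cmd, cwd=None):
    """Run one pipeline step; abort the build on a non-zero exit code."""
    print(f"--> {' '.join(cmd)}")
    result = subprocess.run(cmd, cwd=cwd)
    if result.returncode != 0:
        sys.exit(f"CI step failed: {' '.join(cmd)}")
    return result.returncode


def ci_pipeline(repo_url, workdir="app"):
    # 1. Clone the code from the SCM (e.g. GitHub)
    run(["git", "clone", repo_url, workdir])
    # 2. Build scripts create a new version of the application
    run(["docker", "build", "-t", "my-app:candidate", "."], cwd=workdir)
    # 3. Run the automated test suite against the new version
    run([sys.executable, "-m", "pytest"], cwd=workdir)


if __name__ == "__main__":
    # Hypothetical repository URL for illustration only
    ci_pipeline("https://github.com/example/my-app.git")
```

The key property is that any failing step stops the pipeline, so a broken build never makes it further down the line.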
The second is called CD (Continuous Delivery), a powerful way to continuously deploy your code live to end users or to various environments (e.g. dev, QA, prod). Similar to CI, the process is kicked off by an update to an SCM, which then runs a series of operations that is completely customizable by your team. These operations are usually packaged up into a series of scripts (e.g. bash, Python) which handle interacting with your service providers.
If you’re interested in diving deeper into replacing shell scripts with Python, which is EXTREMELY powerful, check out the article “Bashing the Bash: Replacing Shell Scripts with Python.” It’s a great read!
- The code is cloned from SCM (e.g. BitBucket)
- Build scripts create a new version of your application
- The new version is pushed/synced/uploaded to your hosting provider and replaces the old version of your application

The benefits:
- Automated deployment process (fewer human errors)
- Release new versions of your application quickly!
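In the same spirit, the delivery steps can be sketched in Python. The registry and image names below are hypothetical stand-ins; swap in whatever your hosting provider’s tooling looks like (gsutil, the aws CLI, etc.).

```python
import subprocess
import sys


def deploy_steps(version_tag, registry="gcr.io/my-project"):
    """Build the list of commands that push a new version live.

    The registry and image names are placeholders for illustration.
    """
    image = f"{registry}/my-app:{version_tag}"
    return [
        # Label the artifact produced by the CI stage
        ["docker", "tag", "my-app:candidate", image],
        # Push the new version so it replaces the old one
        ["docker", "push", image],
    ]


def deploy(version_tag):
    for cmd in deploy_steps(version_tag):
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"Deploy failed at: {' '.join(cmd)}")
```

Keeping the command list separate from the execution makes the deployment process easy to review and version-control alongside the rest of your code.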
The combination of these two concepts allows developers to handle both automatically testing their code and automatically deploying their code, all in one go. Done this way, your team has the flexibility to break builds when the code fails some type of testing, before it is deployed and replaces your live application.
Why should you use a CI/CD?
Well, hopefully the explanation of the concepts above was convincing enough to make you consider adding this to your team’s backlog. If not, let’s think through what happens when you have a CI/CD in place versus when you don’t.
With a CI/CD:
Automation. Automation. Automation. Your team is able to build confidence with every push. Increased visibility allows every member of the team to understand the full process versus separating the responsibilities in a way where every team member is a walled-off silo.
As potential problems pop up, they are added into the CI/CD to increase its overall strength, something that would be hard to do without a CI/CD, as you wouldn’t have a single source of truth for what should happen at each phase of the application deployment lifecycle.
For instance, if a problem popped up where unit/integration tests run locally didn’t account for issues that only appear once your application is deployed live, you could incorporate a rollback strategy, add smoke tests as a final sanity check, and lay the foundation for future edge cases where local and live don’t align.
Without a CI/CD:
Developers write new code, run some tests locally (maybe?), and push the code to your SCM (ideally as a pull request). Depending on how established your company’s process is, the code is either extensively or briefly reviewed, then merged. Once merged, the code is manually deployed and replaces your existing application.
A problem pops up: the deployed code is preventing users from signing up. Unfortunately, the code has already replaced your live application.
How did this happen?
The code had unit tests run locally, but not integration tests. Neither the reviewer nor the developer who wrote the code manually cloned it down and ran the required checklist of tests against it.
Your team scrambles. First, they check the logs (hopefully your team has a logging strategy) and see that the third-party database API is complaining about a bad connection string. Second, they review the latest code that was pushed and confirm that the connection string is slightly off. Third, they make the required connection string change. Fourth, they rerun all the tests, including the missing integration tests which the author and reviewer didn’t run. Fifth, they push the updated code to their SCM. Sixth, they manually confirm that users can now sign up.
You may think, well why don’t they have all their tests grouped together in a series to prevent a single developer from making this mistake? You’re right. However, little mistakes like this happen all the time and that’s where the power of a CI/CD comes into play. A CI/CD gives you a single place to continuously improve your process.
If this team had instead had a CI/CD in place, it would have applied a hardened testing process every time new code was introduced, regardless of what the developer had or hadn’t done locally. The team would also have been able to skip manually redeploying the application, as the deployment process would likewise be hardened and automatically triggered off a push to their SCM.
Finally, with a CI/CD in place, you could run additional testing after the deployment happens (e.g. smoke tests) which uses your real services (not a local version) to validate the deployment was successful by sending your /signUp endpoint a payload for a new user. If this failed, your shiny new CI/CD could also incorporate the ability to roll back, which would move the needle from your latest version to the “last known good” version. That’s the holy grail.
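A rough sketch of what such a smoke-test-plus-rollback check might look like, assuming a hypothetical /signUp endpoint and request payload:

```python
import json
import urllib.request


def smoke_test_signup(base_url):
    """Hit the live /signUp endpoint with a throwaway user.

    The URL, payload shape, and success criteria are hypothetical;
    adjust them for your actual API.
    """
    payload = json.dumps({
        "email": "smoke-test@example.com",
        "password": "not-a-real-user",
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/signUp",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except Exception:
        # Any connection error or non-2xx response fails the smoke test
        return False


def validate_or_rollback(base_url, last_known_good):
    if smoke_test_signup(base_url):
        print("Smoke test passed; deployment validated.")
        return True
    print(f"Smoke test failed; rolling back to {last_known_good}")
    # Here you would redeploy the "last known good" artifact
    return False
```

The point is that the deployment isn’t considered done until the real, live service proves it can handle the exact action your users need.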
Building a system which can self-heal can be somewhat daunting, especially in a microservice or serverless environment. However, with proper automation in place and a supportive culture, you can create a reusable pattern which your entire company can adopt to help reduce errors, increase confidence, and accelerate development.
The real cost savings come when you work to remove the ability for humans to make mistakes and build automation around preventing similar issues from silently happening twice.
- Industry Predictions for 2019
- Best Practices for Serverless Development
- Serverless CI/CD
- Serverless Web Applications — AWS v GCP
- Serverless Impact, Developer Velocity
- Guide, First Serverless Project
What does Serverless Guru do?
At Serverless Guru, we work with companies that want to accelerate their move to Serverless/Cloud Native event-driven development. We help clients with cloud development, backend development, frontend development, automation, best practices, and training to elevate entire teams. We are engineers first.
What did we miss?
When you leave your answer make sure to either comment below or tweet your answer to @serverlessgurux on Twitter.
Founder — Serverless Guru
LinkedIn — @ryanjonesirl
Twitter — @ryanjonesirl
Thanks for reading 😃