Working Through a CI/CD Pipeline

Curtis Blackthorne
Published in DevOps Dudes
May 5, 2020

It’s really hard not to compare yourself to your coworkers, and it’s even harder not to compare your workplace processes to the ones you read about online. I remember watching the “10+ Deploys Per Day” talk by John Allspaw and Paul Hammond 11 years ago and looking at my PowerShell scripts through sad eyes. How were they deploying 10 or more times a day?! I could maybe get one deploy to dev, two if QA wasn’t busy. Any more deployments seemed like a pipe dream. Yet today, if I’m not deploying 10+ times a day, it means I had a lot of meetings and didn’t get to write as much code as I would have liked.

But pay attention to the timeline on that: 11 years. When I first got into the DevOps space, I was a .NET developer who either had to learn how to manage my own servers or wait for free time from our sysadmins. The problem was that we were central IT, so we were the cobbler’s children: we got pushed to the back of the line so customers could get help first. So we had to learn quick, fast, and in a hurry. Over the years I’ve worked at many places and made many friends in the DevOps space, and a lot of us seem to have had the same progression when it comes to building a CI/CD pipeline. So as you read, just know we were all there at some point, and you’ll get there too.

[Image: sepia-toned hands with a carving tool over wood. Caption: “Gotta make sure this is just right”]

Hand Crafted Boutique Artifacts

Now when I hear hand crafted, boutique, or personalized anything, I instantly think expensive. And that applies to building and deploying artifacts as well. When somebody has to manually build code, that takes time. When they have to manually deploy it, that is even more time… which only grows as you add more servers. When you’re first starting out, it is not out of the ordinary for somebody to build an artifact on their local machine. Most likely a senior dev with control issues. This person probably also has the “secret sauce” configured on their machine. If you’re lucky, they remembered to build the artifact before the weekend and emailed it to you. Chances are this deployment process will fail… a lot. And it will require calling this dev for every major deploy because something has gone wrong: a missed setting, the wrong artifact, or forgetting to actually build the code ahead of time. Usually it only takes a few really bad deploys for you to move to the next level.

[Image: a Newton’s cradle swinging from the right. Caption: “Once I hit start this just goes”]

Scripts, not just for actors

Maybe at this point you have a build server, maybe you’re still using that senior dev’s machine. But you are tired of having to copy and paste code and update config files in Notepad or Vim. Maybe the deploy isn’t hard, but you’ve got tens… hundreds… thousands?! of servers that code has to be moved to. So you write some shell to SSH out to all your servers. It still takes time to do all of this, but it’s not you doing it. You can hit enter and come back later. Hopefully it didn’t error out, otherwise you’re in for a long night of troubleshooting. You might have a bastion server at this point, or it might run from your local machine. Either way, you have to start your scripts manually and watch them in case they fail.
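At this stage the deploy script is little more than a loop over a server list. A minimal sketch in bash, where the host list, artifact path, and restart command are all hypothetical placeholders (the DRY_RUN guard exists only so the sketch can be run safely):

```shell
#!/usr/bin/env bash
# Sketch of the "loop and push" era of deployment. Hostnames, paths,
# and the restart command are placeholders -- substitute your own.
set -euo pipefail

SERVERS="web01 web02 web03"      # hypothetical host list
ARTIFACT="build/app.tar.gz"      # hypothetical artifact path

deploy_all() {
  for host in $SERVERS; do
    echo "deploying to $host"
    if [ "${DRY_RUN:-0}" = "1" ]; then
      continue                   # skip the real copy/restart in a dry run
    fi
    scp "$ARTIFACT" "deploy@$host:/opt/app/" \
      && ssh "deploy@$host" 'sudo systemctl restart app' \
      || echo "FAILED on $host -- long night of troubleshooting ahead"
  done
}

DRY_RUN=1 deploy_all
```

Note the failure branch only prints a message and moves on; that is exactly why you end up babysitting these scripts.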

[Image: two cardboard robots shaking hands. Caption: “Hello other fellow human”]

The Robots are coming

The next logical step in your progression is a CI server. It might be Jenkins, TeamCity, Bamboo, or GitLab. Most of the time, when you’re starting with your CI server, you’re doing very basic building: making sure committed code actually builds. You are probably starting to realize that these CI servers are just fancy ways to run a script. So why not have one run the scripts you’ve been running manually? Congratulations, you’ve now turned your CI server into a CI/CD server. It might not be pretty, and it might take a bit of work before every deploy. But hey, now even the CEO can deploy code at the press of a button. This is where you can start to have some fun.
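Since GitLab was just mentioned, here is a minimal sketch of what that “fancy way to run a script” looks like as a `.gitlab-ci.yml`. The stage names, script paths, and artifact path are illustrative, not a working pipeline:

```yaml
# .gitlab-ci.yml -- a build stage plus the same deploy script you were
# already running by hand. Names and paths are illustrative.
stages:
  - build
  - deploy

build:
  stage: build
  script:
    - ./scripts/build.sh        # build on the CI server, not a dev's laptop
  artifacts:
    paths:
      - build/app.tar.gz

deploy:
  stage: deploy
  script:
    - ./scripts/deploy.sh dev   # the manual script, now run by the robot
  when: manual                  # the "even the CEO can deploy" button
```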

Increasing your Flexibility

I started as a developer and grew into a systems engineer. But a funny thing happened when I started embracing DevOps culture: I started to write way more “code” than I ever did as a traditional developer. To start with, you will look at the scripts your CI server has been running and begin refactoring them to use variables instead of hard coded values. This allows more flexibility in how a deploy is run. You can now start using config files for each environment to replace parts of your scripts: host names, file paths, emails, phone numbers, etc. At this point your deploy scripts should have error handling; maybe you even have unit tests. These scripts should be autonomous and not need a human to run them. You might still have somebody pressing the button for prod. That’s fine, as long as it follows the same process as your lower environments.
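The per-environment config idea can be sketched in a few lines of bash: one script, one config file per environment, and a loud failure if the config is missing. File names and variable names here are illustrative:

```shell
#!/usr/bin/env bash
# Sketch: one deploy script, one config file per environment, instead of
# values hard coded in the script. Paths and variable names are illustrative.
set -euo pipefail

load_env_config() {
  local config_dir="$1" environment="$2"
  local config="${config_dir}/${environment}.env"
  # Fail loudly instead of deploying with half a configuration.
  [ -f "$config" ] || { echo "no config for '$environment'" >&2; return 1; }
  # Each file defines what used to be hard coded, e.g.:
  #   APP_HOSTS="web01 web02"   ALERT_EMAIL="team@example.com"
  . "$config"
}

# Demo with a throwaway directory standing in for config/ in your repo.
demo_dir="$(mktemp -d)"
printf 'APP_HOSTS="web01 web02"\n' > "${demo_dir}/dev.env"

load_env_config "$demo_dir" dev
echo "deploying to: $APP_HOSTS"
```

The same script then handles dev, QA, and prod; only the config file changes, which is what lets the prod button press follow the exact same process as the lower environments.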

What is code?

So we’ve been talking about deploying code this entire time. And this has probably meant actual Java, C#, python, ruby, uhhh COBOL? And you would be correct. Now that you’re deploying traditional code, it is time to start deploying your infrastructure. But why? Our servers are there, and they are alive. Why mess with perfection? When you’ve gotten to this point, you’re probably looking for anyway you can make deploys faster, or at least feel faster. The best way to do that is to deploy all new infrastructure with the code and then move your traffic. We can go into the specifics of that later, but for now you should be looking into writing modules and runbooks with tools like Packer, Ansible, and Terraform. This will allow you to start playing with your environments in new ways that you didn’t think possible. Now if you’re working in an on-prem environment, chances are something like Terraform isn’t an option. But you could still use Packer and Ansible to make golden images for your servers.
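To make “deploying your infrastructure” concrete, here is a minimal Terraform sketch. The AMI ID, instance type, and names are placeholders, not working values; the point is that servers become text in a repo, and each release can stand up its own copies before traffic moves:

```hcl
# main.tf -- a minimal sketch of infrastructure as code.
# All identifiers here are illustrative placeholders.
resource "aws_instance" "web" {
  count         = 2
  ami           = "ami-0123456789abcdef0"  # e.g. a golden image baked with Packer
  instance_type = "t3.micro"

  tags = {
    Name    = "web-${count.index}"
    Release = "2020-05-05"                 # tag each release so old and new can coexist
  }
}
```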

[Image: a man peeking through the blinds of a window. Caption: “Never know who is looking at your passwords”]

Keeping Secrets

Usually the last step in this process is figuring out what to do with all these keys and passwords we have now. Store them in a text file? Make sure we know who has access to the repo with these values hard coded? InfoSec always seems to take a back seat… Now is as good a time as any to start storing your secrets in an automated way, and even using them without ever seeing them. Using Parameter Store in AWS, or standing up an instance of HashiCorp’s Vault, gives you the ability to use passwords without ever needing to know what they are. That little bit of added security makes InfoSec happy.
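The Parameter Store version of this can be sketched as a small bash helper. The parameter name is hypothetical; the `aws ssm get-parameter` invocation is the CLI shape for pulling a decrypted value at deploy time:

```shell
#!/usr/bin/env bash
# Sketch: fetch a secret at deploy time instead of hard coding it in the
# repo. The parameter name below is a hypothetical example.
set -euo pipefail

fetch_secret() {
  aws ssm get-parameter \
    --name "$1" \
    --with-decryption \
    --query 'Parameter.Value' \
    --output text
}

# Usage at deploy time -- the value goes straight into the service's
# environment and is never committed anywhere:
#   DB_PASSWORD="$(fetch_secret /myapp/prod/db_password)"
```

The human pressing the deploy button never sees the password; only the IAM role (or Vault policy) attached to the pipeline can read it.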

Long Road Behind You

So that was a lot, I know. But after being at this for 11 years, I’m only now really getting into better secrets management. So if you’re telling yourself, “I’m way ahead of this guy and I’ve only been at this a couple of years,” good job! And if you’re just starting out, know that we had to work our way here. Sometimes out of necessity, other times for fun and bragging rights. I like to think that the first time somebody stood up an all-new second environment with an application ready to go and diverted traffic, they did it just to win a bet. Sometimes that is what sparks creativity: a little competition. And that bet has made a lot of lives a lot more interesting.


DevOps Champion @ a large financial institution. DevOps practitioner for over a decade in Finance/Gov space. Process improvement specialist