DevOps == Developer

I have been lucky to enter the software engineering world at a time when people are at least aware of the DevOps idea. I’ve also had the privilege of working at places that embrace the philosophy. There has been no wall to throw code over where it magically gets deployed. I believe that every software engineer has the responsibility to operate their own code. We can debate whether operations teams serve a purpose (my opinion: they probably do, especially for monitoring), but that’s not what this blog post is about. I’m going to walk through my journey into what people call DevOps and the natural progression I went through, all focused around solving problems.
When I first learned about the idea of DevOps, I was baffled. I thought that was obviously how software engineering worked. Other than a few theoretical lectures on Linux in a C programming class and some copy-pasted terminal commands on my shiny install of Ubuntu Desktop, I didn’t have much experience with Linux. I thought that only advanced software engineers had the capability to write application code and then wrangle a server into running it. I didn’t know the industry had divided those skills into separate roles, neither role more complicated than the other.
The natural progression when learning to code was to write something and run it on my laptop. But what really started my journey into what people call DevOps was wanting to write an application and have it actually serve a purpose 24/7. I had all the code written and could run it on my laptop, but I knew I needed to get it onto a server. So I decided to build a server, throw Ubuntu Server on it, and see what happened. After a few tutorials, I had my code running on the server. That little bit of work (it only took a week or two) gave me the confidence I needed to operate my own code.
The next major growth in my skills occurred when I got my first AWS account. It gave me the power to get my code installed on a server in an hour rather than a couple of weeks. I started pulling in more features like load balancers, autoscaling groups, and S3 buckets. At that point, I had the tools at my disposal (as an individual) to create and operate a professional application. Over time I developed continuous integration skills, which again felt like a natural progression from writing code and deploying it. If I wasn’t manually verifying things, I needed my CI system to run some tests for me. So I started telling Jenkins to run my poorly written bash scripts.
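To give a flavor of what those early CI scripts looked like, here is a minimal sketch. It is not one of my actual scripts; the `run_tests` stand-in is an assumption so the sketch is self-contained, where a real project would call something like `./gradlew test` or `make test` instead.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the kind of bash script a Jenkins job might invoke.
# Fail fast on any error so Jenkins marks the build as failed.
set -euo pipefail

# Stand-in for the real test command (e.g. `./gradlew test`);
# defined here only so the sketch is runnable anywhere.
run_tests() {
  return 0
}

if run_tests; then
  echo "TESTS PASSED"
else
  echo "TESTS FAILED"
  exit 1
fi
```

Jenkins treats any nonzero exit code as a failed build, so `set -e` plus an explicit `exit 1` is enough to keep a broken change from moving forward.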
Right out of college, I worked at a very small company writing software for power plants. We deployed a monolithic Grails application onto Windows servers in the utility company’s datacenter. This was a bit of a buzz-kill for me, since I had been deploying things on AWS completely automated. But that’s real life. Situations are different and you have to adapt. That’s why engineers are worth so much: we solve problems. We made the pipeline as automated as possible by building the .war file, uploading it to S3, and emailing our IT liaison at the utility company. We also set up Dev and QA environments on AWS that were completely automated. Everything was very basic. The deployment pipeline for Dev and QA was mainly a script run over SSH to copy the .war file to a Tomcat application server. But it worked and served its purpose. We were no longer spending an hour or two every time we wanted to deploy that application. We didn’t even have to think about it anymore.
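A "copy the .war over SSH" pipeline like that one can be sketched in a few lines. The hostname, paths, and restart command below are hypothetical, not the real setup, and the script defaults to dry-run mode so it only prints what it would do.

```shell
#!/usr/bin/env bash
# Rough sketch of an SSH-based .war deploy to a Tomcat server.
# SERVER, WEBAPPS, and the restart command are assumptions for illustration.
set -euo pipefail

WAR="${1:-build/libs/app.war}"   # artifact produced by the Grails build
SERVER="deploy@dev-tomcat"       # hypothetical Dev/QA host
WEBAPPS="/opt/tomcat/webapps"    # Tomcat's deployment directory
DRY_RUN="${DRY_RUN:-1}"          # default to dry-run so the sketch is safe to execute

# Echo the command in dry-run mode; execute it otherwise.
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

run scp "$WAR" "$SERVER:$WEBAPPS/ROOT.war"        # copy the .war into webapps
run ssh "$SERVER" "sudo systemctl restart tomcat" # restart so Tomcat redeploys it
```

It isn’t glamorous, but a script like this is exactly the "very basic, but it worked" level of automation described above: run it once and the hour or two of manual deployment disappears.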
Later on, I started working at a much larger company where I really dug into the operations side of things while writing code. I was a developer on a team with significant operations experience. The team was called a “DevOps” team (I hate teams named like that…), and there was a clear struggle within the organization about what that term meant. It honestly didn’t matter much to me, since I was allowed the freedom to code and learn from these operations wizards. The skills I gained in this position were a tenfold improvement over the process I had already been using for a while. The improvement was necessary because we had to support many more projects, and therefore more pipelines, higher deployment velocity, and better visibility through monitoring. Under those requirements, my bash scripts with a few SSH commands weren’t doing the job well enough. With the help of many different people in the organization, I started using more specialized tools like Terraform, Ansible, Packer, and Vault. These tools abstracted away a lot of the manual steps I had been doing (or skipping), making the process simpler and, at the same time, better and more general. I’m not going to go into the details of the pipelines and where these tools fit in, but I encourage you to research them if you’re on this journey as well. One point of caution: it’s not about learning the tools. It’s about learning the process and solving the problem. These tools made some things simpler, but the process could have been a few Python scripts doing a very similar pipeline. And in a few months there will be a completely new set of very shiny tools.
The journey is never over when building these operational skills. They can always be refined and improved. No one solution will fit all scenarios, and completely new problems need to be solved all the time. It’s not so different from the world of pure software development: there are always problems to solve when building an application, and no single approach or framework applies to every situation, but having a large toolbox is certainly worthwhile. Build out both your coding and operations skills together to become a better engineer.
I’ve been on the hunt for a new job for about a month now, and one very reassuring theme in my search has been the value of my “DevOps” skills combined with still being a Software Engineer. Even if you don’t agree with me that all Software Engineers should have operations skills (and not hacky skills, real disciplined professional skills), at least know that employers do find them valuable.
