DevOps is a State of Mind, Not Just a Role

A Story in Three Parts

Darien Ford
Capital One Tech
Aug 3, 2017


BVT — Before Virtualization Technology

In times of old, there was Development and there was Operations. And all was well…

Developers were responsible for building the software and Operations Engineers were responsible for making sure the Developers didn’t burn down the world. And the world didn’t burn down (usually).

But that doesn’t mean that things were necessarily easy.

Things were expensive: hosting costs, hardware costs, maintenance, upgrades, and so on. Putting something new on the infrastructure was no small undertaking. Hardware and bandwidth were limited resources, and you had a three-year contract on your servers. Needed more RAM? No problem! That will be another $15,000 per year. Planning for an environment often meant sizing the hardware at 3x your expected peak load, leaving your hardware mostly under-utilized.

Deployments were difficult and time-consuming. You were lucky if a deployment was a set of scripts that Operations had to run. Many times, “It worked on my box” was the answer you got when something didn’t work in the datacenter, leaving the Ops team to figure out exactly what was going on. Some deployments took days, depending on what hardware was necessary and how many problems were encountered.

In other cases, developers were the ones doing the deployment, there was no separation of environments, and things were risky. Working directly in production was not out of the question, and the potential to take down the company’s primary systems was real.

Word documents meticulously listing out each deployment step were not unheard of; sometimes the steps weren’t written down at all. Operations and Development were often separated from each other to such a degree that animus was real. Distrust was rampant.

In places where things were more structured, there were often extremely long lead times on deployments, with multiple sign-offs and touch points needed to get something approved for release. There was significant overhead, which in many cases included release engineers, hardware folks, and network engineers.

And then virtualization came along. As Operations teams adopted virtualization, they could automate more and more things through scripts and other means. Virtualization allowed Operations teams to host multiple systems on a single physical server and manage the lifecycle of those systems without touching the hardware.

VTE — Virtualization Technology Event

Virtualization brought about many things: cost savings, operational simplicity, simplified resiliency and recovery, and an ability for Operations teams to use code to describe and manage their systems to an extent not previously considered.

With virtualization, hardware became a commodity, no longer the significant cost of yesteryear. You could have an army of virtual servers utilizing a much higher percentage of the underlying hardware than before, and environments for developers could be spun up and down from snapshots. This enabled teams to begin to trust each other more, even if only a little bit.

Instead of ruining a physical machine’s environment, a developer could only impact a virtual machine. This reduction of blast radius allowed the Operations team to ease up a little bit. Restoring the system could be as simple as restoring a snapshot of that virtual machine from a prior state.

The divide between Development and Ops began to ease a little. Teams on the infrastructure side were starting to look a little more like developers. Things were not perfect though, and there was still separation in most cases.

PVE — Post Virtualization Event

As organizations became more comfortable with virtualization, and as the understanding that you neither needed to nor should care about individual pieces of underlying hardware became mainstream, toolchains started to pop up to help developers do everything themselves.

Shell scripts, Ansible, Puppet, Chef, Salt, and cloud providers enabled Developers to do everything that an Operations Engineer previously could: set up subnets, load balancers, virtual machines, databases, private networks, and more, all without having to purchase and provision hardware. A sketch of what this looks like follows.
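To make that concrete, here is a minimal sketch of provisioning in code. It assumes AWS via the boto3 Python SDK with credentials already configured; the VPC ID and AMI ID are hypothetical placeholders, not values from this article.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Carve a subnet out of an existing VPC (hypothetical ID).
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.1.0/24",
)

# Launch a small virtual machine into that subnet. No purchase order,
# no three-year contract, no trip to the datacenter.
reservation = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["Subnet"]["SubnetId"],
)
print("Launched:", reservation["Instances"][0]["InstanceId"])

The same few calls could just as easily live in a shell script, an Ansible playbook, or a pipeline, which is exactly the point: the environment is described in code rather than in a purchase order.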

But with great power comes great responsibility. Not every organization or developer was, or is, ready to take on that responsibility, or equipped with the knowledge of best practices. Some organizations still have a separation of groups, where Operations now has a subdivision called DevOps. Others have simply renamed Operations to DevOps, in many cases for recruiting reasons.

The ultimate goal is to manage environments for software products as efficiently as possible, so teams can deliver value in a reliable, resilient, and secure manner.

The Future (Is Bright)

As teams become empowered to get more things done, they have also accepted more responsibility, including for their infrastructure. With the maturity of tooling available today, an end-to-end environment can take mere hours — if not minutes — to configure, provision, and get running.

Teams need to expand their knowledge and understanding beyond the traditional focus of their individual roles. That is, developers need to understand when to use a load balancer, why you block all incoming ports except 80 and 443 (more on that below), what SYN and ACK are, what subnets and route tables are, and what a reasonable backup scheme is.
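Take the ports example. On AWS, to pick one concrete platform, a security group denies inbound traffic by default, so “block everything except 80 and 443” means opening exactly those two ports and nothing else. A minimal boto3 sketch, with hypothetical group and VPC identifiers:

import boto3

ec2 = boto3.client("ec2")

# Security groups deny inbound traffic unless a rule allows it.
sg = ec2.create_security_group(
    GroupName="web-frontend",       # hypothetical name
    Description="Allow only HTTP and HTTPS inbound",
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC
)

# Open only ports 80 (HTTP) and 443 (HTTPS) to the world;
# everything else stays closed.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }
        for port in (80, 443)
    ],
)

The reasoning is the same whether the tool is an SDK, a firewall appliance, or iptables: expose the web ports, and nothing else, by default.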

Fortunately, there’s a massive amount of information out there on all of this. But gaining the appropriate experience can be time-consuming and hard-won in some cases. Having experts in infrastructure, networking and security, and system architecture working as part of the development team is a great start, but ultimately the goal should be for anyone on the team to be able to handle operational items.

As individuals learn and become experienced in all of this, their way of developing solutions will change. Infrastructure becomes part of their thought process: “How will I handle log aggregation?” “How does my service discovery handle new services?” And so on.
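The log aggregation question, for example, often starts with a decision the developer makes in code: write structured logs to stdout and let whatever aggregator the team runs (fluentd, Logstash, CloudWatch Logs, and so on) collect them. A minimal Python sketch using only the standard library; the logger name is illustrative:

import json
import logging
import sys

# One JSON object per line on stdout is easy for an aggregator to
# collect, parse, and index, no matter which aggregator the team runs.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger("payments").info("charge completed")
# -> {"time": "...", "level": "INFO", "logger": "payments", "message": "charge completed"}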

The default state should be one in which everyone is familiar and fluent with the concepts and has the necessary understanding to handle configuration management and automation. Everyone should want to ensure their product is stable and running, rather than pitching a piece of software over a fence for someone else to worry about.

DevOps should be a state of mind for your team, not a role for someone to fill.

DISCLOSURE STATEMENT: These opinions are those of the author. Unless noted otherwise in this post, Capital One is not affiliated with, nor is it endorsed by, any of the companies mentioned. All trademarks and other intellectual property used or displayed are the ownership of their respective owners. This article is © 2017 Capital One.


Darien Ford is an engineer with a passion for agile practices and distributed systems.