What’s all this talk about Cloud/Container Native & Cloud First and how does it relate to DevOps?

Phil Dougherty
ContainerShip Articles
5 min read · Jan 29, 2016

In the last two years I’ve noticed a lot of posts, webinars, and meetup groups dedicated to building software that is “Cloud/Container Native,” and I think it’s great that more focus is being put on the topic. With containers and orchestration being all the rage, it’s important to discuss the developer side of the DevOps contract: how to build software that will function properly in a more automated world. That is what building cloud native software is all about: taking steps during development to ensure that applications are architected in a way that lets things like logging, metrics, and scaling snap into place. It’s exciting that as an industry and a craft we are moving in a direction that makes life easier for everyone involved, but it was not always like this, and we still have a long way to go before every team can reach DevOps nirvana.

A Decade Ago

Almost 10 years ago I was working at a web hosting company with offices in Brooklyn, NY that did fully managed dedicated server hosting. We were a small team, under 10, mostly system administrators, with some big-traffic customers. It was an awesome first “real” job. We used remote hands at 8 global data centers for small hardware issues, and sent the founder’s brother around the world to build out new locations and do major upgrades. When onboarding new customers we did complimentary migrations from their old host to us, which typically involved studying their existing configuration and network in detail, building a script or Puppet manifest so the process was repeatable, rsync’ing the data (videos, images, and content), migrating the databases, and then switching DNS to point to us. Depending on the size and complexity of their existing infrastructure, this could take a few hours to complete, or in some cases many weeks of back-and-forth and testing.

In reality all we were doing was:

  1. Allocating server resources to run software and store data on (Provisioning)
  2. Understanding their software and getting it to run (Deployment)
  3. Testing everything to make sure it worked (User Acceptance Testing)
  4. Enabling traffic (Production)

We eventually got much faster at completing all 4 of those steps by utilizing virtualization and configuration management, but we never got to the point where a user could onboard themselves, allocate resources, get their own software to run, test it, and cut DNS. There were just way too many knobs to turn in the network and operating system, and too many weird one-off special cases to know about, to make that feasible. The customer wouldn’t know where to look or what to do if they had problems with #2.

Fortunately for the founders of that company, they went on to create DigitalOcean, which has solved those problems using virtualization and by creating an engaged community that has churned out a vast library of tutorials and content on system administration. In other words, a lot more people are being trained in the classic art of unix system administration to solve their problems. That’s wonderful and makes me happy because I’ve been a Linux nerd since childhood, but the approach doesn’t scale when you move past small hobby apps and standalone VMs. You’re gonna need MORE POWERRRRR to support a growing development team and infrastructure.

Making Progress

The above story is a pretty good example of a “throw it over the wall” kind of relationship between the development team and the operations team. The customers at the web hosting company were the developers, and we were the ops team. When I started my next job leading a web operations department at an eCommerce agency, I often thought of the large in-house development team as being like the customers at the hosting company. They came to me with code I knew nothing about and a bunch of dependencies, and I had to create cloud instances (provisioning), automate the setup, dependencies, and deployment of their app (deployment), provide a way for it to be tested (UAT), and then enable production traffic.

In that environment we had automated provisioning using AWS CloudFormation and Chef, deployments using Jenkins and Chef, and testing using a combination of Jenkins and manual quality assurance. It was a major leap forward in deployment speed and in reducing the number of manual “knobs” that had to be turned for new code to reach production. Developers learned the various options that could be set and a pattern for interacting with the underlying infrastructure, which let them get very far along with #2 from above, getting the software to run, on their own. We were “doing the DevOps” as best we could with the tools and patterns available at the time.

Unfortunately, those options and knobs, and that developer understanding, were specific to the hosting system we had built at the company. The knowledge wouldn’t be incredibly useful to those same developers as they moved further along in their careers or took new jobs elsewhere. We were still “throwing things over the wall,” except now the wall was a little lower and each group could see the other’s side and better understand the challenges it was facing. The real goal was to eliminate the wall completely and create a fully self-service system, a level playing field for every engineer in the company, achieving the real dream of DevOps in the process.

Fast forward to 2016

Now that 2016 is upon us, a lot has changed, and there is so much to be excited about for both Developers and Ops people. The light at the end of the DevOps tunnel is shining bright. Why?

As much as DevOps is not about the tools, additional layers of abstraction through tooling have made it easier than ever to distribute the automation workload across the entire development team. Thanks to standardized approaches to developing applications, like 12factor, containerization technologies, and cluster schedulers, developers can follow a process to automate their applications and get them deployed without worrying about the countless knobs that used to have to be turned in the config management system, the network, and the server operating system.
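A concrete example of what that standardization looks like in practice is 12factor’s config rule: all deploy-specific settings come from the environment, so the same artifact runs unchanged everywhere and no per-environment knobs live in the code. The sketch below is illustrative; the setting names and defaults are hypothetical, not from any particular app.

```python
import os

def load_config(env=os.environ):
    """Read all deploy-specific settings from the environment
    (12factor, factor III: store config in the environment)."""
    return {
        # Hypothetical settings; each has a safe local-dev default.
        "database_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "port": int(env.get("PORT", "8080")),
        "log_level": env.get("LOG_LEVEL", "info"),
    }

# The orchestrator (or a local .env loader) injects the values at deploy
# time; the code itself never hard-codes per-environment configuration.
config = load_config({"PORT": "9000", "LOG_LEVEL": "debug"})
```

Because the contract is just “read named environment variables,” any scheduler or container platform can supply the values without knowing anything about the app’s internals.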

This will have a major impact on businesses of all sizes that adopt these practices in the years to come. No longer will there be the bottleneck of the “configuration management team,” or the unrealistic expectation that every member of the dev team will learn the config management system you chose, which in my experience rarely, if ever, happens.

Standardization brings with it an easier transition for new employees into the organization, letting them hit the ground running with automating, deploying, and managing their own applications. This translates to faster releases, smarter and more capable developers, and less fear about the two people in the company who know how things work getting hit by a bus.

What is the next step?

We’ve already made so much progress from the days of manually configuring dedicated servers, or writing gnarly scripts to do it for us in our own “special snowflake” kind of way. It’s exciting to imagine where the future will take us. What do you think is next for this fast-evolving world of server and application automation? We’ll have to wait and see, but we won’t have to wait long. Leave a comment to let me know where you think we are heading!


Phil Dougherty

Co-Founder @containershipio, Husband, Systems Engineer, Manager, Pittsburgher, Pitbull lover.