How To Become a DevOps Engineer In Six Months or Less, Part 5: Run

Igor Kantor
7 min read · Oct 12, 2019


Speed of traditional deployments. Photo by Krzysztof Niewolny on Unsplash

Quick Recap

Let’s quickly take stock of where we are in our DevOps journey.

In Part 4, we talked about how to deploy your packaged code.

For reference, this is where we are on our map:

Month 5 of our DevOps journey

If you dedicate about a month to learning each part, this is Month 5 of your DevOps journey.

Are You Ready to Run It?

OK, we have our code written, packaged, and deployed somewhere.

Here, I’m going to ignore deploying code as immutable machine artifacts (EC2, for example) and focus on containers.

Why?

Because baking an immutable AMI at the source, copying it everywhere, and then running it is hard work. It is a really good pattern if you absolutely have to use it.

But I would strongly urge you to consider whether you truly need to and if not, try to re-organize your microservices as containers or serverless functions.

NOTE: if that is not possible, maybe because you have chosen to write, package, and deploy your software as a monolithic application, or because you must run non-cloud-native workloads or commercial software that comes “as-is”, then by all means, please consider the immutable AMI pattern. However, that’s not the focus of this post.

Not the focus: https://dzone.com/articles/not-wanted-comic

OK, so our containers are neatly packaged. How do we actually run them?

Actually Running Containers

Well, the simplest thing you can do is run docker run myImage and call it a day. But that would not be a good idea.

Why not?

Because what happens

  • if that container dies?
  • if you need to have more than one to handle the load?
  • if you need to implement zero downtime deployments?
  • if you want to have full visibility into your microservices?
  • if you want to have a CI/CD pipeline to deliver value to customers quickly?
  • etc…

In other words, what happens when you need to build a true, enterprise-grade, distributed application?

Clearly, something as primitive as docker run is not going to cut it.

NOTE: docker-compose suffers from a very similar set of problems. Docker-compose is not meant for production deployments; its purpose is local prototyping, rapid functional testing, or very small-scale (think personal home lab) deployments, not revenue-generating customer workloads.
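For the sake of illustration, this is the “primitive” approach in question (the image name is hypothetical):

docker run -d --name my-service --restart=always -p 8080:8080 my-image

The --restart flag only restarts the container on that one host. If the host itself dies, nothing reschedules the container, and there is no built-in load balancing, rolling update, or fleet-wide visibility.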

OK, so if we can’t do the obvious, what are we supposed to do?

Container orchestration to the rescue!
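To give you a peek at what that means in practice (using Kubernetes, which the next section covers), here is a minimal sketch of the same hypothetical service run as a Deployment, applied from a bash heredoc. The names and image are made up, and it assumes kubectl is already pointed at a working cluster:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3                   # more than one copy to handle the load
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: my-image:1.0
        ports:
        - containerPort: 8080
EOF

If a container dies, the orchestrator reschedules it, and re-applying the manifest with a new image tag gives you a rolling, zero-downtime update.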

Container Orchestration Landscape Overview

Like with everything else in life, there is more than one answer to this problem.

The first and most obvious 900 lb gorilla in the room is Kubernetes.

Just scream, “KUBERNETES!!!” when asked about your tech strategy.

Born from an internal project at Google, Kubernetes is now pretty much the de facto standard for container orchestration.

Also, it is pretty much the only answer if you are running in

  1. a private data center
  2. Google Cloud
  3. Microsoft Azure
  4. any other public cloud

However, if you run in AWS, you have another option — ECS.

NOTE: that’s not strictly speaking true. You have Nomad from HashiCorp (the same people who brought you Terraform), and you have Docker Swarm from Docker. The problem is, these are very much niche platforms with minimal adoption, so we are ignoring them for the purposes of rapid career growth.

Anyway, back to ECS. Amazon’s Elastic Container Service (ECS) is fairly simple to get started with, enjoys tight integration with the rest of the AWS ecosystem, and does only a few things, but does them well. In short, it is pretty much the antithesis of Kubernetes.
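To make that concrete, here is a rough sketch of the workflow with the AWS CLI. The cluster, service, and task names are made up, and the task definition JSON, IAM roles, and networking are assumed to already exist:

aws ecs create-cluster --cluster-name my-cluster
aws ecs register-task-definition --cli-input-json file://my-task-def.json
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-service \
  --task-definition my-task \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-12345],securityGroups=[sg-12345],assignPublicIp=ENABLED}"

ECS then keeps two copies of the task running and replaces them if they die, much like a Kubernetes Deployment would.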

In fact, if ECS was good enough for McDonald’s, it’s probably good enough for you.

However, purely from a career-building perspective right now, there’s no question that Kubernetes is the better choice, even though I’m willing to bet 99% of enterprises running in AWS would be just fine with ECS.

So, now you have a choice to make. If you are completely new to the field, steel yourself for a long slog through Kubernetes land. It is not easy to learn on your own, outside of a team of like-minded DevOps engineers who can support you on your journey.

DevOps engineer figuring out Kubernetes RBAC

But it’s definitely possible, especially with Google and AWS free tier offerings, YouTube / Udemy tutorials, and AWS spot pricing.

NOTE: If you choose to go this route, I recommend you start with the GKE free tier or kops running on AWS spot instances. Amazon’s managed Kubernetes (EKS) costs money and, although suitable for production workloads, is not a good place to start. And I don’t know enough about Azure to recommend it.
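For example, spinning up a small practice cluster on GKE might look roughly like this (the zone, cluster name, and machine type are arbitrary; check the current free-tier limits and pricing before running it):

gcloud container clusters create practice-cluster \
  --zone us-central1-a \
  --num-nodes 1 \
  --machine-type e2-small

# ...and tear it down when you are done so it stops costing money:
gcloud container clusters delete practice-cluster --zone us-central1-a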

However, if you are not new to the field and actually work within the AWS ecosystem, my advice is to get your microservices containerized, deploy them to ECS, enjoy a good night’s sleep, and in parallel work on building out a world-class Kubernetes platform.

Because diving head-first into Kubernetes will involve yak shaving the likes of which you’ve never seen, and it will inevitably distract you from your true mission — delivering value to customers quickly and effectively.

Kubernetes — Do I Really Need It?

No.

Kubernetes — But I Really Want It!

Sigh… OK.

I get it. The market has spoken and it’s either Kubernetes or go home.

Let’s quickly take a look at what you are signing up for.

First, despite its cutting-edge image, the idea behind Kubernetes is relatively old. Google took the wraps off Borg (the precursor to Kubernetes) in 2015, and it was a fairly old idea even then.

From the abstract:

We present a summary of the Borg system architecture and features, important design decisions, a quantitative analysis of some of its policy decisions, and a qualitative examination of lessons learned from a decade of operational experience with it.

Read that again. In 2015 (!), Google was sharing the details of a Kubernetes-like system it had already been running for over a decade.

They are not shy about it either. It is literally the first sentence on the Kubernetes home page:

Innovative new tech from Google

So, next time you hear someone presenting Kubernetes as some hot new idea about to take over the world, just remember — they are advocating for technology that is now at least fifteen years old.

Not very innovative, is it?

Second, think about the target audience. Google builds tools to solve Google problems, at Google’s scale. Again, the Kubernetes home page is quite clear about this, too:

Do you operate on “Planet Scale?” Run billions of containers a week?

Finally, Kelsey Hightower, one of Kubernetes’ most prominent and vocal advocates, stresses this point as well:

https://twitter.com/kelseyhightower/status/1099710178545434625?lang=en

So, if you operate at planetary scales, or are running billions of containers per week, or building a cloud for others to use, Kubernetes is the right choice.

If you are not, then it isn’t. End of story.

And no, I don’t care if your grandma read a bunch of Kelsey’s tweets on a lunch break and then converted her boutique flower shop website to Kubernetes in a week, with CI/CD and Automated Canary Analysis. It’s still not the right tool for the job if your requirements do not demand it.

Run Kubernetes At Home

Multipass + Kubespray

If you truly don’t want to spend any money to practice running Kubernetes in a public cloud, fear not! You can do it at home quite easily.

How?

With an old Linux box, multipass and kubespray.

What is multipass? It is

A mini-cloud on your Mac or Windows workstation.

And kubespray is a set of Ansible playbooks that deploy Kubernetes.

Basically, you grab an old PC and install Linux on it. Make sure it has at least 8 GB of RAM, or the whole exercise is not really worth it.

Then, you install multipass. This will give you virtual machines running on one server. They will have their own IP addresses and everything. You can ssh into them individually and install whatever software you need.

Once that’s done, download kubespray and use it to deploy Kubernetes onto your own mini-cluster at home.

NOTE: You will need the ssh private key to get Ansible to deploy Kubernetes. On the multipass host, it is located at

/var/snap/multipass/common/data/multipassd/ssh-keys/id_rsa.
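Putting it all together, the flow looks roughly like this. The VM names, sizes, and IP addresses are examples, the inventory layout follows kubespray’s sample directory, and flags may differ slightly between multipass versions:

for node in node1 node2 node3; do
  multipass launch --name "$node" --cpus 2 --mem 2G --disk 10G
done
multipass list    # note the IP addresses multipass assigned

git clone https://github.com/kubernetes-sigs/kubespray.git && cd kubespray
pip3 install -r requirements.txt
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(10.10.10.11 10.10.10.12 10.10.10.13)    # replace with your VM IPs
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py "${IPS[@]}"

ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml \
  --become --user ubuntu \
  --private-key /var/snap/multipass/common/data/multipassd/ssh-keys/id_rsa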

Multipass + k3s

Another alternative is to run multipass with k3s. The latter is a very lightweight Kubernetes distribution that makes it easy to start playing around.

Here is a sample bash script to get you started:
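(A minimal sketch, assuming multipass is already installed on the box; the VM name and sizes are arbitrary, and flags may differ slightly between multipass versions.)

# launch a VM to host the cluster
multipass launch --name k3s --cpus 2 --mem 2G --disk 10G

# install k3s inside the VM using the official install script
multipass exec k3s -- bash -c "curl -sfL https://get.k3s.io | sh -"

# check that the single-node cluster is up
multipass exec k3s -- sudo k3s kubectl get nodes

# optionally copy the kubeconfig out so you can use kubectl from the host
# (swap 127.0.0.1 in k3s.yaml for the VM's IP from `multipass list`)
multipass exec k3s -- sudo cat /etc/rancher/k3s/k3s.yaml > k3s.yaml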

Final Thoughts

Whether or not Kubernetes eventually goes away (replaced by something like AWS Fargate or serverless Lambda functions), does it really matter?

Kubernetes = $$$. So, level up, enjoy the ride and let me know about your experiences.

On to the final step: Observability!
