An old sysadmin’s journey into the world of Kubernetes. Learning new things the old school way!

Yair Etziony
Polar Squad
6 min read · Jan 30, 2019

Or How I Learned to Stop Worrying and Love Kubernetes.

Who am I

I am Yair Etziony. I am old school: I have worked in IT for more than 20 years, starting in the Israeli army and going on from there. I am a cynical, skeptical system administrator, or as we are sometimes called nowadays, a "DevOps" engineer. I started with VAX/VMS, DOS, NT 3.51 and Novell NetWare. These days I work as a DevOps consultant for Polar Squad in Berlin, Germany. Polar Squad is the best DevOps company (according to us). In the following series of blog posts, I will try to guide old school sysadmins on how to work with Kubernetes.

Why?

When I started learning about all the new tools and methodologies that are used today, I felt a bit overwhelmed. I had worked with containers in the past (good old LXC and Proxmox), so my first reaction was: Why should I learn Docker now? Why should I care about Kubernetes? What is it good for?

When I started studying it, I found out it's bloody hard to actually make sense of it. I mean, I am not a genius (my mom says I am, but I am not sure how valid her view is), but I have been able to learn many things in my life. I managed Martin Heidegger's "Being and Time" at university, and I have picked up plenty of concepts in computer science and networking over the years. So I asked myself: Why is Kubernetes giving me so much trouble? How can I fix that?

What’s the purpose of this blog post series?

I found out that there is a huge gap in the documentation about Kubernetes that is available online. If you want to start something easy and just see a simple deployment, you have a lot of options. You can use Minikube or Google Cloud and have something running very fast.

If you want to create your own cluster on any infrastructure, be it VMs on AWS or a KVM server running on bare metal, it's almost impossible to find good documentation for that. I am not new to hypervisors, nor to namespaces or cgroups in Linux (I was also a Solaris guy, so I worked with Solaris Zones a long time ago), but still, some of these new tools I checked really got me frustrated. The purpose of these blog posts is to guide old school sysadmins to understand and use Kubernetes without making them bash their heads against the wall and curse horribly.

Background

In the good old days, we used to order a machine. When it arrived, one of us would install it with the correct OS and packages, then connect it to the network and use it for whatever it was meant to do. All the servers back in those days were "snowflakes". Sometimes we had some Bash scripts to help us automate things.

When I worked for Qlusters back in the day (we were a revolutionary startup), we could take a lot of old hardware servers and use them as a private cloud (though no one called it that back then), deploying applications and packages to various machines using PXE boot with DHCP and a custom BusyBox kernel.

Fast-forward a couple of years, and at another company I met the public cloud for the first time. We had VMs running on AWS with some S3 buckets for data, we still used AWS as a hardware provider, and we built things around it with Perl and Bash.

I retired from working in IT at some point and moved to Berlin. Time passed and I decided to come back, but things had changed.

New tools and roles arrived: Docker, Kubernetes, CoreOS, and Terraform, just to name a few. Git replaced SVN as the version control system, and suddenly sysadmins no longer kept a small folder of handy scripts, since everything lives in version control now. I started to understand that we should stop treating servers as pets and instead treat them like cattle.

This means snowflake servers are now a bad thing and using SSH to maintain a machine is bad practice. We think about our servers as something we build and destroy in the blink of an eye. We try to use version control as our source of truth (I am not sure that this is really possible, but at least we strive for it).

I knew I had to change things. I wanted cool stickers on my laptop, I grew a beard, I went to meetups, and I even started enjoying "Rick & Morty".

After some deployments on good old hardware servers and CentOS VMs, I started to accept that this model is problematic. I started testing Docker and Docker Compose and found out they are actually good utilities for your environments. I was tired of software that runs nicely in one environment but always crashes in staging and production. The next logical step was to orchestrate the containers: how can we control the environments they run in?
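To make that concrete, here is a minimal sketch of the kind of Compose file I mean. The service names, image, and ports are made up for illustration; the point is simply that one file describes the environment wherever you run it.

```yaml
# docker-compose.yml — a hypothetical two-service setup, for illustration only
version: "3"
services:
  web:
    image: example/web-app:1.0   # placeholder image name
    ports:
      - "8080:80"
    environment:
      - DB_HOST=db               # the web container reaches the database by service name
  db:
    image: postgres:11
    environment:
      - POSTGRES_PASSWORD=changeme
```

The same `docker-compose up` works on my laptop, a CentOS VM, or a build server, which is exactly what shrinks the "works on my machine" problem.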

Kubernetes, what are you?

At my core I am a historian. That is what I studied at university, and not just any history: I am an expert in the history of thought. My interest has always been to understand how ideas change over time.

Kubernetes traces its roots to "Borg", a system developed by Google because it needed something to manage clusters in its own data centers all over the world.

Google's Borg system is a cluster manager that runs hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines. Borg was later followed by "Omega", and after some time the lessons from both were turned into an open source project named Kubernetes, which is Greek for the helmsman of a ship.

In today’s IT world everyone is talking about Kubernetes, but not that many people have used it in production, or at all. Kubernetes offers a few levels of abstraction to developers and operations engineers, and it decouples the application from the hardware. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them to individual machines.

When you think about it, this is an amazing feature: Kubernetes can be deployed on any kind of hardware or cloud service, it's self-healing, and, if you set it up correctly, it can do a lot of the hard work that operations people used to do. In fact, it might take us from DevOps to NoOps, but that is a different topic.

Take it or leave it, Kubernetes offers a new model for operations people to deal with, and I think this model is the one most new applications will use. Even if a successor to Kubernetes comes along, it will reuse many of the same concepts, so learning them is crucial.

So what’s the problem? Why is it so hard?

Kubernetes brings a lot of new terms to the table. When you start working with it you will hear about pods, contexts, the pod network, and many other things. In fact, it has two different networks: the node network and the pod network. If you know a bit about container networking, you can guess that this is not a simple idea.

Why should it be simple? Kubernetes provides one layer of container abstraction called pods, but that's not enough: since pods are ephemeral, we need another layer, called a service, that gives Kubernetes a stable way to reach and control the application. Don't worry, I will go deeper into the Kubernetes concepts in the following blog posts.
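Just to give a first taste of those two layers, here is a minimal sketch; the names and the image are placeholders, not from any real setup.

```yaml
# A single pod running one container (placeholder names throughout)
apiVersion: v1
kind: Pod
metadata:
  name: hello
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: nginx:1.15      # any container image would do here
      ports:
        - containerPort: 80
---
# A service: a stable name and port in front of every pod labelled app=hello
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 80
```

The pod can die and be replaced at any time, but the service keeps answering on the same name and port; that is the whole point of the extra layer.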

It’s not just that: Kubernetes comes with a full ecosystem of tools, and some of the tutorials use them to bootstrap a cluster. Skaffold, Istio and Helm, to name a few, only make things even more frustrating at first.

Sometimes it feels like the people who wrote the Kubernetes tutorials assume the readers already know it, so they forget to explain things.

What are the next steps?

First, we will bootstrap a cluster with a very minimal set of tools, using kubeadm, the bootstrapping tool for Kubernetes. Then we will learn about the Kubernetes concepts and why we need them. After that, hopefully, we can deploy an application and expose it to the world through an ingress!
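Roughly, the bootstrapping step looks like the sketch below. It assumes Linux machines that already have a container runtime, kubeadm, and kubectl installed; the pod network CIDR and the Flannel manifest URL are just examples, and the placeholders in the join command come from what kubeadm init prints.

```bash
# On the first machine: initialise the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Point kubectl at the new cluster (kubeadm init prints these exact steps)
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a pod network add-on, for example Flannel (one of several options)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On each worker node: join the cluster using the token kubeadm init printed
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

Don't worry if these commands mean nothing yet; the following posts will go through them step by step.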

I hope that when we’re finished with this series, we can all put some cool stickers on our laptops and finally show that the old school is the best school!


Yair Etziony
Polar Squad

More than 25 years in the field; started with VAX/VMS and now working in the cloud. DevOps, SRE, culture and people.