From Software Development to DevOps — Roadmap to Kubernetes

Martin Mička
Published in CodeX
6 min read · Sep 13, 2024

Recently, I ventured into the realm of DevOps to expand my knowledge and improve an existing solution that proved to be insufficient for the project. After considering multiple approaches, I decided to fully embrace and learn the Kubernetes ecosystem. It has been quite an adventure for me, as someone with a strong background in software development and architecture. In this post, I want to share my thoughts on the topic.

Kubernetes is a container orchestrator known worldwide as a scalable, highly available solution. It has been widely adopted and has become the norm in the DevOps world for containerized deployments. My goal was to gain a solid understanding of Kubernetes (K8s for short, which I’ll use from now on) and to form an opinion on whether the hype around it is justified. I opened the documentation, started reading, and became frustrated pretty quickly: the learning curve for K8s is steep. Suddenly, I was drowning in foreign concepts and the various ways K8s handles workloads, resolves issues, and manages conflicts. It was overwhelming at first. Not to mention, K8s has a lot of abstractions whose implementations are provided by third-party technologies, which adds even more to the confusion.

Instead of banging my head against the wall trying to make sense of everything, I decided to take a step back and make sure I nailed down the basics. I can’t stress enough how useful this decision turned out to be.

I told myself to forget about K8s for a bit and focus on mastering the underlying concepts first. One technology that is often mentioned alongside K8s is Docker. Docker also works with containers, but it’s much simpler: it doesn’t focus as much on orchestration; it’s all about building and running containers.

After learning the ins and outs of images and containers, I began digging into Docker Compose, networking, and volumes. Before that, my knowledge of Docker as a developer was pretty limited, even though I had been using it daily, so the additional learning proved valuable elsewhere too. I improved my local development environment significantly as I gained more Docker knowledge, and I realized just how great the technology is! It saves a lot of headaches, and I now use it to run my hobby projects in production as well.
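To make this concrete, here is a minimal Compose file for a hypothetical setup (the service names, image, and password are placeholders, not from any real project):

```yaml
# compose.yaml — hypothetical example: a web app plus a Postgres database
services:
  app:
    build: .                 # build the image from a Dockerfile in this directory
    ports:
      - "8080:8080"          # forward host port 8080 to the container
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example    # for local development only
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume persists data across restarts

volumes:
  db-data:
```

Running `docker compose up` starts both containers on a shared default network, where they can reach each other by service name (`app` can connect to host `db`).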

With all the knowledge I gained from learning Docker, I felt more confident and ready to dive into container orchestration. Docker has its own offering, Docker Swarm, which I decided to explore. It was a good starting point: it wasn’t as overwhelming as K8s because Swarm isn’t as feature-rich. That’s both its major advantage and its major disadvantage, and likely why I haven’t seen Swarm used in many projects.

For learning purposes or relatively simple projects, Swarm works fine. It builds on what can be achieved with Compose on a single machine by adding functionality to coordinate and schedule containers across multiple machines (called nodes). It also offers additional networking options, allowing containers to communicate with each other even when scheduled on different nodes. The configuration is similar to Compose, so transitioning to Swarm felt like a natural next step in the learning journey.
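The similarity to Compose is visible in a stack file: Swarm reuses the Compose format and adds a `deploy` section for scheduling, which plain Compose ignores. A hedged sketch (image and stack name are placeholders):

```yaml
# stack.yaml — hypothetical Swarm stack; deploy with:
#   docker swarm init
#   docker stack deploy -c stack.yaml mystack
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      replicas: 3               # schedule three copies across the cluster's nodes
      restart_policy:
        condition: on-failure   # reschedule a replica if its container fails
```

Swarm spreads the three replicas over the available nodes and restarts them on failure, which is exactly the kind of behavior a single-machine Compose setup can't give you.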

Okay! After mastering the underlying concepts, becoming familiar with container virtualization, and understanding how a relatively simple orchestrator works, I was ready to tackle the beast. Concepts that had previously confused me started to make sense, and I began to appreciate the depth at which K8s operates. After learning the foundations and exploring the different workloads K8s offers, I also delved into GitOps. In the end, I successfully launched the project on K8s, and I’m happy with how it turned out.

I found my journey with K8s exciting. It opened up a whole new world beyond just writing code. Of course, containers aren’t the only way to run a production environment, but I find it convenient, and the benefits of this approach are strong for my use cases. I recommend this learning path to all developers, as it provides a bigger picture of the entire software lifecycle. As with many things in life, we may feel overwhelmed by the complexity of something, and staring at the peak can make it hard to imagine reaching it. Instead, it’s better to lower our heads, plan the path forward, and stay consistent. Eventually, we will reach the summit.

When it comes to learning K8s, if you jump straight in without first mastering the fundamentals of containers, it won’t make much sense. Start small, and you’ll get there eventually.

Learning Roadmap

Here’s my recommendation on how to learn Kubernetes from start to finish. I’ll be focusing solely on Kubernetes and Docker. There are plenty of additional tools that can help with certain areas when using these technologies, but that’s a topic for another day.

I recommend starting with the Docker and Kubernetes documentation; both are very well written and easy to understand. Everything you need is there. If you find some concepts confusing along the way, just Google them or ask an AI: a second or third explanation of a particular topic can often make things clearer.

Docker

  1. Images — What images are and how to create them.
  2. Dockerfiles — Semi-required; it may not be essential for understanding K8s, but it offers great insight into how Docker handles layers and how images work. Besides, it’s the current standard to have a Dockerfile to build an app’s container.
  3. Containers — How to run them, port forwarding, bind mounts, and volumes.
  4. Compose — How to run multiple containers at once, networking, and container configuration.
  5. Swarm — Orchestration concepts for Docker, including node types, workloads (e.g., stacks), and networking in Swarm.
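For item 2, a minimal Dockerfile makes the layering idea tangible. This is an illustrative sketch for a hypothetical Python app (the file names and port are assumptions):

```dockerfile
# Hypothetical example: containerizing a small Python app
FROM python:3.12-slim            # base image layer
WORKDIR /app
COPY requirements.txt .          # copy the dependency list first so this layer caches
RUN pip install --no-cache-dir -r requirements.txt
COPY . .                         # application code goes into a later layer
EXPOSE 8080
CMD ["python", "main.py"]        # default command when the container starts
```

`docker build -t myapp .` followed by `docker run -p 8080:8080 myapp` builds and runs it. Because each instruction produces a layer, changing only application code reuses the cached dependency layer, which is why the `COPY requirements.txt` step comes before `COPY . .`.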

Kubernetes

There’s a lot to learn, as K8s handles a wide variety of workloads. You don’t have to learn everything at once. It’s helpful to have clear requirements for what you want to achieve and focus on learning what’s necessary for your specific goal. You can expand your knowledge from there. Additionally, there’s a lot of reading on theoretical concepts — how K8s operates under the hood. While it’s not required to get started, I found it helpful to understand how K8s handles certain situations when it comes to nodes and scheduling, even though I likely won’t need it, as these problems are usually resolved automatically.

Here are some prerequisites:

  1. Cluster architecture, control plane, components, and their responsibilities.
  2. How nodes are managed and scheduled.
  3. Node problems and rescheduling.
  4. kubectl — K8s CLI.

After that, I’d recommend setting boundaries for your project and learning about what you actually want to run on K8s. For example, let’s say you want to run a stateless application that is highly available.

  1. Pods — What they are and what they can do; they are the fundamental concept of running containers on K8s. The more you know about them, the better.
  2. Workload Management — This operates at a level above pods. We don’t usually manage pods directly; instead, we manage workloads that control pods. For a stateless application, you might need Deployments, and maybe Jobs or CronJobs. Autoscaling can also be part of this.
  3. Services and Networking — This is a level above workloads. Once the application is running, we need to expose it within the cluster and configure K8s to handle networking, load balancing, and possibly exposure to the Internet. For that, there are two key concepts: Ingress (with Ingress Controllers) and the Gateway API. There are some differences between them, so I recommend reading about both and deciding which suits your use case.

There are additional topics that might not be essential at the beginning, but I recommend learning about them after you have a solid understanding of the core concepts. Some of these may be required for production hardening:

  1. Secrets
  2. Pod Eviction and Preemption, Disruption Budgets
  3. Resource Requests and Limits
  4. Storage — Volumes and Classes
  5. Network Policies
  6. Node Affinity/Anti-Affinity Rules
  7. Service Accounts
  8. Custom Resource Definitions (CRDs)

And that’s all! There’s certainly a lot to take in, but I hope this roadmap is useful to you. Maybe you already have some experience, so you don’t need to start from scratch. Thanks for reading!

I'm a software development professional, currently the Chief Technology Officer of Nelisa.