Hi developer, meet Kubernetes!

Ahuvim
AppsFlyer Engineering
7 min read · Jul 26, 2023


If you’re here, you’ve likely heard the word Kubernetes (or K8s). Now you want to know more about it: how to think about K8s and how to use it in your day job.

Recently, AppsFlyer, led by its platform team, adopted Kubernetes as its main deployment tool. As software engineers, I think we should be familiar with K8s, even though it’s traditionally considered a DevOps concern. It gives us a better understanding of what is going on behind the scenes and makes us feel more connected to, and responsible for, our deployments.

In this article, we’ll explore Kubernetes (K8s) from a software engineer’s viewpoint. We’ll cover its motivations, principles, and core components, giving you the confidence to embrace this cutting-edge technology. By the end, you’ll be “in the game,” fully equipped to leverage the potential of K8s in your software pursuits. Get ready to level up your expertise in the world of Kubernetes!

One sec, some background you must have

Before we talk about Kubernetes, first, let’s understand what a container is.

The concept of a container becomes clear when we consider a simple scenario: a developer finishes writing code that serves a specific need, and the next step is to package it so it can be installed seamlessly on another host, where our customers can run it and enjoy its benefits. How do we pack it up and ship it to another host?

Usually, we have a lot of dependencies, such as compiled binaries, dependent libraries, and operating-system packages, and we need to bundle them all into one unit, known as a “container.”

In other words, we can containerize our code with all its dependencies and run it easily on remote machines, or, in engineering terms, “deploy our service.”
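As an illustration, here is what that packaging step might look like as a Dockerfile. This is a minimal sketch for a hypothetical Python service; the base image, file names, and entry point are all placeholders:

```dockerfile
# The base image pins the operating-system layer and language runtime
FROM python:3.11-slim

WORKDIR /app

# Install the dependent libraries the code needs
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself into the image
COPY . .

# The command the container runs on any host it is deployed to
CMD ["python", "main.py"]
```

Building this file produces a single image that carries the code and every dependency with it, so it runs the same way on a laptop, an EC2 instance, or a Kubernetes node.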

Deployment challenges

Now that we know our service is shipped using containers, these main questions come up:

  • How can we know that our containerized service will stay alive without crashing? We want to ensure that if a container goes down, another one starts in its place.
  • How can we ensure each container has enough resources to run, without grabbing more than it actually needs?
  • How can we manage version rollouts, so that when we upgrade our code we can do it without downtime? We want to ensure the high availability of our service.
  • How can we make our containers talk to each other?
  • When our request volume increases or decreases, how do we scale up or down?

Before adopting K8s, these questions came up at AppsFlyer, and as a company with a strong platform group, we solved them with several in-house implementations.

For example, to manage the liveness of our services, we created a process called “Medic” that ensures a service is up and running at all times by continuously sending GET requests to its health-check API.

Another example: most of our services were each deployed to their own EC2 instance, running in a Docker container managed by an in-house deployment tool (“Santa”). Because each instance is dedicated to a single service and not shared with any other, this wastes resources, time, and, most importantly, money.

K8s as a solution

We’ve finally arrived at the heart of this topic: getting to know K8s.

As you can understand from the above, Kubernetes was implemented to solve the challenges I mentioned.

Kubernetes is defined as:
“…an open-source system for automating deployment, scaling, and management of containerized applications” (K8s website)

In other words, Kubernetes gives us container orchestration for managing our cluster properly, allowing us to deploy applications, manage resources, and scale. K8s wraps up our containers and takes the wheel of our ship.

These are some of the benefits we gain from using K8s, addressing the challenges mentioned above:

  • Self-healing: Kubernetes provides a built-in health-check mechanism and restarts containers that crash. This means we no longer need to implement an external health-check poller for our services, like “Medic.”
  • Automated distribution and scheduling of application containers makes efficient use of our nodes’ resources by sharing node instances among several applications.
  • Automated rollouts and rollbacks without any downtime.
  • Service discovery and load balancing help containers talk to each other.
  • Horizontal scaling keeps application performance steady whether concurrent load is low or high.
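Most of the benefits above come together in a single Deployment manifest. The following is a minimal sketch, not a production config; the app name, image, port, and `/health` path are hypothetical placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                # hypothetical application name
spec:
  replicas: 3                   # K8s keeps three pods running at all times
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0         # upgrades never drop below the desired count
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: app
          image: example.com/demo-app:1.0   # placeholder image
          resources:
            requests:           # what the pod needs in order to be scheduled
              cpu: 100m
              memory: 128Mi
            limits:             # what the pod may not exceed
              cpu: 500m
              memory: 256Mi
          livenessProbe:        # replaces an external "Medic"-style checker
            httpGet:
              path: /health
              port: 8080
            periodSeconds: 10
```

If a pod fails its liveness probe or its node dies, Kubernetes replaces it automatically, and changing the image tag triggers a zero-downtime rolling upgrade.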

In conclusion, Kubernetes has become the industry standard for managing containerized applications at scale. With its powerful components and automation, Kubernetes simplifies deployment, scaling, and management across the application lifecycle. Compared to running Docker directly on a single EC2 instance, Kubernetes saves time and effort and provides essential features for managing applications in production.

The most important thing is that Kubernetes saves money for the company. By automating the management of infrastructure, Kubernetes reduces the need for manual intervention and in-house tools, as mentioned above, which can save significant operational costs.

Additionally, Kubernetes can help optimize resource utilization, making it possible to run more applications on the same hardware, which can result in cost savings.

Basic K8s components that every developer should know

The core components of Kubernetes fall into two main categories: control plane components and nodes.

Let’s take a look at the high-level components:

API server

The API server is a central component of the control plane and is responsible for exposing the Kubernetes API and processing API requests. It is the primary way that other components in the cluster, such as the kubectl command-line tool or the Kubernetes dashboard, interact with the cluster.

Scheduler

The scheduler is responsible for scheduling pods onto nodes in the cluster based on the available resources and specified constraints and rules. It ensures that the pods are placed on nodes in a way that maximizes resource utilization and minimizes resource contention.
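The “resources and constraints” the scheduler considers are declared in the pod spec itself. A minimal sketch, with hypothetical names and a made-up node label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                # hypothetical pod name
spec:
  nodeSelector:
    disktype: ssd               # constraint: only nodes labeled disktype=ssd
  containers:
    - name: app
      image: example.com/demo-app:1.0   # placeholder image
      resources:
        requests:               # the scheduler uses these to pick a node
          cpu: 250m
          memory: 128Mi
```

The scheduler filters out nodes that fail the constraints or lack the requested capacity, then scores the remaining nodes to choose a placement.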

Controller manager

The controller manager is a daemon that runs on the control plane and is responsible for managing the state of the cluster and ensuring that it matches the desired state. It consists of a number of different controllers, each of which is responsible for a specific aspect of cluster management, such as the deployment controller, which manages the deployment of applications in the cluster.

Cloud controller manager

The cloud controller manager is a special component that is used when running Kubernetes on a cloud platform. It is responsible for integrating the Kubernetes control plane with the cloud provider’s API, allowing the cluster to use cloud-specific features and resources.

etcd

etcd is a distributed key-value store that is used to store the configuration data of the Kubernetes cluster, including the current state of the cluster and the desired state of the cluster. It is used to store data that needs to be persisted across all nodes in the cluster, such as information about the pods, services, and other objects in the cluster.

Within each node, there are two important processes:

Kubelet

Kubelet is a daemon that runs on each node in the cluster and is responsible for managing the pods on that node. The Kubelet takes care of tasks such as starting and stopping pods, as well as monitoring the health of the pods and restarting them if necessary. It communicates with the Kubernetes control plane to receive instructions on which pods to run and how to manage them, and also communicates with the container runtime (such as containerd) to actually execute the containers.

Kube-proxy

Kube-proxy is a daemon that runs on each node in the cluster and is responsible for implementing the cluster’s virtual networking for Services. It maintains network rules on its node that forward traffic addressed to a Service’s virtual IP to one of the pods backing that Service, providing load balancing across them. (Network policy enforcement, by contrast, is handled by the cluster’s network plugin rather than kube-proxy.)
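The “rules” kube-proxy programs come from Service objects. A minimal sketch, with placeholder names and ports, of a Service that load-balances traffic across all pods labeled `app: demo-app`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-service            # hypothetical service name
spec:
  selector:
    app: demo-app               # routes to every pod carrying this label
  ports:
    - port: 80                  # the port clients inside the cluster call
      targetPort: 8080          # the port the container actually listens on
```

Other pods can then reach the application at `demo-service:80`, and kube-proxy spreads those requests across the matching pods, which is the service discovery and load balancing described above.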

Some thoughts to end off

As developers, it’s crucial to have a comprehensive understanding of the technologies we encounter, regardless of whether they directly pertain to our immediate responsibilities or are managed by a separate DevOps team. This article serves as a starting point, propelling you to dive deeper into the world of K8s. With this newfound knowledge, you can confidently move forward and run K8s on your own machine (using minikube) for hands-on experimentation. GL!
