Kubernetes Tutorial: Part 1 — What the Heck Is Kubernetes?

There are only 10 types of people in the world: Those who understand binary and those who don’t.

Logos of Kubernetes and Streamlit

This is the first part of the five-part series, “From Sandbox to K8S: Deploying a Streamlit-based object detection application using Minikube.”

Why should you care?

Deploying a machine learning model (or for that matter, any piece of software) into production is one of the most crucial steps in developing AI-based solutions. There is no point in working so hard to create a model that gives you 99.9% accuracy (every ML engineer’s wish 😅) but sits in your sandbox without ever seeing the limelight 😛. To fully reap the benefits of AI, one should know how to make these models accessible to end users at scale, and without much hassle.

As discussed at the beginning of this series, Docker-based deployments took centre stage roughly half a decade ago, and they continue to dominate the deployment world. But at the same time, Docker comes with its own set of drawbacks. Since Docker containers run on a single node, it becomes quite tricky to scale the application up or down on a need basis without a container orchestration technology. Docker alone also doesn’t provide service discovery, auto-healing of containers when one of them becomes unhealthy, or seamless updates and rollbacks without much downtime, and the list goes on! That’s precisely the void which a container orchestration technology tries to fill.

A comic illustrating the drawbacks of Docker

When we talk about container orchestration technologies, today we have a plethora of options to choose from, like Amazon Elastic Container Service (ECS), Kubernetes (K8S), Docker Swarm, etc.; a more elaborate list can be found here. Out of these, K8S is one of the most popular open-source options. At the time this article was written, the Kubernetes GitHub repository had over 2,800 contributors and more than 94K commits.

Source: Kubernetes GitHub repository

In this tutorial series we will use Minikube, a tool that runs Kubernetes locally with just a single node. It’s an excellent tool for learning and understanding how Kubernetes works without having to worry about access to a cluster of nodes.

Kubernetes Architecture

Before starting to build our application, let’s have a high-level understanding of the Kubernetes architecture. In any Kubernetes cluster, there is a master node (a node can be seen as a VM) controlling the entire cluster, and worker nodes which perform the heavy lifting of running the applications.

Diagrammatic representation of K8S architecture

A master node is made up of the following entities:

  • Kube-apiserver — It exposes all the controls of the cluster to the developer, who can send instructions to the cluster using command-line utilities like kubectl or a web UI dashboard. The API server handles and validates RESTful requests.
  • etcd — It is a distributed key-value store that enables Kubernetes to persist the cluster’s state. People who are familiar with Linux might know the /etc directory, which stores the system-wide and user-specific configuration files required for running the machine. etcd does a similar job here, except that it now has to do it for a distributed cluster of machines, hence the ‘d’ in etcd.
  • Kube-scheduler — It is responsible for assigning jobs (pods, to be precise; we will introduce the term soon) to different nodes of the cluster after analysing the cluster’s state data, which it gets from the etcd datastore. The scheduler then ranks each valid node and assigns the job to the most appropriate node, respecting the hardware, software and other resource constraints.
  • Kube-controller-manager — It is a daemon that runs the cluster’s controller processes in a non-terminating loop. Their primary job is to monitor the current state of the cluster and take corrective action if the desired state of the cluster and its current state are not in agreement.

Out of all these entities, only the Kube-apiserver can communicate with the etcd datastore; other entities like the Kube-scheduler and the Kube-controller-manager get information from etcd via the Kube-apiserver.
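Once you have a Minikube cluster running (installation is covered below), you can see these control-plane entities for yourself: on Minikube they all run as pods in the kube-system namespace. A quick sketch, assuming kubectl is already pointed at the cluster:

```shell
# List the control-plane components; on Minikube they run as pods
# in the kube-system namespace on the single node.
kubectl get pods -n kube-system

# Among the output you should spot entries like (suffixed with the node name):
#   etcd-minikube
#   kube-apiserver-minikube
#   kube-scheduler-minikube
#   kube-controller-manager-minikube

# Inspect the API server pod, the only component that talks to etcd directly.
kubectl describe pod kube-apiserver-minikube -n kube-system
```

The exact pod names depend on the node name and the Kubernetes version, but the four components above should always be present.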

Now let’s take a sneak peek into how the worker nodes are structured. A worker node is made up of the following entities:

  • Container Runtime Interface (CRI): The CRI lets the kubelet run and manage the containers that the Kube-scheduler assigns to a node. Docker is the most popular container runtime used with K8S, but support for other runtimes such as containerd and CRI-O is also available.
  • Kubelet — It is the agent that executes, on the node, the commands received from the master. It also relays information about the health of the pods running on the node back to the master, so that the master can make appropriate decisions.
  • Kube-proxy — It is a proxy agent that maintains the networking rules on the node by implementing the K8S Services concept (more to come on this later!). It makes communication possible between pods within the cluster, as well as between the pods and the outside world.
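The worker-node side can be inspected the same way. A minimal sketch, assuming a running Minikube cluster whose single node carries the default name `minikube`:

```shell
# Show the node(s); the CONTAINER-RUNTIME column reveals which runtime
# (e.g. docker://...) the kubelet on that node is using.
kubectl get nodes -o wide

# Describe the node to see what the kubelet reports back to the master:
# capacity, health conditions, and the pods scheduled onto it.
kubectl describe node minikube
```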

With great power comes great responsibility! — Uncle Ben Spider-Man.

Kubernetes has already made its presence felt in many organizations across the globe. Cutting across domains from finance to entertainment and from sport to travel, many companies have benefited greatly from this technology; case studies on how different organizations use K8S are extensively documented here. There is a famous saying, “Nothing comes for free!”, and that is one hundred per cent true in the case of K8S. Migrating applications and services from legacy systems to K8S is definitely not going to be a cakewalk (at least not at the time I am writing this series!). There is a lot one can learn about how organizations planned these transitions, the best practices, and the challenges they faced during this endeavour. Sarah Wells, Technical Director for Operations and Reliability at the Financial Times, gave an excellent talk at KubeCon 2018, “The Challenges of Migrating 150+ Microservices to Kubernetes”, in which she clearly outlines the procedure they followed during the migration and provides an excellent cost-benefit analysis.

The Challenges of Migrating 150+ Microservices to Kubernetes

Installation

Installing Minikube is a two-step process. First, you install the Kubernetes command-line tool kubectl, which allows you to send API requests to the Kube-apiserver; then you install Minikube, which sets up the single-node cluster (a paradoxical term 😛) on your machine. Instructions on how to install kubectl and Minikube are well documented here and here. Please note that we will be using Docker as our container runtime, so make sure Docker is installed on the machine as well.
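Once both tools are installed, bringing up the cluster takes only a few commands. Treat this as a sketch rather than a copy-paste recipe; the exact flags can vary across Minikube versions:

```shell
# 1. Verify the kubectl client is installed (works without a cluster).
kubectl version --client

# 2. Start a single-node cluster, using Docker as the container runtime.
#    On older Minikube versions the flag was --vm-driver instead of --driver.
minikube start --driver=docker

# 3. Confirm kubectl can reach the Kube-apiserver of the new cluster.
kubectl cluster-info
kubectl get nodes
```

If `kubectl get nodes` shows a single node named `minikube` in the `Ready` state, the cluster is up and you are good to go.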

With this introduction, we will now move on to the next section, where we will quickly go through the Streamlit object detection application that we will be deploying using Minikube.

Part 2: Streamlit based object detection application
