What is Kubernetes?

Ahamed Raaseem
Apr 27

Kubernetes provides the fundamentals and building blocks to construct a usable platform for your team to develop on and release to.
Users can administer their Kubernetes clusters through a graphical user interface as well as imperative and declarative command line interfaces, designed to manage the entire lifecycle of containerized applications and services.

It is possible to scale applications up and down, perform rolling deployments, and manage which services should respond to certain requests. Kubernetes also provides an extensible framework, allowing teams to extend the core Kubernetes primitives to their hearts’ delight, and even to create concepts of their own.

As with most frameworks, one of the downsides is that Kubernetes is missing many specific pieces of functionality out of the box that would be needed to classify it as a turn-key solution. The standard distribution doesn’t prescribe how services speak to each other (it doesn’t even include a networking component!). Still, other distributions exist, and it is also possible to build a new one.

Kubernetes is a platform for automating the orchestration of containers, enabling applications to run at scale on a myriad of platforms, which may contain a mix of processor architectures and operating systems at the discretion of the implementor.

Hardware

Nodes

A node is the smallest unit of computing hardware in Kubernetes. It is a representation of a single machine in your cluster. In most production systems, a node will likely be either a physical machine in a datacenter, or virtual machine hosted on a cloud provider like Google Cloud Platform. Don’t let conventions limit you, however; in theory, you can make a node out of almost anything.

Thinking of a machine as a “node” allows us to insert a layer of abstraction. Now, instead of worrying about the unique characteristics of any individual machine, we can instead simply view each machine as a set of CPU and RAM resources that can be utilized. In this way, any machine can substitute any other machine in a Kubernetes cluster.

Cluster

In Kubernetes, nodes pool together their resources to form a more powerful machine. When you deploy programs onto the cluster, it intelligently handles distributing work to the individual nodes for you. If any nodes are added or removed, the cluster will shift around work as necessary. It shouldn’t matter to the program, or the programmer, which individual machines are actually running the code.

Persistent Volumes

If a program tries to save data to a file for later, but is then relocated onto a new node, the file will no longer be where the program expects it to be. For this reason, the traditional local storage associated with each node is treated as a temporary cache for holding programs, and any data saved locally cannot be expected to persist.

To store data permanently, Kubernetes uses Persistent Volumes. While the CPU and RAM resources of all nodes are effectively pooled and managed by the cluster, persistent file storage is not. Instead, local or cloud drives can be attached to the cluster as a Persistent Volume. This can be thought of as plugging an external hard drive into the cluster. Persistent Volumes provide a file system that can be mounted to the cluster, without being associated with any particular node.
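As an illustrative sketch, a program claims persistent storage through a PersistentVolumeClaim; the claim name and size below are hypothetical:

```yaml
# Hypothetical claim for 10Gi of persistent storage;
# the cluster binds it to a matching Persistent Volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce       # mountable read-write by a single node at a time
  resources:
    requests:
      storage: 10Gi
```

A pod can then mount the claim as an ordinary file system, without caring which node it ends up running on.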

Software

Containers

A container is a stand-alone, executable piece of software that includes everything required to run it: code, libraries, and any external dependencies. It ensures that what is running is identical, even when running in a different environment. This is achieved by isolating the running code from its execution context.

This is achieved on Linux by carving out an isolated slice of the system using kernel features such as namespaces and cgroups. This provides a high degree of isolation from the rest of the operating system, but without the runtime performance hit of virtualised environments such as VMware, KVM, etc.

Pod

The pod is the most basic object in Kubernetes.

A pod is a collection of containers, with shared storage and network, and a specification for how to run them. Each pod is allocated its own IP address. Containers within a pod share this IP address and port space, and can find each other via localhost.

A pod should be seen as an ephemeral primitive: it can be killed, replaced, and rescheduled at any time.
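A minimal pod manifest, sketched here with a hypothetical name and a stock nginx image, looks something like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # hypothetical pod name
spec:
  containers:
    - name: web
      image: nginx:1.25  # any container image works here
      ports:
        - containerPort: 80
```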

Deployments

Although pods are the basic unit of computation in Kubernetes, they are not typically directly launched on a cluster. Instead, pods are usually managed by one more layer of abstraction: the deployment.

A deployment’s primary purpose is to declare how many replicas of a pod should be running at a time. When a deployment is added to the cluster, it will automatically spin up the requested number of pods, and then monitor them. If a pod dies, the deployment will automatically re-create it.

Using a deployment, you don’t have to deal with pods manually. You can just declare the desired state of the system, and it will be managed for you automatically.
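The behaviour described above can be sketched as a deployment manifest; the names and replica count are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment   # hypothetical name
spec:
  replicas: 3              # desired number of identical pods
  selector:
    matchLabels:
      app: hello
  template:                # the pod template to replicate
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

If one of the three pods dies, the cluster notices the shortfall against the declared state and re-creates it automatically.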

Ingress

Using the concepts described above, you can create a cluster of nodes, and launch deployments of pods onto the cluster. There is one last problem to solve, however: allowing external traffic to your application.

By default, Kubernetes provides isolation between pods and the outside world. If you want to communicate with a service running in a pod, you have to open up a channel for communication. This is referred to as ingress.

There are multiple ways to add ingress to your cluster. The most common are adding either an Ingress controller or a LoadBalancer. The exact tradeoffs between these two options are out of scope for this post, but be aware that ingress is something you need to handle before you can experiment with Kubernetes.
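For illustration, an Ingress resource that routes external HTTP traffic to a hypothetical backing service might look like this (it only takes effect once an Ingress controller is installed in the cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress            # hypothetical name
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-service  # assumed Service sitting in front of the pods
                port:
                  number: 80
```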

Implementing an API by Creating a GKE Cluster on GCP

Introduction to the API

This API will have two endpoints. The first prints the name of a U.S. state when the given URL parameter is that state’s abbreviation. For example:

http://host/codeToState?code=AL, given the abbreviation “AL”, should return the value “Alabama”.

The second endpoint provides the same functionality the other way around:

http://host/stateToCode?state=Alabama should return “AL” as the abbreviation. It’s as simple as that.
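The lookup logic behind both endpoints can be sketched in Python. The state table here is truncated to a few entries for illustration, and the function names are my own; a full implementation would hold all 50 states and sit behind an HTTP framework:

```python
# Truncated mapping for illustration; a real version lists all 50 states.
STATE_BY_CODE = {
    "AL": "Alabama",
    "AK": "Alaska",
    "AZ": "Arizona",
}
# Reverse mapping, derived so the two tables can never drift apart.
CODE_BY_STATE = {name: code for code, name in STATE_BY_CODE.items()}

def code_to_state(code):
    """Backs /codeToState?code=... : abbreviation -> state name."""
    return STATE_BY_CODE.get(code.upper())

def state_to_code(state):
    """Backs /stateToCode?state=... : state name -> abbreviation."""
    return CODE_BY_STATE.get(state.title())
```

Each HTTP handler then only has to read its query parameter and return the result of the matching function.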

Pre-requisites:
- You already have a Google Account and have access to Google Cloud Console (https://console.cloud.google.com)
- A billing account enabled that can be used for this project

Process

Step 1: Create a Google Cloud Project or select an existing project
I am going to create a new project; however, an existing project can be selected as long as it has a valid billing account available. The following screenshot depicts the creation of a new project.

Create a new Google Cloud Project
After Creating the new Project — Home Dashboard

Step 2: Create a Kubernetes Cluster

Now that I have a project created, I can proceed to creating a cluster of my choice. To do this, select the Kubernetes Engine menu item from the left-side navigation. It may take a minute or two to set up Kubernetes Engine for the first time. The following screenshot depicts the home screen for Kubernetes Engine.

Kubernetes Engine Home Screen
Adding Cluster Info

After one to two minutes you should see that your cluster is created and ready to use.

Ready-to-use cluster
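As an alternative to the console UI, the same kind of cluster can be created from the command line with the gcloud CLI; the cluster name, zone, and node count below are illustrative:

```shell
# Hypothetical cluster name and zone; adjust to your project's region.
gcloud container clusters create demo-cluster \
    --zone us-central1-a \
    --num-nodes 3
```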
