Using Kubernetes

Managing containerised microservices with Kubernetes.

Deepak D
deechris27
Feb 7, 2019


Kubernetes Overview:

Kubernetes is an open-source orchestration system, originally developed by Google, for managing containerised applications across multiple hosts. It is written in Go and is now maintained by the CNCF (Cloud Native Computing Foundation).

Orchestra(tion):

The Master:

With the picture above in mind, imagine how a master keeps his musicians' tempo, rhythm and tune in sync. Solo musicians need to be signalled so that they play at the right time and the music is composed and delivered as intended.

Just like a concert master managing all kinds of musicians, our Kubernetes master can manage worker nodes in any environment.

Before we get into the details, let's try to answer a few questions that came to my mind when I started learning Kubernetes.

We have cloud giants like Amazon Web Services, Microsoft Azure and Google Cloud. What does Kubernetes do that isn't already offered by the cloud service providers?

What intrigued me more was that all the cloud giants rolled out managed Kubernetes services in the recent past: Amazon EKS (Elastic Kubernetes Service), AKS (Azure Kubernetes Service), GKE (Google Kubernetes Engine) and so on. But why? I researched and found plenty of jargon-filled explanations that answered my question from a purely technical standpoint.

Note: AKS is Microsoft's name for Azure Kubernetes Service; Amazon's managed offering is called EKS.

The practical reason and purpose became clear to me only after relating the technical explanations to a real incident.

Well, I'd known a startup where the management decided to switch to another cloud service provider for some reason. Whatever the reason may be, what about the cost of migration? The answer to so many of these 'what ifs' depends on how portable and provider-independent your applications are.

Reasons to choose Kubernetes over a provider-specific orchestrator (such as Amazon ECS):

  • Kubernetes can be deployed on-premises and on private and public clouds, whereas ECS can only run on AWS.
  • Kubernetes supports multiple storage options, including on-premises SANs and public clouds. With ECS, storage is limited to Amazon's own offerings, such as Amazon EBS (Elastic Block Store).
  • Kubernetes can be deployed anywhere; AWS ECS is not available for deployment outside Amazon.
  • Kubernetes is open source with over 1,200 contributors; Amazon ECS is a proprietary service.
  • The key thing is portability: applications deployed via Kubernetes can be ported to other platforms or providers as and when needed.

Amazon, currently the largest cloud service provider in the market, still has its own offering trailing well behind Kubernetes in container orchestration.

Amazon’s ECS:

Amazon's Elastic Container Service was Amazon's first container offering. ECS acts as a runtime that orchestrates containers on top of EC2 instances in your account.

Amazon ECS

Kubernetes Architecture:

Let's dive into Kubernetes.

In the Kubernetes architecture diagram above, we can see that we interact with the Kubernetes master through a UI or through the CLI (kubectl).

Let's assume we have an application made up of services such as "Login app", "Database", "Payment services", "People search", "Weather forecast" and "Simple chat bot". The diagram below depicts a cluster whose nodes contain pods wrapping the different app containers.

The Kubernetes master interacts with the nodes through an API server. If we need to interact with the master, we do so through this API as well.

Let me define and explain all the components of Kubernetes (k8s), after which we'll do some hands-on work.

The components of Kubernetes:

A pod is a wrapper around one or more containers; Kubernetes interacts with pods rather than with containers directly. In the architecture diagram we see pods with multiple containers in them, but in the real world the most common use case is a single pod running a single container, because the pod is the unit of replication in Kubernetes. When load or traffic increases, Kubernetes can be configured to deploy new replicas of a pod to the cluster as needed. It is standard practice to have multiple copies of a pod running at any time in a production system, to allow load balancing and to resist failure.

Kubernetes isolates pods from the outside world: they are not visible outside the cluster. Pods are ephemeral; they have short lifetimes and are regularly killed and created as needed. It is recommended to create pods through controllers rather than directly.

We can connect to pods in a Kubernetes cluster from outside (for example, from a browser) with the help of a service.

Services expose pods

Services give pods a stable networking endpoint. A service also acts as a load balancer that routes traffic, and in the service we specify which port traffic is accepted on. A pod carries a label (e.g. "payment app") and the service has a selector (e.g. "payment app"), so the service can be used to reach any pod carrying the label "payment app".

The IP addresses of services are dynamic: when the cluster is restarted, Kubernetes may assign a new IP address to a service. To enable communication between a container in one pod and a container in another pod, Kubernetes maintains a cluster DNS service, kube-dns. Kube-dns holds key-value pairs where the key is the name of the service and the value is its IP address, so whenever a container, pod or service wants to connect to another one, it looks the name up in kube-dns to establish the connection.

Apart from the LoadBalancer type, our services can be one of two other types:

  • ClusterIP: The service is accessible only inside the Kubernetes cluster; it is not reachable from a web browser. We use this when we want only other pods within the cluster to access the pod, e.g. a "Payment Details DB" microservice that should only be accessed by the "Payment App" microservice (see the sketch after this list).
  • NodePort: The service is accessible from the outside world. Only with this type can we specify the port we want to expose externally, and by default Kubernetes lets us pick a port in the 30000–32767 range. In a production environment, the cloud provider usually decides which port to expose.
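
To make the ClusterIP example concrete, here is a minimal sketch of what such a service could look like. The name payment-db, the label key app and the port 5432 are all hypothetical; the point is the selector and the type field. Thanks to kube-dns, other pods can reach this service simply by its name.

apiVersion: v1
kind: Service
metadata:
  name: payment-db            # hypothetical name; kube-dns resolves it to the service IP
spec:
  type: ClusterIP             # reachable only from inside the cluster
  selector:
    app: payment-db           # routes traffic to every pod carrying this label
  ports:
    - port: 5432              # port the service accepts traffic on (hypothetical)
      targetPort: 5432        # port the database container listens on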

If we are not deploying to a cloud, or if our requirement is to expose, say, port 80 or another specific port, then we go for an Ingress.

As we saw earlier, pods are ephemeral and short-lived. If a pod crashes, we need something to spin a replacement back up. The answer is the ReplicaSet.

A ReplicaSet is a configuration/specification that ensures the specified number of pod replicas (clones) is running at any given time.

A Deployment is a more sophisticated wrapper around ReplicaSets: we specify the number of replicas we want for a pod, and the Deployment manages the underlying ReplicaSet for us, giving us rolling updates with zero downtime. When a deployment is added to the cluster, it automatically spins up the requested number of pods and then monitors them. If a pod dies, the deployment automatically re-creates it. Deployments also provide updates and rollbacks of pods.
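
As a rough idea of what that looks like, here is a minimal Deployment sketch; the name payment-app, the nginx placeholder image and the replica count are assumptions, not something from this article. We'll build a real one for our own app in the hands-on section below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-app                # hypothetical name
spec:
  replicas: 3                      # desired number of pod replicas
  strategy:
    type: RollingUpdate            # replace pods gradually so some are always serving
    rollingUpdate:
      maxUnavailable: 1            # at most one replica down during an update
  selector:
    matchLabels:
      app: payment-app
  template:                        # pod template the underlying ReplicaSet stamps out
    metadata:
      labels:
        app: payment-app
    spec:
      containers:
        - name: payment-app
          image: nginx             # placeholder image
          ports:
            - containerPort: 80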

A container (for example, a Docker container) basically isolates your application from being dependent on a specific host or environment. You could build your application inside Linux and run it in a Windows or Mac environment without any issues. Again, the key thing is portability.

I found this blog very useful for understanding Docker better. I will show the Docker steps in the hands-on section below.

A Node is a worker/slave in Kubernetes and can be either a virtual or a physical machine, depending on the cluster. A node can have any number of pods. The services on a node include the container runtime, kubelet and kube-proxy.

The container runtime handles container management: pulling images, starting and stopping containers, and so on.

The kubelet is an agent that registers the node with the cluster. If a pod fails, it reports this to the master, which decides what to do. It watches and responds to the API server, and it exposes port 10255 on the node.

Kube-proxy, as the name suggests, is responsible for networking: each pod gets a unique IP address, all containers in a pod share that same IP address, and kube-proxy load-balances traffic across all the pods behind a service.

The Master Node

It is the main controlling unit of the cluster. Its components include:

The API server is the only way to talk to the cluster store, and the entry point for everything else that interacts with the cluster.

etcd (the cluster store) stores configuration data (key-value pairs) which can be accessed by the Kubernetes master's API server.

The scheduler watches the API server for new pods and assigns work to nodes, placing each workload on an appropriate node.

The controller is a daemon that watches the state of the cluster to maintain the desired state. Examples are the replication controller, the namespace controller and so on. It also performs garbage collection of pods, nodes, events, etc., and scales workloads so that the cluster's current state matches the desired state.

Example flow:

  • The Kubernetes CLI (kubectl) or UI reads from and writes to the API server.
  • The API server receives the request, validates it and logs it to the cluster store (etcd).
  • The cluster store (etcd) responds to the API server.
  • The API server invokes the scheduler as per the request.
  • The scheduler works out where to run the pod and returns that information to the API server.
  • Every request/response activity of the API server is logged to the cluster store, so the API server logs the information returned by the scheduler into etcd.
  • etcd notifies the API server back.
  • The API server invokes the kubelet on the corresponding node.
  • The kubelet talks to the container daemon through its API, via the container socket, to create the container.
  • The kubelet reports the pod status back to the API server.
  • The API server logs the new state into the cluster store (etcd).

Let’s do some Hands On

Step-1: Let’s Install Docker

Create an account or login to Docker Hub.

Go to the official Docker download page and download Docker Desktop for your operating system.
Note: for Mac, macOS Sierra 10.12 or above is required.
For Windows, Microsoft Windows 10 Professional or Enterprise 64-bit is required.

I'm using Windows 7 Home and couldn't meet the requirements for Docker Desktop, so I'm going to install and use Docker Toolbox for the demo. Installers are available for both Mac and Windows.

After downloading and installing Docker Toolbox successfully, you should notice that the environment variables have been updated.

Docker environment variables after installation

I have the Cygwin terminal installed; Git Bash or PowerShell work just as well.
If you're on Linux or Mac, no worries. Otherwise, you could install Ubuntu from the Microsoft Store after enabling Windows Subsystem for Linux, which gives you a Linux terminal inside Windows. The terminal is your choice; any of them is fine to follow along.

The picture above shows how to enable Windows Subsystem for Linux. This requires a restart. After restarting, you should see a bash terminal on your system.

Bash terminal in the windows start menu

This terminal needs a Linux distribution, so install Ubuntu for free from the Microsoft Store. After installation, set a username and password to start using the Linux terminal.

Free to use Ubuntu from Microsoft Store

We can use the Docker Quickstart Terminal, Git Bash, Cygwin or PowerShell; pick whichever terminal you prefer.

Let's run “docker -v” in the terminal to get the installed Docker version, or simply “docker” to list all the available commands.

You might observe that docker commands in our terminal respond slowly. This is because the docker commands talk to the Docker daemon inside the VM.

“docker info” should get you the details of all the running containers, images, etc. For this command and the other docker commands below to work, make sure the Linux VM created by Docker Toolbox is running in Oracle VirtualBox.

Docker commands expect a Docker daemon that's built for a Linux environment, so on Windows we need the VM for our docker commands to work.

docker info

Let's create a simple index.html file with some sample content and put it in a Docker container. I'm using Microsoft Visual Studio Code.

index.html

Now, let's create a Dockerfile in which we list the steps to build a Docker image. Click on New File and name it "Dockerfile"; no extension is needed.

Dockerfile

In the Dockerfile, paste the following:

FROM nginx
COPY . /usr/share/nginx/html

For an Apache server, we could use httpd instead of nginx. COPY copies the files from the build context (the ".") into the given path inside the image.
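
For example, an Apache-based variant of the same Dockerfile might look like this (the official httpd image serves files from /usr/local/apache2/htdocs/ by default):

FROM httpd
# copy the current directory into Apache's default document root
COPY . /usr/local/apache2/htdocs/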

When you do an “ls” in bash, you can see all the files and folders.

Now that we have our index.html file and Dockerfile ready, let's build the image and run the container to view our web page.

Run “docker build -t deepakswebpage-image:v1 .” to create the image. The dot at the end is needed, otherwise you'll get a "build requires 1 argument" error; it tells Docker to use the Dockerfile in the current directory as the build context.

Docker Build

“docker image ls” should list all your images.

Next, let's run the container on a port of our choice.

docker container run -d -p 3030:80 deepakswebpage-image:v1

docker run

Now we should be able to see our web page in the browser by going to
http://192.168.99.100:3030/. 3030 is the host port we chose in the run command; it can be anything. 192.168.99.100 is the default IP address of the Docker Toolbox (Oracle VirtualBox) VM. If you're on Docker Desktop, you can access the page via http://localhost:3030.

Web-page with our sample content

Let's push this container image to a Docker Hub repository. Log in to your Docker Hub account via the terminal with “docker login”.

docker login

Create a repository either through the UI on Docker Hub or from the terminal.

I named my repo demo-repo. We can also see the option to link our GitHub account; with GitHub linked, auto-build triggers a new Docker build for every git push.

“docker tag deepakswebpage-image:v1 dd43028/demo-repo:release0” tags our local image with the repository name so it can be pushed.

“docker push dd43028/demo-repo:release0” pushes our container image to the repository. The “release0” tag is just for versioning our images; it's optional but good practice.

docker push

In our Windows environment we installed Docker Toolbox (with its Oracle VirtualBox VM) so that our docker commands work, because the docker CLI talks to the Docker daemon inside that VM. Since we're going to put this container into Kubernetes, we now need to configure docker to talk to the Docker daemon in the Kubernetes environment instead.

Let's download, configure and set up Kubernetes on the Windows machine.

Download kubectl and minikube-windows-amd64 from the Kubernetes downloads page. The same page has steps/instructions for other operating systems as well.

Rename the downloads to kubectl.exe and minikube.exe, place them in a folder, copy that folder's path and add it to the system PATH environment variable.

Environment Variables

After doing the above steps, you can check that they're set up by running “kubectl version” and “minikube” in your terminal.

So, if you're all set, let's run “minikube start” in the terminal. This does a few things: it starts a VM, gets an IP for the VM, starts Kubernetes locally and configures kubectl to talk to it.

→ After minikube start completes successfully, run “kubectl version”. This shows you both the client and server versions.

→ “minikube ip” should get you the IP address

We can see that the IP address is the same as that of the Oracle VirtualBox VM. So now we need to configure docker to interact with the Kubernetes environment.

→ Run “minikube docker-env” in your terminal.

Now, you can either copy, paste and run all of those export commands, or just run “eval $(minikube docker-env)”.

Then type “echo $DOCKER_HOST”.

We can see the IP address and the port.

Now do “docker image ls” to see the default images added by Kubernetes

Let's create a pod to wrap our container image. Create a YAML file (I named it my-pod.yaml) and write the content shown below, changing the image value to your own repository and image name.

Yaml for pod
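
Since the original screenshot isn't reproduced here, this is a minimal sketch of what my-pod.yaml could look like. It assumes the pod name simpleapp (used in the kubectl commands below), the image we pushed earlier and container port 80 (nginx's default):

apiVersion: v1
kind: Pod
metadata:
  name: simpleapp                          # pod name used in the kubectl commands below
spec:
  containers:
    - name: simpleapp
      image: dd43028/demo-repo:release0    # replace with your own repository/image
      ports:
        - containerPort: 80                # nginx serves our index.html on port 80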

Do “kubectl apply -f my-pod.yaml” to create the pod; ‘-f’ specifies the file name.

We use YAML ("YAML Ain't Markup Language") files to define the components of Kubernetes.

“Indentation is important in a YAML file. Avoid using the Tab key to create the indents”.

We now have the pod. As explained above, pods are not accessible outside the Kubernetes cluster: if you try to access http://192.168.99.100 (the minikube IP), you'll get "This site can't be reached".

The “kubectl describe pod simpleapp” command gets us all the information about the pod we created.

Pod Describe Event Section

In the "Events" section of the pod describe output above, we can see that our Docker image (dd43028/demo-repo:release0) was pulled successfully.
But we're still not able to access the pod from a browser. As explained in the sections on pods above, to reach the web page inside the Docker container wrapped by our pod, we need to create a service.

Before creating a service, let's look at the directory listing of the container in the pod: “kubectl exec simpleapp ls”.

Directory Listing

We can see the same directory listing that we saw in the Docker terminal. Let's create a service (my-service.yaml) and access our web page.
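
The YAML screenshot isn't reproduced here either, so this is a minimal sketch of what my-service.yaml might contain. It assumes a NodePort service on port 30020 (the port used later in this article) and a selector of app: simpleapp; the label key/value pair is whatever you choose to add to the pod in the next step:

apiVersion: v1
kind: Service
metadata:
  name: simpleapp-service        # hypothetical service name
spec:
  type: NodePort                 # expose the service outside the cluster
  selector:
    app: simpleapp               # must match the label we add to the pod below
  ports:
    - port: 80                   # service port inside the cluster
      targetPort: 80             # container port nginx listens on
      nodePort: 30020            # external port used later in this article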

kubectl apply -f my-service.yaml

Service creation

We have one more step before we can access the web page inside the pod via this service: specifying a label for our pod. Go to the my-pod.yaml we created and add labels under metadata.

Make sure the labels key-value pair in the pod YAML is the same as the selector in the service YAML; this is how we link the service to the pod. Apply the change again with “kubectl apply -f my-pod.yaml”.
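
With the label added, my-pod.yaml would look roughly like this; the app: simpleapp pair is just an assumption, and any key/value works as long as the service selector matches it:

apiVersion: v1
kind: Pod
metadata:
  name: simpleapp
  labels:
    app: simpleapp                         # must match the selector in my-service.yaml
spec:
  containers:
    - name: simpleapp
      image: dd43028/demo-repo:release0
      ports:
        - containerPort: 80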

We’re all set to view our web-page via port 30020 and the IP 192.168.99.100.
Let’s visit http://192.168.99.100:30020.

Yay! We're now viewing the web page that lives in the Docker container, wrapped by the pod, inside the Kubernetes cluster and exposed through the service. Ignore the 101 instead of 100 at the end of my IP in the screenshot; I had run this before, and I think 100 was already taken.

The “kubectl get all” command lists everything we've defined in our Kubernetes cluster.

kubectl get all

In the above image, service/kubernetes is the REST API exposed by the Kubernetes cluster itself. Whenever we run a kubectl command, it makes requests to this REST API.

“minikube status” would give you the status of the host, the kubelet and the apiserver.

→ “kubectl delete -f my-pod.yaml” (or “kubectl delete pod simpleapp”) can be used to delete the pod.

→ “kubectl delete -f my-service.yaml” can be used to delete the service.

All Kubernetes commands can be found in the official kubectl reference documentation.

What if our container or pod crashes? We need backups, load balancing and high availability; our application has to be resistant to downtime. Let's create a ReplicaSet that spins up a replacement copy of our application whenever a pod crashes.

I took the same Docker image we pushed earlier, changed the tag from release0 to release1 and pushed it again.

Create a YAML file for the replica set; I named it my-replica.yaml. Copy the contents shown below into your YAML file.
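
The original screenshot isn't reproduced here, so this is a minimal sketch of what my-replica.yaml could look like. It assumes the release1 image pushed above, the same app: simpleapp label and a single replica (the actual count in the original may differ):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: simpleapp                          # replica set name used in the commands below
spec:
  replicas: 1                              # number of pod copies to keep running
  selector:
    matchLabels:
      app: simpleapp
  template:                                # pod template for the replicas
    metadata:
      labels:
        app: simpleapp                     # matches the selector above and the service
    spec:
      containers:
        - name: simpleapp
          image: dd43028/demo-repo:release1   # the re-tagged image
          ports:
            - containerPort: 80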

kubectl apply -f my-replica.yaml” to create our replica set.

Let's delete all the pods created previously, so that we can see the pod created by our replica set. The pod created by the replica set will have a random suffix appended to the name we specified, e.g. simpleapp-y67re43.

Pod name with a random suffix, created by the replica set

In the above screen grab, we can see the pod name with a random alphanumeric suffix appended; that's how the replica set distinguishes the pods it manages.

Let's describe the replica set we created to see the details: “kubectl describe replicaset simpleapp”.

Let's simulate a pod crash and see our replica set in action: delete the pod with “kubectl delete pod simpleapp-44jn4”.

Now, do a “kubectl get all” to see that a new pod, with a new suffix, has been created automatically.

As mentioned earlier, the recommended way to do the above is through Deployments. A Deployment is a more sophisticated wrapper around a ReplicaSet that ensures zero downtime. Let's quickly create a deployment to see one in action and how it differs from a plain replica set.

Let's delete the replica set we created: “kubectl delete replicaset simpleapp”.

Create a YAML file for the deployment; I named it my-deployment.yaml. This YAML file is almost the same as the replica set's: just change the 'kind' to Deployment. I'm going for 2 replicas this time.
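
Under the same assumptions as the my-replica.yaml sketch above, my-deployment.yaml could look roughly like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpleapp
spec:
  replicas: 2                              # two pod copies this time
  selector:
    matchLabels:
      app: simpleapp
  template:
    metadata:
      labels:
        app: simpleapp
    spec:
      containers:
        - name: simpleapp
          image: dd43028/demo-repo:release1   # or whichever release tag you pushed
          ports:
            - containerPort: 80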

Now if we do “kubectl get all”,

We see that our deployment has created a replica set, plus two new pods whose name suffixes are longer than the ones a plain replica set creates. A Deployment is an entity that manages the replica set for us. Unlike with a plain replica set, when we later change the image value in my-deployment.yaml (for example from release0 to a newer release tag), the deployment rolls that change out for us.

By doing so, we have the deployment taking care of our application through rolling updates: a fresh replica set spins up pods with the new release while the old ones keep serving, ensuring zero downtime.

“kubectl rollout status deployment simpleapp” shows the progress of the rollout.

So that concludes our "Using Kubernetes" story. I'll see if I can deploy our K8s cluster to the cloud and add those instructions here later.

Other container orchestration tools/software in the market are:

Red Hat OpenShift
Docker Swarm
Apache Mesos
Helios
Shippable
Centurion
Marathon.

I hope you liked this post. Feel free to post your inputs/opinions/suggestions in the comments. Thanks, Deepak.

My YouTube Channel
Twitter
MyTidbit
Facebook

Copyright © 2019 Deepak D. All Rights Reserved.
