Hello World!!! of Kubernetes [Part 1]

So, I started working with containers and Kubernetes some months back. I was an absolute beginner to the field before that, and thanks to my awesome colleagues at Infracloud I got to know Kubernetes and containers better.

I would like to share how I got started working with Docker and Kubernetes in this guide.


When we tell someone who is new to Kubernetes that Kubernetes is a container orchestrator, the first questions that come to their mind are: what does orchestration actually mean? How can I create a container out of an application that I already have, and how can I orchestrate that application using Kubernetes?

We are going to look into all of these questions in this guide so that you understand why you should use Kubernetes and how to get started with it. We will not dwell much on definitions of the terminology; we will focus on actually doing things.

Let’s take the example of a simple API written in Go: we will build a Docker container for that API and finally deploy that container to the Kubernetes cluster (minikube) that we have.

To follow this guide and do the tasks you just need a Kubernetes cluster (minikube in our case) and Docker running on your machine. I am running Kubernetes v1.13.3 and Docker 18.09.2 on my machine.

1. Make an API

Below is the code of the API that we will containerize.
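The original code was embedded as a gist and is not reproduced here; a minimal sketch with the same behaviour (two hard-coded books served at /api/books on port 8080; the book data is made up for illustration) would be:

package main

import (
    "encoding/json"
    "log"
    "net/http"
)

// Book is a minimal model for the example API.
type Book struct {
    ID     string `json:"id"`
    Title  string `json:"title"`
    Author string `json:"author"`
}

// getBooks writes two hard-coded books as JSON.
func getBooks(w http.ResponseWriter, r *http.Request) {
    books := []Book{
        {ID: "1", Title: "Book One", Author: "Author One"}, // placeholder data
        {ID: "2", Title: "Book Two", Author: "Author Two"}, // placeholder data
    }
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(books)
}

func main() {
    http.HandleFunc("/api/books", getBooks)
    log.Println("listening on :8080")
    log.Fatal(http.ListenAndServe(":8080", nil))
}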

If you run the above code using go run main.go you will have an API listening on port 8080 at the resource path /api/books. Make a GET request to localhost:8080/api/books and you will get two book objects back as JSON.

2. Make a Container of that API

The next step is to build a Docker image for the API. The Dockerfile below can be used to containerize the above Go program.
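The Dockerfile is not reproduced here either; a typical multi-stage sketch for the program above (file name, base images and binary name are my assumptions) would be:

# Build stage: compile the Go binary (assumes the code above lives in main.go).
FROM golang:1.12 AS build
WORKDIR /app
COPY main.go .
RUN CGO_ENABLED=0 GOOS=linux go build -o restapi main.go

# Run stage: a small image containing only the compiled binary.
FROM alpine:3.9
COPY --from=build /app/restapi /restapi
EXPOSE 8080
ENTRYPOINT ["/restapi"]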

If you build an image from this Dockerfile using docker build -t restapi:1.0 . you will see a Docker image created on your machine.

The next step is to run the image and create a container from it, so that we can check whether the containerized API is accessible.

docker run -d -p 8080:8080 restapi:1.0

We are mapping port 8080 of the host machine to port 8080 of the container because, as we can see in main.go, our API listens on port 8080. Now, if we access localhost:8080 we will see the same output as when we ran main.go directly (the output of step 1).
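With the container running you can do a quick sanity check:

docker ps                        # the restapi:1.0 container should show up here
curl localhost:8080/api/books    # should return the same JSON array of two books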

So, now what we have is an API containerized using Docker and listening on host port 8080. Our goal now is to deploy this image restapi:1.0 onto Kubernetes.

Kubernetes

Kubernetes is all about decoupled, transient services: it simply means it consists of services that can be terminated and replaced at any point in time.

[Kubernetes architecture diagram, courtesy: Linux Foundation]

In the simplest form, Kubernetes is made up of a central master and some worker nodes. The master runs an APIServer, a Scheduler, various controllers and a storage system to keep the state of the cluster, container settings and network configuration.

Kubernetes exposes an API via the kube-apiserver. You can communicate with this API using a local client called kubectl, or you can write your own client, or simply use curl.
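For example, assuming kubectl is already configured for your minikube cluster, both of the following talk to the same kube-apiserver:

kubectl get nodes

# or go through the raw REST API via a local proxy
kubectl proxy &
curl localhost:8001/api/v1/namespaces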

The kube-scheduler is forwarded the requests for running containers that come to the API, and it finds a suitable node (minion) to run each container on. In other words, if a request to run a container arrives at the kube-apiserver, it is forwarded to the kube-scheduler, which then picks a suitable node to run that container on.

Each node in the cluster runs two processes: a kubelet and kube-proxy.

The kubelet receives requests to run containers, manages any necessary resources and watches over them on the local node. The kubelet interacts with the local container engine, which is Docker by default, but it could be rkt, CRI-O (which is growing in popularity) or any other.

The kube-proxy creates and manages networking rules to expose the container on the network.

Enough of the terminology; let’s look at the Kubernetes resources that are required to run the container we created in step 2.

PODs

Kubernetes uses objects named PODs to serve your containerized images, and these objects are what Kubernetes calls resources.

If you have created an image and it is only present locally, you will have to load it into minikube’s Docker daemon, because minikube uses its own, separate Docker daemon and Kubernetes will not be able to access the image otherwise. To do this you can use the command below.
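The command is not reproduced here; a common way to do this (assuming you still have the Dockerfile from step 2) is to point your shell at minikube’s Docker daemon and build the image there:

eval $(minikube docker-env)    # make docker talk to minikube's daemon
docker build -t restapi:1.0 .
eval $(minikube docker-env -u) # optional: point docker back at the host daemon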

restapi:1.0 should be replaced by your image name

Once you have the image ready in minikube’s Docker daemon you can create PODs that will serve this image. Now the important part comes into the picture: you can directly create a POD in Kubernetes for restapi:1.0, but if that POD fails for some reason and is no longer running, you will have to manually create another POD so that requests to restapi:1.0 can be served by the newly created one. To resolve this issue we can use another resource called a Deployment.

Deployment

This is where the orchestration part comes to the rescue: you can create another Kubernetes resource, a Deployment, and tell it that you need three replicas of this image (restapi:1.0) running on your cluster. Now if anything bad happens to a POD, the Deployment is responsible for re-creating another POD with that container. This is a very trivial example of how Kubernetes provisions containers.

There are several ways to create resources in Kubernetes; the popular ones are using the CLI directly or through resource definitions. We will be using the latter, so we will write resource definitions in YAML format and use kubectl create -f <filename> to create the resource defined in that file. We will be creating all these resources in the api-dev namespace, so if you see -n api-dev in a command it means we are running that command against the api-dev namespace.
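If the api-dev namespace does not exist in your cluster yet, create it first:

kubectl create namespace api-dev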

Create a file (let’s call it restapi-deployment.yaml) using the resource definition below
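The definition is not embedded here; a sketch of what the guide describes (three replicas of restapi:1.0; the label app: restapi and the resource name are assumptions of mine) would be:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: restapi-deployment
spec:
  replicas: 3                          # three PODs serving the image
  selector:
    matchLabels:
      app: restapi
  template:
    metadata:
      labels:
        app: restapi                   # the label the Service will later select on
    spec:
      containers:
      - name: restapi
        image: restapi:1.0
        imagePullPolicy: IfNotPresent  # use the image loaded into minikube's daemon
        ports:
        - containerPort: 8080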

and create the Deployment resource in your Kubernetes cluster using kubectl create -f <filename>. One thing to note here is the label that we are giving to the PODs; I will explain later why we give them a label.

Once the Deployment is created successfully you will have three PODs running, and describing those PODs will show that they are running the restapi:1.0 image, as we specified in the Deployment, and that they all carry the label we defined in the Deployment’s resource definition.

Since all the PODs are running successfully, we can verify that everything works by forwarding one of the host machine’s ports to a POD using the command below
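For example (replace <pod-name> with the name of one of the PODs from kubectl get pods -n api-dev):

kubectl port-forward <pod-name> 8081:8080 -n api-dev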

Please make sure to change the POD’s name

and then we can curl localhost:8081/api/books to get all the books.

So, yes, we now have an API running on Kubernetes, and we checked using port forwarding that it returns the correct output.

The next step is to expose this API to the outside world so that we can access it from wherever we want. One option would be to use port-forward (it’s so bad that I shouldn’t even mention it here), but we would have to run a port forward for every POD running that image, associating a different port each time, and re-creating a POD changes its name, so the port that was mapped to it would no longer work.

Services

To resolve this issue Kubernetes has the concept of Services. The problem with port forwarding was that we had to run the command for every POD we have, and the forward depends on the POD’s name, which breaks if the Deployment re-creates the POD for some reason. A Service groups the PODs, so you do not have to create one per POD, and it groups them by label, so you don’t need to worry about PODs being re-created: every newly created POD will carry the same label.

So when we create the Service we tell it to group the PODs that have a specific label. These labels are given to the PODs by the Deployment we created: in the step where we create the Deployment, we specify the labels the PODs should be created with, and the Service will group the PODs whose labels match the ones defined in the Service’s resource definition.

Create the Service resource using the resource definition below
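The definition itself is not embedded here; a sketch consistent with the rest of the guide (the Service name restapi-service is used in later steps, and the selector matches the label assumed in the Deployment sketch above) would be:

apiVersion: v1
kind: Service
metadata:
  name: restapi-service
spec:
  selector:
    app: restapi        # group the PODs carrying this label
  ports:
  - port: 8081          # the port the Service listens on inside the cluster
    targetPort: 8080    # the port the containers listen on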

So you can note here that the Service is *listening* on port 8081 and it will forward requests to the PODs on port 8080.

Now that we have the Service created, we should check that everything works so far. Use the port forward command to map a host port to the Service’s port and you should be able to see the correct output from the API.
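Something like the following should work; recent kubectl versions can forward to a Service directly instead of a single POD:

kubectl port-forward service/restapi-service 8080:8081 -n api-dev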

8080 is the host machine’s port and 8081 is the k8s Service’s port

If you just curl localhost:8080/api/books you will see the same output that you saw in the previous steps. But as you can see, to actually access the API we are still port forwarding; the API is available inside the cluster but not outside of it.

To check whether the API is accessible inside the cluster, you will have to get a shell to one of the running PODs and curl the Service. First, get the PODs
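That is the usual:

kubectl get pods -n api-dev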

listing all the PODs

To execute any command from inside a POD, get a shell to the POD using the command below
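For example (replace <pod-name> with the name of one of your PODs):

kubectl exec -it <pod-name> -n api-dev -- sh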

as you can see, you are inside the POD now

To curl the Service from inside the POD, you just have to run curl <servicename>:<port>/path. In our case the Service name is restapi-service, the port where the Service is exposed is 8081 (see the resource definition) and the path is /api/books.
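From the shell inside the POD that is (if the container image does not ship curl, busybox’s wget works as well):

curl restapi-service:8081/api/books
# or, on a minimal busybox/alpine image:
wget -qO- restapi-service:8081/api/books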

So, as you can see, we are able to access the API from inside the cluster, but not from outside of it.

Ingress

Now we have our API running and we are able to access it from inside the Kubernetes cluster, but to access this resource (Service) from outside the cluster we will have to leverage the power of Kubernetes Ingress resources.

If we look at the image from the official Kubernetes docs

    internet
        |
   [ Ingress ]
   --|-----|--
   [ Services ]

As we can see, an Ingress enables us to access the Services from outside of the Kubernetes cluster, i.e. from the internet.

Usually Ingress resources only work if an ingress controller is installed, but in the case of minikube you don’t need to install one yourself; you just need to enable the ingress addon in minikube using the command below
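That is:

minikube addons enable ingress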

OK, so once we have the addon enabled we can create the Ingress resource and define the rules for which requests this Ingress should respond to and which Service those requests will be served by.

Create a file named restapi-ing.yaml with the resource definition below
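The definition is not embedded here either; for the Kubernetes version used in this guide (v1.13) a sketch could look like this (the Ingress name is an assumption of mine; the path, Service and port come from the text below):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: restapi-ingress
spec:
  rules:
  - http:
      paths:
      - path: /api/books
        backend:
          serviceName: restapi-service
          servicePort: 8081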

and create the Ingress resource using the command below
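For example:

kubectl create -f restapi-ing.yaml -n api-dev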

any of the resources mentioned above can be created using this command; api-dev is the namespace we are creating all the resources in

You can clearly see that this Ingress will forward any request for the path /api/books to the Service restapi-service on port 8081.

If you now try to access the API using localhost:8080/api/books you will not get any response; to access the API from outside of the cluster you have to use minikube’s host IP. You can get it using the command minikube ip and then make a GET request to minikube_ip/api/books.
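That is, something like:

curl $(minikube ip)/api/books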

I am planning to write another part of this guide that will cover PersistentVolume, PersistentVolumeClaim, StatefulSet, DaemonSet and some other resources.

I would really appreciate any suggestions or improvements to this guide; I will be happy to update it.