Create Your First Kubernetes Cluster with Confidence
I am fluent with Docker and Docker Swarm, and around the time I learned them, Kubernetes (K8s) was starting to overtake Docker Swarm (worth looking into). I always wanted to learn Kubernetes, but I kept postponing it. I finally got some time around Christmas to read about it, and I thought: why not build an application while it's fresh in my mind, so that others and I can benefit from it?
Topic:
Today we are going to create a React app that fetches a random number by calling an Express app. We will orchestrate the whole process with Kubernetes. First we will dockerize our front-end and back-end apps; then, using K8s, we will deploy the pods [front-end app (React) / back-end app (Express)] and access them via K8s Services.
The below diagram depicts what we are going to achieve today.
We will create a K8s cluster, plus Services and Deployments for the front end and back end.
Prerequisites:
- Docker for Desktop (latest version)
- Kubernetes (standalone, or the one bundled with Docker Desktop).
- Node & NPM (only if you want to run the applications standalone).
- Basic familiarity with YAML.
This course requires basic working knowledge of Docker, Node & NPM. Let's dive in without further ado.
Kubernetes:
What is K8s?
Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
Why K8s?
- Service discovery and load balancing
- Storage orchestration
- Automated rollouts and rollbacks
- Automatic bin packing
- Self-healing
- Secret and configuration management
When to use K8s?
Well, you can apply the aforementioned features (see "Why K8s?") to most applications. However, it's best to weigh a K8s cluster against a standalone application and decide which one fits your case.
Source Code:
Please download the project. The project has 2 sub folders in it.
• Client -> React based app.
• Server -> Express based app.
Please follow the below steps to set up the project and start the K8s cluster:
Install:
If you have Docker Desktop, go to Preferences, open the Kubernetes tab, and click Enable Kubernetes. It takes a while to spin up Kubernetes on your machine, so go make a coffee while it does its magic.
To verify that K8s is running, type the below two commands:
kubectl version

Outputs:

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.8", GitCommit:"211047e9a1922595eaa3a1127ed365e9299a6c23", GitTreeState:"clean", BuildDate:"2019-10-15T12:02:12Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

kubectl cluster-info

Outputs:

Kubernetes master is running at https://kubernetes.docker.internal:6443
KubeDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Client:
• cd client
• npm install
• npm run build
• docker build -t frontend:1.0 .
• kubectl apply -f frontend.deploy.yml
• kubectl apply -f frontend.service.yml
Backend:
• cd server
• npm install
• docker build -t backend:1.0 .
• kubectl apply -f backend.deploy.yml
• kubectl apply -f backend.service.yml
Go to the browser:
Type localhost and hit Enter; you should see the application loaded.
Dissect:
PODS:
Pods are the smallest deployable units of computing that can be created and managed in Kubernetes.
kubectl run nginx-frontend --image=frontend:1.0
The above command creates a pod that hosts the front-end container. You can't access it yet, since no host port is exposed for the container. We will uncover that later.
A pod can host multiple containers as well. Instead of creating one via the command line, let's create one via a YAML file.
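The app.pod.yml from the repository isn't reproduced in this article. As a rough sketch (container names taken from the commands below; image names and ports assumed from the frontend/backend builds above), a two-container pod manifest could look like this:

```yaml
# app.pod.yml (hypothetical sketch of a multi-container pod)
apiVersion: v1
kind: Pod
metadata:
  name: mymulticontainerapp
  labels:
    name: mymulticontainerapp
spec:
  containers:
    - name: myfrontendapp
      image: frontend:1.0   # nginx serving the React build, listening on 80
      ports:
        - containerPort: 80
    - name: mybackendapp
      image: backend:1.0    # Express app, listening on 3000
      ports:
        - containerPort: 3000
```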
# To create a pod with multiple containers
kubectl apply -f app.pod.yml

# To see container status
kubectl get pod/mymulticontainerapp

Outputs:

NAME                  READY   STATUS    RESTARTS   AGE
mymulticontainerapp   2/2     Running   0          9m

# To see more details about the container
kubectl describe pod/mymulticontainerapp
To access the container from the host machine, you have to do port forwarding, i.e. attach a host port to a container port. You can do that via the below command:
kubectl port-forward pod/mymulticontainerapp 9999:80 3000:3000

Outputs:

Forwarding from 127.0.0.1:9999 -> 80
Forwarding from [::1]:9999 -> 80
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
The above command maps host port 9999 to container port 80 and host port 3000 to container port 3000, which is what our front-end and back-end containers are listening on. Any request to host port 9999 is forwarded to the container on port 80; any request to host port 3000 is forwarded to the container on port 3000.
Go to the browser and open 127.0.0.1:9999 or localhost:9999; it should load the front-end app.
To delete a pod, use the below command (k is a common alias for kubectl):
k delete pod/mymulticontainerapp
To inspect a pod, you can use the below commands:

Front End:

kubectl exec mymulticontainerapp -c myfrontendapp -it /bin/sh
ls /usr/share/nginx/html
exit

Back End:

kubectl exec mymulticontainerapp -c mybackendapp -it /bin/sh
ls
exit

The above commands open a shell inside a container so you can interact with it.
Before we jump to the next topic, Deployments, it's crucial that you understand the metadata in a YAML file.
Metadata is data about the containers. You can add labels and key-value pairs, and deployments and services later use those labels as selectors to identify pods.
Let's get all pods that have the label name=mymulticontainerapp:
kubectl get pods --selector=name=mymulticontainerapp
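For that selector to find anything, a pod's manifest must carry the matching label under metadata. A minimal sketch (assuming the pod name used above):

```yaml
metadata:
  name: mymulticontainerapp
  labels:
    name: mymulticontainerapp   # matched by --selector=name=mymulticontainerapp
```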
Deployments:
You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
It should look pretty similar to the previous YAML you saw. The salient point to note here is kind: it's of type Deployment rather than Pod.
Selector: it identifies existing pods that carry matching labels in their metadata. If a pod with such a label is already running, it becomes part of this deployment.
Replicas: it specifies how many pods running this container you wish to create. If you say 2, it will keep 2 pods running the back end.
Say replicas is set to 2 and there are 4 pods already running with the label app: node-backend. Then the deployment terminates 2 of them to meet the requirement of 2 replicas. Conversely, it would scale up by 2 pods if replicas were set to 6.
Spec: it specifies the details of the container: image name, container port, and CPU/memory limits for the container inside the pod.
Template: it specifies the labels to be stamped on the pods newly created via this spec.
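The backend.deploy.yml from the repo isn't shown in this article. Pulling the fields just described together, a minimal sketch could look like this (image name and port assumed from the earlier docker build step; the resource limits are illustrative values):

```yaml
# backend.deploy.yml (hypothetical sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-backend
spec:
  replicas: 2                 # desired number of backend pods
  selector:
    matchLabels:
      app: node-backend       # adopt pods carrying this label
  template:
    metadata:
      labels:
        app: node-backend     # label stamped on pods this Deployment creates
    spec:
      containers:
        - name: mybackendapp
          image: backend:1.0
          ports:
            - containerPort: 3000
          resources:
            limits:
              memory: "128Mi"
              cpu: "250m"
```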
To create a deployment, run the below command:
kubectl apply -f backend.deploy.yml

Output:
deployment.apps/node-backend created
To see all the deployments:
k get deployments

Output:

NAME           READY   UP-TO-DATE   AVAILABLE   AGE
node-backend   2/2     2            2           46s
To access the containers created by the deployment, you can use port-forward:
kubectl port-forward deployment/node-backend 3000:3000
You can access it by going to localhost:3000/random.
Services:
An abstract way to expose an application running on a set of Pods as a network service.
Things to notice here: kind is Service and the type is LoadBalancer.
It creates a load balancer on host port 3000 and proxies requests to the container on port 3000.
What is the LoadBalancer balancing here? If you look closely, we specified a selector. The selector searches for pods with the label app: node-backend, and any request sent to the host on port 3000 will be load-balanced among those pods. Here we have 2 replicas, so requests are load-balanced across these 2 pods.
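The backend.service.yml itself isn't reproduced in the article. A sketch consistent with the description above (a LoadBalancer on port 3000 selecting the backend pods; the label value assumed from the deployment section):

```yaml
# backend.service.yml (hypothetical sketch)
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: LoadBalancer
  selector:
    app: node-backend   # requests are load-balanced across pods with this label
  ports:
    - port: 3000        # port exposed on the host
      targetPort: 3000  # port the container listens on
```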
To create a service run the below command:
kubectl apply -f backend.service.yml
To see all the services run the below command:
k get service
It shows each service's name and type. Here you can see there is a service called backend, of type LoadBalancer, which we just created using the service YAML.
You can do the same steps for the front end:
• kubectl apply -f frontend.deploy.yml
• kubectl apply -f frontend.service.yml
Isn't it easy? I hope this helps you understand the basics of K8s. We have just touched a drop of the vast ocean that is Kubernetes, and I hope this paves the path for your K8s learning.