Kubernetes — Orchestration Guide [ A Lovely Symphony 6]

Covenant Chukwudi · Published in MyCloudSeries · Apr 30, 2019

Kubernetes Orchestration Guide — Image Credits Jaxcenter

This is part 6 of our Kubernetes Orchestration Guide series. If you came here directly, you might want to start from the first article to get a much better understanding.

In our last article, we did an introduction to Deployments. We explained the limitations of creating Kubernetes Pod objects directly, especially when it comes to making configuration updates, and why creating a Kubernetes Deployment object is a much better choice.

We also discovered an amazing feature of the Kubernetes Deployment object: its ability to automatically spin up a new Pod to match your replica specification if a Pod crashes or runs into any issues. Pretty sweet 😊.

In this article, we will continue our K8s (short for Kubernetes) journey and learn a whole lot more.

Scaling up our Deployment

In a production environment, we might experience a surge in traffic, which typically happens as a business grows. Kubernetes makes it very easy for us to scale our deployment to cater to such business needs.

Scaling our deployment is as easy as defining the number of “replicas” we want in our deployment config file.

Using our ongoing ourapp project as an example, we can easily scale our deployment to three pods by adjusting the value of the replicas attribute in our ourapp-deployment.yaml config file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ourapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
        - name: ourapp
          image: covenant/dockerize-nodejs-image
          ports:
            - containerPort: 3000

Next, we execute the kubectl apply -f command to apply the new configuration:

$ kubectl apply -f ourapp-deployment.yaml

Next, we execute kubectl get deployments to see the current status of our deployment:

$ kubectl get deployments ourapp-deployment

You should see that the DESIRED column in the returned result now reads 3:

NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ourapp-deployment     3         3         3            2           1d

Yours might take some time before the CURRENT, UP-TO-DATE, and AVAILABLE columns also read 3, as Kubernetes has interpreted our config file and is currently creating two extra pods to bring our total pod count to 3.

You might notice that my own AVAILABLE column still reads 2, which means the third pod is still starting up.
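
If you would rather not keep polling, kubectl can also wait for the rollout to finish for you. A minimal example, using the same deployment name as in our config file:

$ kubectl rollout status deployment/ourapp-deployment
# blocks until all 3 replicas are updated and available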

Try executing kubectl get pods:

$ kubectl get pods

You should see the newly created pods that came up as a result of your scale operation:

hello-minikube-5857d96c67-s8mdg       1/1       Running   6          85d
ourapp-deployment-588684d477-67xvc    1/1       Running   0          5m
ourapp-deployment-588684d477-fmz2h    1/1       Running   1          1d
ourapp-deployment-588684d477-nbxvd    1/1       Running   0          5m
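
As a side note, the same scale-up can also be done imperatively, without editing the config file at all, using kubectl scale. The declarative kubectl apply approach above is still the one we recommend, since the config file remains the source of truth:

$ kubectl scale deployment/ourapp-deployment --replicas=3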

Scaling down our Deployment

Let’s try scaling down, now that we’ve seen how we can scale up our deployment.

Scaling down to one pod is pretty much the same process. First, we modify the replicas field in our ourapp-deployment.yaml config file to read 1:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ourapp-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
        - name: ourapp
          image: covenant/dockerize-nodejs-image
          ports:
            - containerPort: 3000

Next, we apply our configuration changes:

$ kubectl apply -f ourapp-deployment.yaml

You should see a message similar to the one below telling you that your changes were applied successfully:

deployment.apps "ourapp-deployment" configured

If you are quick enough and execute:

$ kubectl get pods

You should see two of your ourapp-deployment-<randomfigure> pods in a Terminating state:

hello-minikube-5857d96c67-s8mdg       1/1       Running       6          85d
ourapp-deployment-588684d477-67xvc    1/1       Terminating   0          14m
ourapp-deployment-588684d477-fmz2h    1/1       Running       1          1d
ourapp-deployment-588684d477-nbxvd    1/1       Terminating   0          14m
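
If you don't feel like racing the terminations, kubectl can stream pod status changes to your terminal as they happen:

$ kubectl get pods --watch
# prints a new line for each pod status change; press Ctrl+C to stop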

You can also execute kubectl get deployments to see your deployment status:

$ kubectl get deployments ourapp-deployment

You should see something similar to this:

NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ourapp-deployment     1         1         1            1           1d

A Tricky Problem

Feel free to skip this section if you want, though it can be really helpful in building stable deployments.

This section assumes you followed along from our Docker Series. If you didn't, skip ahead to the "It didn't work as expected" section.

Let's try something out. We'll start by modifying views/index.js in our code-base:

<html>
  <head>
    <title>Docker Nodejs Demo</title>
    <link rel="stylesheet" href="css/index.css">
    <link href="https://fonts.googleapis.com/css?family=Raleway:200" rel="stylesheet">
  </head>
  <body>
    <span class="main-text">Your NodeJs Version 2 App is Up And Running</span>
  </body>
</html>

Next, we build our updated code-base into a Docker image by running the following command (if you followed along from the Docker Series, remember to replace "covenant" with your Docker Hub username):

docker image build -t covenant/dockerize-nodejs-image . -f Dockerfile

Next, we push our updated image to Docker Hub:

docker push covenant/dockerize-nodejs-image

It didn’t work as expected

Now that we've done this, re-execute kubectl apply -f on your deployment config file:

$ kubectl apply -f ourapp-deployment.yaml

You will notice something strange happens; we see this response:

deployment.apps "ourapp-deployment" unchanged

At first glance this response looks really weird, yet it is accurate. We know that we have updated our image on Docker Hub, but herein lies the issue: Kubernetes will not "apply" a configuration file if nothing in the file has changed.

We can also confirm that our new image updates haven't been applied by visiting our minikube IP. We see that the text still reads:

Your NodeJs App is Up And Running
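
We can also confirm from the cluster side that the Deployment is still pointing at exactly the same image reference as before, which is why Kubernetes sees nothing to change. One way to check is to read the image field out of the live Deployment spec:

$ kubectl get deployment ourapp-deployment -o jsonpath='{.spec.template.spec.containers[0].image}'
# prints: covenant/dockerize-nodejs-image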

This is a well-known issue in Kubernetes, and there are in fact a ton of conversations about it on GitHub.

How do we solve it?

One of the few ways currently available to solve this particular issue is to use Docker image tags and imperatively force-update the Kubernetes deployment.

First, we build our local image with a tag. Any unique tag works; we will use a version number in our case:

docker image build -t covenant/dockerize-nodejs-image:v1 . -f Dockerfile

Next, we push our tagged image to our Docker Repository:

docker push covenant/dockerize-nodejs-image:v1

Now that we have a tagged docker image, we run this command to force an image update:

kubectl set image deployment/ourapp-deployment ourapp="covenant/dockerize-nodejs-image:v1"

What the above command does is highlighted in the following steps:

  1. It finds a deployment in the current Kubernetes cluster with the name “ourapp-deployment”.
  2. Next, it looks for a container named “ourapp” in the Pod template.
  3. Next, it updates (sets) that container's image attribute to the specified image name: covenant/dockerize-nodejs-image:v1
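
If you want to confirm the update went through, you can watch the rollout and then re-read the image field from the Deployment spec:

$ kubectl rollout status deployment/ourapp-deployment
$ kubectl get deployment ourapp-deployment -o jsonpath='{.spec.template.spec.containers[0].image}'
# should now print: covenant/dockerize-nodejs-image:v1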

Now that we have done this, Kubernetes recreates the pods with the updated image. Accessing our app through the minikube IP now gives the updated content:

Your NodeJs Version 2 App is Up And Running
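
A more declarative variant of the same workaround, in case you prefer keeping the config file as the single source of truth, is to put the tagged image name directly in ourapp-deployment.yaml and re-apply it. Because the image field in the spec has now changed, kubectl apply will pick it up:

      containers:
        - name: ourapp
          image: covenant/dockerize-nodejs-image:v1
          ports:
            - containerPort: 3000

$ kubectl apply -f ourapp-deployment.yaml

Either way, the key point is the same: the image reference in the Deployment spec has to change for Kubernetes to roll out new pods.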

Sticky Notes

  1. Scaling a Kubernetes deployment is as easy as specifying the number of pods you desire in the replicas field of your deployment config file.
  2. Scaling down works the same way: set the desired amount in your replicas field, and Kubernetes selectively terminates pods until it reaches your desired count.
  3. Triggering an update on Kubernetes when an image with exactly the same name is updated on Docker Hub is currently a dicey problem, which can be worked around by using Docker tags.

MyCloudSeries is a training and consulting firm with expertise in Cloud Computing and DevOps. We assist organizations with their DevOps strategy, transformation, and implementation. We also provide Cloud Computing support; contact us at www.mycloudseries.com.
