Blue Green Deployment using Kubernetes

fazley kholil
6 min read · Nov 18, 2018


What is Kubernetes?

Kubernetes is a platform for managing application containers across multiple hosts. It provides many management features for container-oriented applications, such as auto-scaling, rolling deployments, compute resource management, and volume management. Like containers themselves, it is designed to run anywhere, so we are able to run it on bare metal, in our own data center, on a public cloud, or even a hybrid cloud.

Why use Kubernetes?

In the world of microservices, the first problem you will face is a growing number of components and services in your application that you have to manage so that they can work together.

As the number of microservices grows, so does the number of running instances, since each microservice can itself have multiple instances.

So how are you going to deal with this? Maybe SSH into all the machines manually to perform the deployment? If one of the deployments goes wrong, you will need to find the machine's IP, SSH back into that machine, and redeploy the packages. This could take you a whole day just to perform a single deployment.

Therefore, we need a layer that orchestrates those containers for us. And when it comes to container orchestration, there is no denying that Kubernetes is the right tool for the job.

Kubernetes essentials

The atomic unit of deployment in the VMware world is a VM; in Docker we have containers; in Kubernetes, however, we have what we call a Pod.

A Pod is the smallest deployable unit in Kubernetes. It can contain one or more containers, though most of the time we need just one container per pod.

A Pod exists on only one node, and it wraps shared namespaces, a network stack, and much more for its containers.

We usually deploy pods via a Deployment.

Let's try it out

Prerequisites

You must have a running cluster of Kubernetes. Refer to this tutorial to learn how to build a Kubernetes cluster.

With the cluster up and running, I need to deploy an application, for example a PaymentServiceProvider app which processes payments through different service providers.

First of all, I will need to Dockerize that application and push its image to a Docker registry so that Kubernetes is able to pull it. This image is available on Docker Hub.

We then create a manifest file and name it demo.yml for the sake of this demo.
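The manifest itself is not shown in the text, so here is a minimal sketch of what demo.yml could look like. The Deployment name, image name, labels, and container port are all assumptions for this demo:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-blue            # hypothetical name for the blue Deployment
spec:
  replicas: 3                # run 3 instances of the blue version
  selector:
    matchLabels:
      app: prod
      version: blue
  template:
    metadata:
      labels:
        app: prod            # assumed app label
        version: blue        # this label marks the "blue" version
    spec:
      containers:
        - name: payment
          image: example/paymentserviceprovider:blue   # hypothetical image
          ports:
            - containerPort: 8080                      # assumed app port
```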

As you can see, we have specified 3 replicas.

The Deployment uses a ReplicaSet to ensure that the desired number of replicas is always running.

To perform the deployment, just do a kubectl create:

kubectl create -f demo.yml

How does this work?

To deploy an application with Kubernetes, I just send a manifest file in YAML format, specifying the container image and how many instances I want, to the Kubernetes master node via kubectl; the scheduler then orchestrates the deployment process.

The key here is that Kubernetes offers a control plane. Through an API call, I can declare the desired state, such as 3 replicas, and Kubernetes will always make sure that I have 3 instances of the application running on my cluster.

Deploy a newer version

Create a new file and name it demo-blue-green.yml or edit the previous manifest file itself.
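The green manifest is not reproduced in the text either; assuming the same shape as demo.yml, it would differ only in the Deployment name, the version label, and the image tag (all hypothetical names):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-green           # hypothetical name for the green Deployment
spec:
  replicas: 3                # 3 instances of the green version as well
  selector:
    matchLabels:
      app: prod
      version: green
  template:
    metadata:
      labels:
        app: prod
        version: green       # only the version label changes to green
    spec:
      containers:
        - name: payment
          image: example/paymentserviceprovider:green  # hypothetical image
          ports:
            - containerPort: 8080
```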

To perform the deployment, just do a kubectl create again.

kubectl create -f demo-blue-green.yml

This will create 3 additional pods labelled with version green, as specified in the manifest file.

Now we have 6 pods running on the cluster. How can I access them?

Using Services to access the application

Pods are great, and whenever a pod is created, it gets a new IP.

However, we cannot rely on those IPs: when a pod crashes, for example, and another one is created, the new pod is assigned a different IP. We simply cannot rely on those backend IPs.

Instead, Kubernetes provides another resource called a Service.

A Service provides a stable IP and DNS name, and load balances requests to the backends. If some pods die, it updates itself with the details of the new pods; if we scale up, the new pods' IPs get added to the Service's endpoints via a service discovery mechanism.

Creating a service that points to the blue version

Create a YAML file and name it demo-service.yml, then do a kubectl create to create the Service object.
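A minimal sketch of demo-service.yml, assuming the hypothetical labels and port used in this demo; the selector targets the blue pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: payment-service      # hypothetical Service name
spec:
  selector:
    app: prod
    version: blue            # route traffic only to blue pods
  ports:
    - port: 80               # port exposed by the Service (assumed)
      targetPort: 8080       # container port of the pods (assumed)
```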

kubectl create -f demo-service.yml

The most important thing here is the label selector. A Service load balances requests to the pods whose labels match its selector; in our case, the labels are version blue and app prod.

Updating the service to point to the green version

We just need to change the version label in the selector to green, then do a kubectl apply, which will update the existing Service in place.
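The updated selector in demo-service.yml would then read (assuming the same hypothetical labels):

```yaml
spec:
  selector:
    app: prod
    version: green           # switch traffic from blue to green pods
```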

kubectl apply -f demo-service.yml

The payment service will now start to load balance its requests to the green version, as we have just changed the selector from blue to green.

Now you can see how powerful labels are in the world of Kubernetes.

You can do all sorts of things like A/B testing, blue-green deployments, and canary deployments.

Performing a canary deployment with 50% of traffic on each version

To perform a canary deployment in Kubernetes, we just need to play with the label selector and the number of replicas each version has.

For example, to shift 50% of the traffic to each version of our application, update your Service YAML file as below:
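A sketch of the change, assuming the same hypothetical labels as before:

```yaml
spec:
  selector:
    app: prod                # no version label: matches blue AND green pods
```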

What have we done?

We have just removed the version label from the selector in the YAML file. That way, the Service will load balance requests across both the blue and green versions.

I want to send 1% of traffic to the green version!

Ideally, to send 1% of the traffic to the green version, you will need to play with the number of replicas each version has. In the example above, we have 3 replicas on blue and 3 on green, so roughly 50% of the traffic goes to each version.

But for 1% traffic shifting, we would need something like 99 replicas on blue and only 1 replica on green; only then can we say that 1% of the traffic is going to the green version.
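In manifest terms, this ratio is just the replicas field of each Deployment (hypothetical Deployment names):

```yaml
# demo.yml — blue Deployment
spec:
  replicas: 99               # ~99% of pods behind the Service
---
# demo-blue-green.yml — green Deployment
spec:
  replicas: 1                # ~1% of pods behind the Service
```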

This is because in Kubernetes we cannot assign weights directly on the Service object.

Conclusion

Kubernetes is a great container orchestrator. You can do lots of things with it, like blue-green deployments, rolling updates, etc.

However, when you want to perform fine-grained traffic shifting, such as a 1% canary deployment, it becomes a bit difficult with Kubernetes alone.

That is why a service mesh can help.

Service meshes add visibility, control, and reliability to your application with a wide array of powerful techniques: circuit breaking, latency-aware load balancing, eventually consistent ("advisory") service discovery, deadline propagation, and tracing and instrumentation.
