Zero to Kubernetes

Tiroshan Madushanka
Published in zero-to · 4 min read · May 4, 2018

Nightmare

Managing applications, scaling them as requests grow, and checking application health are a nightmare in the early days, and it only gets worse with production deployments. In a typical application environment, we have to accept downtime while rolling out to production. For a long-running, high-traffic application, there are better options than extended downtime.

A widely used alternative is to manage a separate environment for deployment and switch live traffic to the updated environment once it is ready. However, running two environments with the same resources is costly, and wiring them into a CI/CD pipeline takes a lot of work.

Also, scaling the application according to live traffic, continuously checking its heartbeat, and protecting secrets and configurations become much more challenging as applications grow complex.

Life Saviour — Kubernetes

Kubernetes is an open-source system for automating deployments, scaling, and managing containerized applications.

Keynote of Sam Ghods, Co-founder, Box @ KubeCon

Let's try understanding how Kubernetes can help us with our application development with its features.

1. Manage Deployments

Managing application deployments is a simple task with Kubernetes: we only need to configure our deployment, and Kubernetes takes care of the rest.

Recreate

The Recreate strategy terminates all running instances and then recreates them with the newer version. This suits development environments, where the resulting downtime between termination and recreation is acceptable.

Recreate deployment
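As a minimal sketch (the names and image here are hypothetical), the Recreate strategy is declared directly in the Deployment spec:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical application name
spec:
  replicas: 3
  strategy:
    type: Recreate              # terminate all old pods before starting new ones
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:2.0.0   # the updated version being rolled out
```

Applying a new image with `kubectl apply` then terminates every old pod before any new pod starts, which is where the downtime comes from.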

Ramped

A Ramped deployment updates the application in a rolling-update fashion: a secondary ReplicaSet is created, and the number of new-version instances is increased while the number of old-version instances is decreased. During the rollout, the load balancer routes traffic to both the old and the new version.

Ramped deployment
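A rolling update can be sketched with the `RollingUpdate` strategy (this fragment assumes a hypothetical `my-app` Deployment like the one above; only the strategy-related fields are shown in full):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod above the desired replica count
      maxUnavailable: 0    # never drop below the desired replica count
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:2.0.0
```

With `maxSurge: 1` and `maxUnavailable: 0`, Kubernetes brings pods up one at a time and only removes an old pod once its replacement is ready, so capacity never drops during the rollout.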

Blue/Green

This differs from a Ramped deployment in that the updated version is deployed alongside the current instances but receives no live traffic at first. Once the updated instances pass their health and readiness checks, the Kubernetes load balancer switches all traffic over to them at once.

Blue/green deployment
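One common way to sketch this in plain Kubernetes (names are hypothetical) is a Service whose selector includes a version label, so the cut-over is a single selector change:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: v1        # currently live ("blue") instances
  ports:
    - port: 80
      targetPort: 8080
```

After a `version: v2` ("green") Deployment passes its health and readiness checks, traffic is switched in one step, e.g. `kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","version":"v2"}}}'`, and the old Deployment can be kept around for instant rollback.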

Canary

A Canary deployment consists of routing a subset of users to an updated instance. In Kubernetes, this can be done with two Deployments that share common pod labels. One replica of the new version is released alongside the old version; after some time, if no errors are detected, the number of new-version replicas is scaled up and the old Deployment is deleted.

Canary deployment
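The two-Deployment setup from the text can be sketched like this (all names are hypothetical). Because a Service selecting only the shared `app: my-app` label balances across all matching pods, the replica ratio roughly controls the traffic split:

```yaml
# Stable version: 9 replicas receive ~90% of traffic
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: my-app
      track: stable
  template:
    metadata:
      labels:
        app: my-app        # shared label: matched by the Service
        track: stable
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0
---
# Canary: 1 replica receives ~10% of traffic
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      track: canary
  template:
    metadata:
      labels:
        app: my-app        # shared label: matched by the Service
        track: canary
    spec:
      containers:
        - name: my-app
          image: my-app:2.0.0
```

Promoting the canary is then a matter of scaling `my-app-canary` up and deleting `my-app-stable`.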

A/B Testing

A/B testing routes the requests of selected users to the updated instance. This is more of a business decision-making approach than a deployment strategy, since the selection is based on request attributes rather than a random traffic split. In Kubernetes, we can achieve this by using a service mesh like Istio.

A/B testing. Testing features to a subset of users
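As a sketch of the Istio approach (the header name and subsets are hypothetical, and the subsets would be defined in a matching DestinationRule), an Istio VirtualService can route by request attributes such as an HTTP header:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - my-app
  http:
    - match:
        - headers:
            x-user-group:      # hypothetical header identifying test users
              exact: beta
      route:
        - destination:
            host: my-app
            subset: v2         # selected users see the new version
    - route:
        - destination:
            host: my-app
            subset: v1         # everyone else stays on the old version
```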

2. Horizontal Scaling

Kubernetes makes it simple to scale an application up and down with a single command (or through the UI), and we can also configure it to scale automatically based on CPU usage.
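The single-command form is `kubectl scale deployment my-app --replicas=5` (assuming a hypothetical `my-app` Deployment). CPU-based autoscaling is sketched below with a HorizontalPodAutoscaler; note the `autoscaling/v2` API shown here is the current form, which post-dates this article:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:              # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add pods when average CPU exceeds 80%
```

The same effect is available imperatively with `kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80`.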

3. Service discovery & Load balancing

In Kubernetes, we do not need to worry about service discovery and service mapping. Kubernetes offers several service types: each Pod is allocated its own IP address, and a single DNS name can front a set of Pods. Internal load balancing is handled by kube-proxy, and services can be exposed externally through cloud load balancers, Ingress, NodePort, or external IPs.

Kubernetes service types:

Kube-Proxy — (Internal)

Kube-proxy is Kubernetes' internal load-balancing mechanism: it runs on every node and routes Service traffic to the backing Pods.

NodePort — (External)

The Kubernetes master allocates a port from a flag-configured range (default: 30000–32767), and every node proxies that same port number into our service.
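A minimal sketch of a NodePort Service (names and ports are hypothetical; `nodePort` may also be omitted to let Kubernetes pick one from the range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80           # cluster-internal Service port
      targetPort: 8080   # container port on the backing pods
      nodePort: 30080    # must fall within the 30000-32767 default range
```

The service is then reachable at `<any-node-ip>:30080` from outside the cluster.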

Load Balancer — (External)

Cloud providers support external load balancers: Kubernetes provisions a load balancer for our service (the creation is asynchronous), and traffic from the external LB is directed to the relevant backend Pods. Through this, we can expose our services externally.
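On a supported cloud provider, this is a one-field change to the Service (names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer     # asks the cloud provider to provision an external LB
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

Because provisioning is asynchronous, the external address appears only later, under `status.loadBalancer.ingress` in `kubectl get service my-app`.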

Ingress

Kubernetes Ingress is a collection of rules that allow inbound connections to reach cluster services.
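A sketch of such a rule set follows (the hostname and service are hypothetical, and an Ingress controller such as ingress-nginx must be running in the cluster for the rules to take effect; the `networking.k8s.io/v1` API shown here post-dates this article):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: app.example.com        # hypothetical external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app       # route matching requests to this Service
                port:
                  number: 80
```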

Cloud, Distributed Systems, Data Science, Machine Learning Enthusiast | Tech Lead - Rozie AI Inc. | Research Assistant - NII | Lecturer - University of Kelaniya