Kubernetes for Scalability in Microservices

Deploying, Scaling, and Updating Microservices

AYSHI BHOWMIK
IoT Lab KIIT
11 min read · Jul 9, 2021


This article is a brief dive into the world of Kubernetes and microservices. Here we describe how modern, always-on applications use the microservices design pattern, and more. We hope this article will be helpful to you. Without further ado, let’s get started.

What are Microservices ?

Microservices are just an architectural approach to designing applications in a way that’s modular, easy to deploy, and scale independently.

Benefits of microservices design pattern :

  • The pattern applies to any application, even a traditional web application.
  • Microservices benefit the most from rapid deployments and continuous delivery.
  • Microservices also push most of today’s automation tools and infrastructure to their limits.

Microservices are one of the reasons we need advanced tools, like Kubernetes.

Build And Interact With Monolith :

Throughout this article, we use an example application called app. It is made up of three services: monolith, hello, and auth. The monolith service combines the authentication and hello functionality in a single binary. Build and run the monolith application to see how it works.

Once the monolith application is built, we can test it with a curl command : $ curl http://127.0.0.1:10080 The response comes back as : {"message" : "Hello"}

At this point, if we try to hit the secure endpoint, the request fails with an authorization error, because we first need to get a JWT from the login endpoint.

Even a monolithic application can leverage many of the principles found in twelve-factor applications.

Twelve-Factor Apps :

The twelve-factor app is one of the most important concepts for designing scalable applications. It is a set of best practices for building deployable software-as-a-service apps, and for designing modern applications that fit three really important criteria: portability, deployability, and scalability.

Three Principles are :

  • Portable: Twelve-factor applications are designed to be portable as they focus on eliminating elements that vary between execution environments like dependencies and configuration.
  • Deployable: Twelve-factor applications are designed to be deployed on cloud platforms like GCP and AWS, and by keeping development and production environments as uniform as possible, we can start continuously deploying our applications.
  • Scalable: Twelve-factor applications are designed to scale up without significant changes to tooling, architecture, or development practices.

If we follow these principles when building an application, it will be able to scale to meet user demand while using mostly the same tooling and practices we have employed from the start.
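As a quick sketch of the config principle in particular: a twelve-factor service reads anything that varies between environments from the environment itself, rather than hard-coding it. The variable names below are made up for this illustration:

```shell
#!/bin/sh
# Twelve-factor config: settings live in the environment, not in the code.
# APP_GREETING and APP_PORT are hypothetical names for this sketch.
APP_GREETING="${APP_GREETING:-Hello}"   # falls back to a default if unset
APP_PORT="${APP_PORT:-10080}"
echo "$APP_GREETING service listening on port $APP_PORT"
```

The same binary can then move between development and production unchanged; only the environment differs.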

The Twelve Factors are :

  • Codebase : one codebase tracked in version control, many deploys.
  • Dependencies : explicitly declare and isolate dependencies.
  • Config : store configuration in the environment.
  • Backing services : treat backing services as attached resources.
  • Build, release, run : strictly separate the build and run stages.
  • Processes : execute the app as one or more stateless processes.
  • Port binding : export services via port binding.
  • Concurrency : scale out via the process model.
  • Disposability : maximize robustness with fast startup and graceful shutdown.
  • Dev/prod parity : keep development, staging, and production as similar as possible.
  • Logs : treat logs as event streams.
  • Admin processes : run admin and management tasks as one-off processes.

Refactor To MSA (Microservices Architecture) :

Now let’s break the monolith application down into microservices. First, create the auth and hello services.

In a new shell tab, do the same for the auth service.

Now both the hello and auth services are running, and we can hit them with curl. But we also have multiple binaries to manage for our application, our deployment has become twice as complex, and clients need to know how to talk to two separate services. This additional complexity only grows as the number of services in an application increases. This problem is exactly what is driving the adoption of application containers and management platforms, like Kubernetes, to coordinate them.

JSON Web Tokens (JWT) :

JWT is a useful standard because it sends information that can be verified and trusted thanks to a digital signature. Additionally, JWTs are a compact means of storing data that is easy to encode and decode in most languages. In short, a JWT is a compact, self-contained method for securely transferring data as a JSON object.

Uses: JWTs are ideal for authentication and information exchange because of their size and the fact that they can be signed. Here JSON web tokens are used to authenticate our microservices.

How does it work :

  • The client calls the login endpoint with a username and password
  • The server creates a JWT token
  • The server returns JWT to the client
  • The client sends a copy of the JWT when making a request
  • Server checks JWT signature
  • The server sends a response to the client
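To make the flow above concrete: a JWT is just three base64url-encoded segments (header, payload, signature) joined by dots, so its claims are easy to inspect with standard tools. The token below is a hand-crafted sample for illustration, not one issued by the monolith’s login endpoint, and its signature segment is a placeholder:

```shell
#!/bin/sh
# A sample JWT: header.payload.signature (the signature is a dummy value here).
token='eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJ1c2VyIn0.c2ln'
# Take the payload (the second dot-separated segment)...
payload=$(printf '%s' "$token" | cut -d. -f2)
# ...restore the base64 padding that JWTs strip, and decode it.
claims=$(printf '%s=' "$payload" | base64 -d)
echo "$claims"   # prints {"sub":"user"}
```

Note that decoding only reads the claims; verifying the signature (the server-side check in the flow above) is what makes them trustworthy.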

Kubernetes :

Kubernetes provides a new set of abstractions that go well beyond the basics of container deployment and let you focus on the big picture. Previously our focus was deploying applications to individual machines, which locks you into limited workflows. Kubernetes abstracts away the individual machines and treats the entire cluster like a single logical machine.

The easiest way to get started with Kubernetes is the kubectl run command, which launches a single instance of the Nginx container.

In Kubernetes, all containers run in what’s called a pod. Use the kubectl get pods command to view the running Nginx container. Once the Nginx container is running, we can expose it outside of Kubernetes using the command :

$ kubectl expose deployments nginx --port 80 --type LoadBalancer

Behind the scenes, Kubernetes created an external load balancer with a public IP address attached to it. Any client who hits that public IP address will be routed to the pods behind the service. In this case that would be the Nginx pod.

Pods :

Pods represent a logical application: they hold a collection of one or more containers. Generally, if you have multiple containers with a hard dependency on each other, they are packaged together inside a single pod.

Pods also have volumes. Volumes are just data disks that live as long as the pod lives and can be used by any of the containers in that pod. This is possible because pods provide a shared namespace for their contents: two containers inside a single pod can communicate with each other, and they also share any attached volumes. Pods share a network namespace as well, which means there is one IP address per pod.
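As a sketch of this sharing (the names and images below are illustrative, not from the app example), two containers in one pod can mount the same volume and reach each other over localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod            # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25       # illustrative image
      volumeMounts:
        - name: shared-data
          mountPath: /data    # both containers see the same files here
    - name: sidecar
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    - name: shared-data
      emptyDir: {}            # lives exactly as long as the pod lives
```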

Creating Pods :

Pods can be created using pod configuration files. Running $ cat pods/monolith.yaml shows that our pod is made up of one container, the monolith, and that a few arguments are passed to the container when it starts up. Lastly, it opens port 80 for HTTP traffic and port 81 for health checks.
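Based on that description, pods/monolith.yaml would look roughly like this; the image name and argument syntax are assumptions for the sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: monolith
  labels:
    app: monolith
spec:
  containers:
    - name: monolith
      image: kelseyhightower/monolith:1.0.0  # assumed image for this sketch
      args:
        - "-http=0.0.0.0:80"                 # assumed startup arguments
        - "-health=0.0.0.0:81"
      ports:
        - name: http
          containerPort: 80                  # HTTP traffic
        - name: health
          containerPort: 81                  # health checks
```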

By running the kubectl describe command you will see a lot of information about the monolith pod, including the pod IP address and the event log. This information comes in very useful when troubleshooting. Kubernetes makes it easy to create pods by describing them in configuration files, and to view information about them while they are running.

Interacting With Pods :

To map a local port to a port inside the monolith pod, use : $ kubectl port-forward monolith 10080:80 Use two terminals: one to run the kubectl port-forward command and the other to issue curl commands. To view the logs for the monolith pod, use $ kubectl logs monolith, and in another terminal add the -f flag, $ kubectl logs -f monolith, to get a stream of logs in real time. We can use $ kubectl exec monolith --stdin --tty -c monolith /bin/sh to run an interactive shell inside the monolith pod. This comes in handy when you want to troubleshoot from within the container.

For example, once we have a shell into the monolith container, we can test external connectivity using the ping command : ping -c 3 google.com When you are done with the interactive shell, be sure to log out. Interacting with pods is as easy as using the kubectl command, whether you are trying to hit your containers remotely or trying to get a login shell for troubleshooting. Kubernetes provides everything you need to get up and running.

Monitoring And Health Checks :

Kubernetes has built-in support for making sure your application is running correctly, via user-implemented readiness and liveness checks. Readiness probes indicate when a pod is ready to serve traffic; if a readiness check fails, the container is marked as not ready and is removed from any load balancers. Liveness probes indicate that a container is alive; if a liveness probe fails multiple times, the container is restarted.

To see how the probes for the healthy-monolith pod are configured : $ cat pods/healthy-monolith.yaml

To see how often the readiness probe is checked for the healthy-monolith pod : $ kubectl describe pods healthy-monolith | grep Readiness

To see how often the liveness probe is checked : $ kubectl describe pods healthy-monolith | grep Liveness
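The probe section of pods/healthy-monolith.yaml is roughly of this shape; the paths and timings below are assumptions, following the monolith’s convention of serving health checks on port 81:

```yaml
# Container-level probe configuration (a sketch, not the exact file)
readinessProbe:
  httpGet:
    path: /readiness       # assumed endpoint
    port: 81
  initialDelaySeconds: 5   # wait before the first check
  periodSeconds: 10        # how often the readiness probe runs
  timeoutSeconds: 1
livenessProbe:
  httpGet:
    path: /healthz         # assumed endpoint
    port: 81
  initialDelaySeconds: 5
  periodSeconds: 10        # repeated failures restart the container
  timeoutSeconds: 1
```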

Secrets And Config maps :

Config maps and secrets are similar, except that config maps are for non-sensitive data. Both can be exposed to pods as environment variables or files, and downstream pods can be restarted to pick up configuration changes if necessary.

Secrets are easy to create from files with the kubectl create secret command : $ kubectl create secret generic tls-certs --from-file=tls/ Then we can attach that secret to a pod: when the pod is created, the secret is mounted onto it as a volume. This way Kubernetes can make sure our configs are in place before the container starts.

Creating Secrets :

Here we will create a new pod named secure-monolith. It secures access to the monolith container using Nginx, which serves as a reverse proxy terminating HTTPS. The Nginx container is deployed in the same pod as the monolith container because the two are tightly coupled.

The cert.pem and key.pem files will be used to secure traffic on the monolith server, and ca.pem will be used by HTTPS clients as a CA to trust. Next, we use kubectl to create the tls-certs secret from the TLS certificates stored in the tls directory; kubectl creates a key for each file in that directory under the tls-certs secret. Use the describe command to verify this. The secure-monolith pod also requires an Nginx config to handle the HTTPS reverse proxying.

Next, we create a config map entry for the proxy.conf Nginx configuration file using the kubectl create configmap command. Use the describe command to get more details about the nginx-proxy-conf config map entry. At this point, we are ready to attach the Nginx configuration file and the TLS certificates to the secure-monolith pod.
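Putting the pieces together, the secure-monolith pod mounts the tls-certs secret and the nginx-proxy-conf config map as volumes, roughly like this (a sketch; the image tags and mount paths are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-monolith
spec:
  containers:
    - name: nginx
      image: nginx:1.25                      # reverse proxy terminating HTTPS
      ports:
        - containerPort: 443
      volumeMounts:
        - name: tls-certs
          mountPath: /etc/tls                # cert.pem / key.pem from the secret
        - name: nginx-proxy-conf
          mountPath: /etc/nginx/conf.d       # proxy.conf from the config map
    - name: monolith
      image: kelseyhightower/monolith:1.0.0  # assumed image
  volumes:
    - name: tls-certs
      secret:
        secretName: tls-certs
    - name: nginx-proxy-conf
      configMap:
        name: nginx-proxy-conf
        items:
          - key: proxy.conf
            path: proxy.conf
```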

Services :

A service is mainly a persistent endpoint for pods. The pods a service exposes are selected by a set of labels: if pods have the correct labels, they are automatically picked up and exposed by the service. The level of access a service provides to a set of pods depends on the service’s type. Currently there are three types: ClusterIP, which is internal only; NodePort, which gives each node an externally accessible port; and LoadBalancer, which adds a load balancer from the cloud provider.
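A minimal service manifest showing this label-based selection (the label and port values are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: monolith
spec:
  selector:
    app: monolith      # pods carrying this label are picked up automatically
  type: NodePort       # one of ClusterIP, NodePort, LoadBalancer
  ports:
    - port: 443
      targetPort: 443
      nodePort: 31000  # externally accessible port on every node
```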

Deployments :

Deployments are a declarative way to say what goes where: they drive the current state towards the desired state. Deployments use a Kubernetes concept called replica sets to ensure that the current number of pods equals the desired number.
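A deployment declares that desired state, including how many replicas to run; Kubernetes creates a replica set behind it to converge on that number. The names and image below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3                  # desired state: scaling is just editing this field
  selector:
    matchLabels:
      app: hello
  template:                    # pod template the replica set stamps out
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: kelseyhightower/hello:1.0.0  # assumed image
          ports:
            - containerPort: 80
```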

Create deployments :

Now we are ready to create deployments, one for each service: frontend, auth, and hello. Then we will define internal services for the auth and hello deployments, and an external service for the frontend deployment.

Now we can interact with the frontend by grabbing its external IP address and using curl to hit it. The multi-service application has been deployed using Kubernetes.

Scaling :

Scaling is done by updating the replicas field in our deployment manifest (or with $ kubectl scale deployment hello --replicas=3). Deployments create a replica set to handle pod creation, deletion, and updates; the deployment owns and manages that replica set for us.

At this point we have multiple copies of our hello service running in Kubernetes, and a single frontend service proxying traffic to all three pods. This allows us to share the load and scale our containers in Kubernetes.

Conclusion :

The primary strength of Kubernetes is its modularity and generality. Nearly every kind of application that you might want to deploy you can fit within Kubernetes, and no matter what kind of adjustments or tuning you need to make to your system, they’re generally possible.

THANK YOU FOR YOUR PATIENCE IN READING THE ARTICLE.

NEXT TIME WILL COME UP WITH MORE EXCITING ARTICLES.

You can connect with us on Instagram (Ayshi), Instagram (Arnab) and LinkedIn (Ayshi), LinkedIn (Arnab) if you need more help. We would be more than happy.

Good Luck 😎 and happy learning 👨‍💻

Written By : Ayshi Bhowmik & Arnab Dan
