DevSecOps for Microservices with Resilience

Ashish Jadhav
Engineering Jio
Dec 1, 2019 · 13 min read

Information technology teams are under continuous pressure to deliver business and customer value through faster, more frequent releases.
Next-generation architectures built on microservices with DevOps are designed to offer greater agility and operational efficiency for enterprises.

Understanding Microservices:

What?

  • Microservices is a software development architecture based on domain-driven design.
  • Microservice architecture is about breaking a product or application into independent services, so that each service can be deployed and managed on its own without impacting or depending on other services.
  • Each microservice is developed end-to-end and encapsulated in a small service that runs in its own process and communicates with other services using RESTful APIs or an asynchronous, queue-based mechanism. The application layer, business logic, database, and infrastructure code are strongly encapsulated to form a single deployable unit.

Why?

  • Microservices are designed around a specific business outcome.
  • They are independently deployable, independently scalable, and strongly encapsulated.
  • They can be developed independently by different teams.
  • They can be developed using different programming languages.
  • Microservices own their own data.
  • Instead of orchestrating a large release to offer new features, as in a monolith, a new business capability can be added simply by adding a new microservice.

Business Benefits:

  • They allow a company to keep innovating and stay agile as products grow larger.
  • They let a company stay ahead of competitors by quickly releasing a feature through the independent deployment of a service.
  • You can quickly remove a feature that adds no value by rolling back the independent service that provides it.

Application based on Microservices

So let us take the Good Cinema example, which I found on the Internet, demonstrating the use of microservices for a movie theater. The cinema architecture above shows that we can break each service down into an independently deployable module. Each module can be written in a different language (e.g. Java, .NET, PHP, etc.), and each has its own maintenance, monitoring, application servers, and database. So with microservices there is no centralized database; each module has its own database.

Why DevOps for Microservices?

The objective of DevOps is faster, more frequent releases.
DevOps is a methodology that enables developers, information security, and IT operations to work closer together so they can deliver better-quality, more secure software faster. DevOps gives developers visibility into IT operations and vice versa, which reduces the traditional friction between the two roles and increases collaboration.

  • If you adopt microservices without starting with DevOps, teams can become less independent than they were before. Productivity can grind to a halt because development teams can't deploy their code, and quality assurance teams can become bottlenecks because of all the untested code hitting their backlogs at once.
    Defect counts soar because issues are not reproducible. Without automated testing, developers have to run many services at once while developing, which is highly cumbersome, especially as new services and new team members are added, and developers constantly interrupt each other to help troubleshoot and run their services.
  • The solution is divided into several independent lines of business (LOBs), and each LOB has a set of microservices and a UI.
  • Each microservice has its own life and release cycle.
  • Different (even competing) teams work on the project.
  • A microservice from one LOB has a runtime dependency on a microservice from another LOB.
  • Microservices depend on the ecosystem (the set of commonly used services: logging, monitoring, etc.).
  • The situation becomes more complicated when each microservice has its own deploy and runtime configuration per environment; for example, there is no need to install multiple instances of a microservice in a test environment, but it is required in production.
  • The deploy/rollback procedure for the entire solution should be as simple as possible and avoid the complexity of dealing with different microservices that all have different versions.

Cloud Native for Microservices

Cloud native is a term used to describe container-based environments. Cloud-native technologies are used to develop applications built with services packaged in containers, deployed as Microservices and managed on elastic infrastructure through agile DevOps processes and continuous delivery workflows.

Containers for Microservices

Containers are isolated workload environments in a virtualized operating system. A container consists of an application and all the dependencies and libraries needed to run it, packaged together in an executable format. Because containers are independent environments, each runs in its own process and is not tied to software on a physical machine, which makes applications portable. Containers also speed up workload processes and application delivery because they can be spun up quickly.

Container-based Microservices applications in production environments can better respond to erratic workloads. Shorter container initiation times can help increase user satisfaction and improve the financial performance of revenue-generating applications.

In the image above, you can see that you can spin up the container-based hello-world application with a single CLI command.
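
For instance, a minimal sketch, assuming Docker is installed locally:

# Pull the hello-world image from Docker Hub (if not already cached) and run it
docker run hello-world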

DevSecOps Pipeline for Microservices


The DevOps toolchain comprises:

  • Code — code development and review, source code management tools, code merging
  • Build — continuous integration tools, dependency repository, build status
  • Test — continuous testing tools (JUnit, SonarQube, Fortify, etc.) that provide feedback on business risks
  • Package — artifact repository, Docker images, application pre-deployment staging, image repository
  • Release — change management, release approvals, release automation
  • Configure — infrastructure configuration and management, Infrastructure as Code tools like Ansible, Terraform, etc.
  • Monitor — application performance monitoring, end-user experience

Microservices DevOps Orchestration

Microservices DevOps Approach

Tools can change; focus instead on DevOps pipeline capabilities:

  • Our goal should be to build a solution for each component in the pipeline, knowing a better solution may replace it later. When that happens, it should be a simple swap of that component without impacting the whole pipeline.
  • We should focus on the capabilities of the microservices DevOps pipeline components and develop an operationally simple solution for each component.
  • Decouple the app layer from the infrastructure.
  • Use a multi-module project for dependent microservices.

As shown in the diagram above:

  • The stack (GitHub for SCM, Maven for builds, Jenkins for coordination, Nexus as the artifact repository, and a Docker Registry as the image repository) can be standard enterprise DevOps tooling.
  • Docker packages microservices so they can run as software containers on any machine.
  • Using Jenkins, perform automated integration tests on the containerized services.
  • Docker Swarm serves as the container scheduler and orchestration solution.
    Netflix Eureka is used for service discovery, Netflix Zuul as the API gateway, Ribbon for client-side load balancing, Feign for service communication, and Hystrix for resilience/circuit breaking.

CI/CD Pipeline Workflow with Kubernetes

The goal is to automate the following process:

CI/CD pipeline workflow with Kubernetes
  • Checkout code
  • Compile code
  • Run test cases
  • Build Docker images
  • Push images to the Docker registry
  • Pull new images from the registry
  • Deploy the app on Kubernetes
  • Kubernetes' zero-downtime deployment
    As part of a rolling update, Kubernetes spins up new pods running your application while the old ones are still running. Once the new pods are healthy, Kubernetes gets rid of the old ones, as sketched below.
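
A minimal sketch of the relevant strategy block inside a Deployment spec (the surge/unavailability values are illustrative assumptions, not fixed requirements):

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1        # spin up at most one extra pod above the desired replica count
    maxUnavailable: 0  # remove an old pod only after its replacement reports healthy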

Kubernetes Deployments

To set up an application on a Kubernetes cluster, we will do the following:

  • Create a namespace.
  • Create a deployment YAML and deploy it.
  • Create a service YAML and deploy it.
  • Access the application outside the cluster on a NodePort, or
  • Create an Ingress YAML for the service name and deploy it to access the services outside the cluster (a minimal command sequence is sketched below).
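
A minimal sketch of that sequence with kubectl, assuming a namespace called demo and manifest files named deployment.yaml, service.yaml, and ingress.yaml:

# Create a dedicated namespace for the application
kubectl create namespace demo

# Deploy the application and expose it inside the cluster
kubectl apply -f deployment.yaml -n demo
kubectl apply -f service.yaml -n demo

# Publish it outside the cluster through an Ingress (or use a NodePort service)
kubectl apply -f ingress.yaml -n demo

# Verify that the pods, service, and ingress are up
kubectl get pods,svc,ingress -n demo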

Deployments manage the deployment of replica sets and are also capable of rolling back to a previous version.
A controller in the Kubernetes master, called the deployment controller, makes this happen; it even has the capability to change a deployment midway.
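
For example, inspecting and rolling back a rollout are one-liners; a sketch, assuming a Deployment named nginx-1.0 as in the manifests later in this article:

# Watch the progress of an ongoing rollout
kubectl rollout status deployment/nginx-1.0

# Inspect the recorded revision history
kubectl rollout history deployment/nginx-1.0

# Roll the Deployment back to the previous revision
kubectl rollout undo deployment/nginx-1.0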

Methods of Exposing Services

Service exposing methods

There are different methods to get external user traffic into a Kubernetes cluster. I will try to explain each of them and when to use which method.

ClusterIP
ClusterIP is the default method of accessing a Kubernetes service. The service can be accessed within the Kubernetes cluster, but you are not able to access it externally, i.e. from outside the cluster.

The deployment service structure for ClusterIP

apiVersion: v1
kind: Service
metadata:
  name: hello-world-internal
spec:
  selector:
    app: hello-world
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP

The ClusterIP method above gives access to the service from within the Kubernetes cluster; how can I expose the service to the internet, outside the cluster?

You can access the same service using the Kubernetes proxy.

You need to start the Kubernetes proxy:
kubectl proxy --port=8080

You can then access the service using the following URL:

http://localhost:8080/api/v1/proxy/namespaces/default/services/hello-world-internal:80/

NodePort

A NodePort is an open port on every node (VM) of your cluster. Kubernetes transparently routes incoming traffic on the NodePort to your service, even if your application is running on a different node. Traffic sent to this port is forwarded to the service.

The deployment service structure for NodePort

apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30000

Here you should specify the type as NodePort and the node port number to be opened on the nodes. If you don't specify the port, Kubernetes will pick a random one. You can have one nodePort per service port, and you can only use ports from the 30000–32767 range.
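
To reach the service from outside the cluster, a sketch (<node-ip> is a placeholder for one of your node addresses, which must be reachable on port 30000):

# List node addresses (the IP columns vary by provider)
kubectl get nodes -o wide

# Hit the service on the nodePort from outside the cluster
curl http://<node-ip>:30000/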

LoadBalancer
Load balancing is a relatively straightforward task in many non-container environments, but it involves a bit of special handling when it comes to containers. There are two different types of load balancing in Kubernetes: internal load balancing across containers of the same type using a label, and external load balancing.

A LoadBalancer service is the standard way to expose a service to the internet. The cloud provider's load balancer gives you an internet-facing IP address that forwards traffic to your service, and you can send any kind of traffic to it: HTTP, TCP, UDP, WebSockets, gRPC, etc.
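
A minimal sketch of such a service for the hello-world app used earlier (the hello-world-public name is an assumption; on a cloud provider, Kubernetes provisions the external load balancer and reports its address under EXTERNAL-IP):

apiVersion: v1
kind: Service
metadata:
  name: hello-world-public
spec:
  selector:
    app: hello-world
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80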

Ingress
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress can be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.

The deployment service structure for Ingress

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  backend:
    serviceName: other
    servicePort: 8080
  rules:
  - host: cars.domain.com
    http:
      paths:
      - backend:
          serviceName: cars
          servicePort: 8080
  - host: domain.com
    http:
      paths:
      - path: /bikes/*
        backend:
          serviceName: bikes
          servicePort: 8080

As with all other Kubernetes resources, an Ingress needs apiVersion, kind, and metadata fields. The Ingress lets you do path- and subdomain-based routing to backend services. For example, you can send everything on cars.domain.com to the cars service, and everything under the domain.com/bikes/ path to the bikes service.

Kubernetes Manifest for Deployment, Service & Ingress

Deployment Manifests on Kubernetes

In the deployment configuration, replicas specifies the desired number of replicated pods, whose labels must match the matchLabels directive, which is in key-value format.
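
A minimal sketch of such a deployment manifest (the names and image tag are illustrative assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 3                  # desired number of replicated pods
  selector:
    matchLabels:               # key-value pairs that the pod template labels must match
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: hello-world-app:1.0   # hypothetical image
        ports:
        - containerPort: 80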

Release and Deployment Models

We will discuss two popular deployment strategies: Blue-Green Deployment and Canary Deployment.

Blue-Green Deployment:

A blue/green deployment is a way of accomplishing a zero-downtime upgrade of an existing application. The "blue" version is the currently running copy of the application and the "green" version is the new one. Once the green version is ready, traffic is rerouted to it.
Users experience no downtime and seamlessly switch between the blue and green versions of the application.

With blue/green deployments, a new copy of the application (green) is deployed alongside the existing version (blue). Then the ingress/router for the app is updated to switch to the new version (green). You then need to wait for the old (blue) version to finish serving the requests already sent to it, but for the most part traffic changes to the new version all at once.

In Kubernetes the concept of blue and green is slightly different and can be achieved in a number of ways. In one of them, the blue and green versions are two sets of containers: you create a new deployment and then update the service for the application to point to it.

Blue Deployment: We will create our “blue” deployment yaml file deployment_blue.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1.0
spec:
  replicas: 5
  selector:
    matchLabels:
      name: nginx
      version: "1.0"
  template:
    metadata:
      labels:
        name: nginx
        version: "1.0"
    spec:
      containers:
      - name: nginx
        image: nginx:1.0   # illustrative tag; substitute a real nginx release
        ports:
        - name: http
          containerPort: 80

We now have a deployment and need to create a service to access its instances. In the service we specify a label selector, which is used to list the pods that make up the service.

We will create the service for the deployment_blue instances, so let us create the service.yaml file.

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    name: nginx
    version: "1.0"
  type: LoadBalancer

Now the service is created to access the deployment_blue instances; to do so, we have specified a label selector with the labels name=nginx and version=1.0.

Green Deployment: We will create our “Green” deployment yaml file deployment_green.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1.1
spec:
  replicas: 5
  selector:
    matchLabels:
      name: nginx
      version: "1.1"
  template:
    metadata:
      labels:
        name: nginx
        version: "1.1"
    spec:
      containers:
      - name: nginx
        image: nginx:1.1   # illustrative tag; substitute a real nginx release
        ports:
        - name: http
          containerPort: 80

Now we have two deployments, but the service is still pointing to deployment_blue.

We will modify the service to access the deployment_green instances, so let us modify the service.yaml file.

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    name: nginx
    version: "1.1"
  type: LoadBalancer

Now the service points to the deployment_green instances via the label selector name=nginx and version=1.1, so you should see that the new version of nginx is serving traffic.
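
Instead of editing service.yaml and re-applying it, the selector can also be switched in place with a patch; a sketch:

# Re-point the nginx service from the blue pods (1.0) to the green pods (1.1)
kubectl patch service nginx -p '{"spec":{"selector":{"name":"nginx","version":"1.1"}}}'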

Canary Deployment:

A canary deployment is similar to a soft or beta release of functionality into production, made available to a certain group of users, to specific users, or on specific servers in the cluster. It consists of letting only a part of the users access the new version of the application while the rest still access the "old" version. This is very useful when we want to be sure about stability in the face of changes that may be breaking and have a big impact on the business/application.

Canary Deployment

Canary releases are based on the following assumption:
multiple versions of your application can exist together at the same time, receiving live traffic.

Using Ingress to achieve the canary deployment model
An Ingress resource is a collection of rules that allow inbound connections to reach cluster services.
It can be configured to give services externally reachable URLs, load balance traffic, terminate SSL, offer name-based virtual hosting, and more.
Our objective with Ingress resources is to split traffic so that the canary deployment is accessible on a domain/subdomain.

We have seen in the Deployment Manifests section above how to create manifests for a deployment and a service and how to define rules and routing using Ingress. The following modified Ingress file shows how to manage a canary deployment.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-nginx-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
  - host: cars.domain.com
    http:
      paths:
      - backend:
          serviceName: my-nginx-service
          servicePort: 80

In the Ingress manifest above, nginx.ingress.kubernetes.io/canary: "true" tells ingress-nginx to treat this Ingress differently and mark it as a canary.
nginx.ingress.kubernetes.io/canary-weight: "10" tells ingress-nginx to configure Nginx so that it proxies 10% of the total requests destined for cars.domain.com to the my-nginx-service backend.
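
Note that ingress-nginx splits traffic between this canary Ingress and a primary (non-canary) Ingress carrying the same host rule; a sketch of that primary, assuming a stable service named my-nginx-service-stable (a hypothetical name):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-nginx-service-stable
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: cars.domain.com
    http:
      paths:
      - backend:
          serviceName: my-nginx-service-stable
          servicePort: 80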

Using Istio

In an Istio cluster, we define routing rules to configure the traffic distribution. We create two sets of deployments, labeled with versions v1 and v2; both are separate deployment objects with separate label selectors, and both are exposed via the same service object, which points to their pods (as already explained in the blue/green deployment manifests above).

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx
spec:
  hosts:
  - nginx
  http:
  - route:
    - destination:
        host: nginx
        subset: v1
      weight: 90
    - destination:
        host: nginx
        subset: v2
      weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginx
spec:
  host: nginx
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
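
Promoting the canary is then just a matter of editing the weights and re-applying the manifests; a sketch, assuming they are saved as nginx-canary.yaml:

# After shifting weight toward v2 (e.g. 50/50, then 0/100), re-apply
kubectl apply -f nginx-canary.yaml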

There are also other methods to achieve canary-style deployments.

I hope you have enjoyed reading about CI/CD automation for cloud-native applications. To make the article more complete, I have also included a few references from internet sources; thank you to the respective authors.
You can also watch my YouTube video https://youtu.be/hNwwXINBFJY on a similar topic, presented during a global DevOps summit.

You can also read my published articles on LinkedIn and https://ashishjadhavtechnologyforum.tumblr.com/.
Thank you for your time and for reading.


Ashish Jadhav
Engineering Jio

Ashish is a well-known open source technology leader with expertise in Microservices, Blockchain & DevOps. He is a speaker and blogger on open source technologies.