Aspects of Cloud Native Microservice Architecture

Safvan Kothawala
4 min read · Mar 3, 2020


Cloud Native technologies empower organizations to build and run scalable applications in modern, dynamic environments like public, private and hybrid clouds.

A Cloud Native Microservice Architecture is an architectural style that structures an application as a collection of services that are

  • Highly maintainable and testable
  • Loosely coupled
  • Independently deployable
  • Organized around business capabilities
  • Owned by a small team

Implementation of Microservice Architecture results in

  • Increased Resilience
  • Improved Scalability
  • Enablement of Canary Deployment & Dark Launches
  • Faster Time to Market with rapid, frequent and reliable delivery

The following image describes the journey of converting a Monolithic Application to a Cloud Native Microservice Architecture.

Journey to Cloud Native Architecture

Aspects of Cloud Native Microservice Architecture

Following are the different aspects of Cloud Native Microservice Architecture:

1. Decomposition

A Monolithic Application has to be decomposed into multiple independent, lightweight services that communicate with each other synchronously over REST or asynchronously via a message queue.

2. Containerization and Orchestration

Each decomposed service should be packaged together with its runtime dependencies and containerized (e.g. using Docker) for consistency across multiple environments and ease of shipment.

An orchestration platform (such as Kubernetes) should be used to deploy the containerized application across a cluster through a central management plane.

The platform should auto-heal failed application instances to increase the resilience of the Microservices.

The platform should auto-scale application instances independently, without any service interruption, based on CPU or memory utilization limits or on custom KPIs.
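As a minimal sketch of the two points above, a Kubernetes Deployment with a liveness probe (for auto-healing) paired with a HorizontalPodAutoscaler (for CPU-based auto-scaling) might look like this; the service name, image, and thresholds are illustrative assumptions:

```yaml
# Illustrative Deployment: the liveness probe lets Kubernetes restart
# (auto-heal) an instance that stops responding.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service                # hypothetical service name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: registry.example.com/order-service:1.0.0  # assumed image
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
---
# Illustrative HorizontalPodAutoscaler: scales the Deployment between
# 2 and 10 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: order-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Custom KPIs can be scaled on as well, via the `Pods` or `External` metric types, once a metrics adapter is installed in the cluster.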

3. Service Mesh

A Service Mesh should be created to control service-to-service communication between the Microservices.

Frameworks such as Istio can be used to create and manage the Service Mesh.

The traffic management features of Istio can be used to implement routing rules, retries, timeouts, control of communication with external systems, TLS origination and termination, etc. No application restart is required when any of these settings change.
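As a sketch, retries and timeouts for a (hypothetical) order-service could be declared in an Istio VirtualService like this; the host, port, and values are assumptions:

```yaml
# Illustrative Istio VirtualService: per-request timeout and automatic
# retries for traffic to the order-service, applied without restarting it.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: order-service
spec:
  hosts:
    - order-service
  http:
    - route:
        - destination:
            host: order-service
            port:
              number: 8080
      timeout: 3s               # fail fast if the overall call takes too long
      retries:
        attempts: 3             # retry transient failures
        perTryTimeout: 1s
        retryOn: 5xx,connect-failure
```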

The circuit-breaking feature of Istio can be used to create resilient applications that limit the impact of failures and other undesirable effects of network peculiarities.
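In Istio, circuit breaking is configured through a DestinationRule: connection-pool limits cap concurrent load, and outlier detection ejects misbehaving instances. The sketch below uses an assumed order-service and illustrative thresholds:

```yaml
# Illustrative Istio DestinationRule: connection-pool limits plus outlier
# detection act as a circuit breaker for the order-service.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: order-service
spec:
  host: order-service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 50
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutive5xxErrors: 5   # eject a host after 5 consecutive 5xx errors
      interval: 30s
      baseEjectionTime: 1m
      maxEjectionPercent: 50
```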

The rate-limiting feature of Istio can be used to implement overload protection without making any changes to the application.

The fault injection feature of Istio can be used to simulate failure scenarios in a testing lab.
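A sketch of fault injection for the same hypothetical order-service: delay a share of requests and abort another share with an error status, to exercise the callers' failure handling:

```yaml
# Illustrative fault injection: delay 50% of requests by 5s and return
# HTTP 503 for 10% of them -- for use in a test environment only.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: order-service-faults
spec:
  hosts:
    - order-service
  http:
    - fault:
        delay:
          percentage:
            value: 50.0
          fixedDelay: 5s
        abort:
          percentage:
            value: 10.0
          httpStatus: 503
      route:
        - destination:
            host: order-service
```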

Canary deployments and dark launches can be performed using Istio without any downtime.
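A canary rollout can be sketched as weighted routing between two version subsets; the subset labels and the 90/10 split below are illustrative, and the weights are shifted gradually toward the new version:

```yaml
# Illustrative canary: subsets map to pod labels for the stable (v1)
# and canary (v2) versions of the order-service.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: order-service
spec:
  host: order-service
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
# 90% of traffic goes to v1, 10% to the v2 canary -- no downtime.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: order-service
spec:
  hosts:
    - order-service
  http:
    - route:
        - destination:
            host: order-service
            subset: v1
          weight: 90
        - destination:
            host: order-service
            subset: v2
          weight: 10
```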

Istio automatically records metrics and traces for all traffic within the cluster, including cluster ingress (incoming) and egress (outgoing) traffic.

A topology diagram of the services currently participating in the Service Mesh should be derived using a visualization tool (for example, Kiali).

4. Observability, Logging & Analysis

Centralized logging should be implemented to manage and analyze the logs of an application running in multiple containers across the cluster.

A centralized metrics server should be used to store the time-series metrics of all Microservices.

A metrics dashboard utility should be used to design and display all application and system metrics in graphical form.

The application should be able to raise business- and SLA-related alerts from a centralized tool.
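Assuming Prometheus is the centralized metrics and alerting tool (the article does not name one), an SLA alert on a service's error rate could be sketched as an alerting rule like this; the job name, metric, and thresholds are illustrative:

```yaml
# Illustrative Prometheus alerting rule: fire when the 5-minute 5xx error
# rate of the hypothetical order-service stays above 5% for 10 minutes.
groups:
  - name: sla-alerts
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{job="order-service",status=~"5.."}[5m]))
            / sum(rate(http_requests_total{job="order-service"}[5m])) > 0.05
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "order-service 5xx error rate above 5% for 10 minutes"
```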

A centralized tracing utility should be present to trace a request that involves multiple Microservices. A trace should cover a span for each Microservice taking part in the request.

5. Asynchronous messaging

A centralized queueing mechanism should be implemented to enable an event-driven architecture across all Microservices.

No Microservice should implement a queuing service by itself, as that would make it stateful.

The queueing service should be implemented in such a way that no data is lost in any kind of failure.
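As one sketch of the durability requirement, assuming Kafka managed by the Strimzi operator on Kubernetes (the article does not name a broker), a topic can be replicated so that a broker failure loses no data; the topic name, cluster label, and counts are assumptions:

```yaml
# Illustrative Strimzi KafkaTopic: 3 replicas with at least 2 in-sync
# replicas means an acknowledged write survives the loss of one broker.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: order-events
  labels:
    strimzi.io/cluster: my-cluster   # assumed Kafka cluster name
spec:
  partitions: 6
  replicas: 3
  config:
    min.insync.replicas: 2
```

Producers would additionally need `acks=all` for this guarantee to hold end to end.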

6. Centralized Caching

Centralized caching should be implemented to store the cached data of all Microservices.

No Microservice should implement a cache service by itself, as that would make it stateful.

The caching framework should perform comparably to an in-memory cache.

7. API Gateway

An API Gateway should be used to publish, route, and load-balance all external APIs and portals.

Any incoming request from outside the Kubernetes cluster should enter via the API Gateway only.

TLS termination should be done at the API Gateway only, so that no changes are required on the application side to support HTTPS.
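Gateway-side TLS termination can be sketched with a Kubernetes Ingress; the hostname, Secret name, and backend are illustrative assumptions:

```yaml
# Illustrative Ingress: TLS terminates at the gateway using a certificate
# stored in a Secret, so the backend services can speak plain HTTP.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-gateway
spec:
  tls:
    - hosts:
        - api.example.com          # hypothetical external host
      secretName: api-tls-cert     # Secret holding the TLS cert and key
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: order-service
                port:
                  number: 8080
```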

8. CI/CD Process

An individual CI/CD pipeline should be created for each Microservice.

Each pipeline should run independently without affecting other Microservices.

Each Microservice should have its own release version, so that it can be independently deployed and released according to its own release plan.

Helm charts (Helm is a package manager for Kubernetes, not a built-in K8s feature) can be created to define, install, and upgrade the Microservice applications, easing deployment.
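As a minimal sketch, each Microservice could ship its own chart; the chart name and versions below are assumptions:

```yaml
# Illustrative Chart.yaml for a per-service Helm chart.
apiVersion: v2
name: order-service
description: Helm chart for the (hypothetical) order-service
version: 0.1.0        # chart version, bumped on every chart change
appVersion: "1.0.0"   # application version being deployed
```

A release could then be installed or upgraded in one step with, e.g., `helm upgrade --install order-service ./order-service`.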

9. Testing

Unit test cases should be written for each Microservice.

Integration test cases should also be written to test business scenarios that involve more than one Microservice.

For more details, refer to The Twelve-Factor App:

https://12factor.net/
