Advancements in Microservices Management

The LTTS Editorial Team
TS Tech

--

Microservices are an evolution of service-based architectures, enabled by the maturity of containerization and orchestration technologies. Services are designed around the bounded contexts of a business domain, following the Domain-Driven Design (DDD) approach.

Software engineering practices have evolved over the years to include agile methods that support iterative development by small, focused teams. These agile processes fit well with the microservice architecture paradigm, supporting the design of granular services.

Understanding Microservices

Cloud infrastructure for virtualization and infrastructure management makes it possible to deploy large numbers of microservices and helps automate the process.

Individually, microservices are designed around the bounded context of a business function. For instance, in an e-commerce solution, adding an item to the cart and placing an order can be modeled and implemented as two separate services. These services are typically exposed through REST APIs, RPC, or event messages, both for external web clients and for communication between the services themselves.
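
As a minimal sketch of the two bounded contexts just described (assuming the Flask library; the route names and payload fields are illustrative, not taken from any specific system), the cart function can be exposed as its own small REST service, with the order function running as a separate process:

    # cart_service.py - "add item to cart" bounded context (illustrative sketch)
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    carts = {}  # in-memory store standing in for the service's own database

    @app.route("/carts/<user_id>/items", methods=["POST"])
    def add_item(user_id):
        item = request.get_json()  # e.g. {"sku": "A100", "qty": 2}
        carts.setdefault(user_id, []).append(item)
        return jsonify({"user": user_id, "items": carts[user_id]}), 201

    if __name__ == "__main__":
        app.run(port=5001)  # the order service would be a separate app on its own port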

These services are deployed as Docker containers or Kubernetes pods, and are managed and orchestrated using tools like Docker Swarm and Kubernetes. Observability of the services is provided through sidecars and a service mesh.

Microservices can be developed in any programming language and are database agnostic; each service typically manages its own database (e.g., SQL or NoSQL).
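
To illustrate the database-per-service idea, here is a sketch of the order service owning its own datastore (SQLite is chosen only because it needs no setup; the table layout is illustrative):

    # order_service_db.py - the order service's private datastore (sketch)
    import sqlite3

    conn = sqlite3.connect("orders.db")  # private to the order service; no other service touches this file
    conn.execute("""CREATE TABLE IF NOT EXISTS orders (
                        id      INTEGER PRIMARY KEY AUTOINCREMENT,
                        user_id TEXT NOT NULL,
                        total   REAL NOT NULL)""")

    def place_order(user_id, total):
        with conn:  # commits on success, rolls back on error
            conn.execute("INSERT INTO orders (user_id, total) VALUES (?, ?)",
                         (user_id, total))

    place_order("u42", 99.50)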

Advantages of Using Microservices:

1) Modularity:

  • Saves time, since individual services can be updated without rewriting the whole application
  • Enables addition of new features as microservices which can be plugged into existing applications

2) Easy Maintainability:

  • Divides the codebase into smaller, granular services, enhancing maintainability

3) Auto Scaling:

  • Offers the freedom to scale services independently, enabling system architects to achieve higher availability

4) Easy to Deploy:

  • Containerized services can be deployed using automated tools
  • Only the needed codebase can be deployed

5) System Resilience:

In the event of a service malfunction, only a few components are affected. While the entire application is not impacted, cascading effects are still possible. These can be mitigated through canary and chaos tests in the production environment, and controlled through continuous monitoring and a recovery strategy.

And Some Challenges (in terms of management, orchestration and observability)…

1) Managing Microservices:

  • Deployments need to support a polyglot environment
  • At scale, with a high number of microservices, it becomes challenging to manage operations

2) Inter-Service Communication:

  • A reliable and fast communication channel is required for microservices to interface with each other

3) Unstructured Log Data:

  • Each service has its own logging mechanism and produces a large amount of unstructured log data, which can pose a challenge for analysis (see the structured-logging sketch after this list)

4) Identifying Root Causes:

  • Distributed logic and data increase the effort involved in locating root causes, resulting in longer maintenance and recovery times

5) Service Updates:

  • Components in major software solutions can become outdated, making it essential to find a way to update services easily without affecting other components
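
A common mitigation for the unstructured-logging challenge above is to have every service emit structured JSON logs that a central platform can parse. Here is a minimal sketch using only the Python standard library (the "service" and "trace_id" fields are illustrative conventions, not a standard):

    import json, logging, sys

    class JsonFormatter(logging.Formatter):
        def format(self, record):
            return json.dumps({
                "ts": self.formatTime(record),
                "level": record.levelname,
                "service": "order-service",  # illustrative service name
                "trace_id": getattr(record, "trace_id", None),
                "message": record.getMessage(),
            })

    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(JsonFormatter())
    logger = logging.getLogger("order-service")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    logger.info("order placed", extra={"trace_id": "abc-123"})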

Service Mesh for Microservices

A service mesh uses the sidecar pattern for microservices deployment. It bundles common tools useful for the management of microservices, enabling internal service-to-service communication and a unified control and management architecture.

This is possible with the help of tools aggregated together for communication between services (proxying, load balancing, circuit breaking), observability of services (logging), and platform management (production/staging/test deployments).
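
As an illustration of one of those communication safeguards, here is a minimal circuit-breaker sketch (the thresholds and the requests-based call are illustrative; in an actual mesh this logic lives in the sidecar proxy rather than in application code):

    import time
    import requests

    class CircuitBreaker:
        def __init__(self, failure_threshold=3, reset_timeout=30):
            self.failure_threshold = failure_threshold
            self.reset_timeout = reset_timeout
            self.failures = 0
            self.opened_at = None  # None means the circuit is closed

        def call(self, url):
            # While open, fail fast until the reset timeout has elapsed
            if self.opened_at and time.time() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: skipping call to " + url)
            try:
                resp = requests.get(url, timeout=2)
                resp.raise_for_status()
            except requests.RequestException:
                self.failures += 1
                if self.failures >= self.failure_threshold:
                    self.opened_at = time.time()  # trip the breaker
                raise
            self.failures = 0  # a success closes the circuit again
            self.opened_at = None
            return resp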

Advantages of Using a Service Mesh:
• Modular services that improve team management and fit agile processes
• A resilient microservice architecture
• Abstraction away from control elements and non-functional requirements
• Improved maintainability, by segregating operational components into a sidecar maintained by a single operations team

Challenges in Adding New Components to a Microservices Deployment:
• Additional costs associated with deployment
• High network traffic for frequent API calls
• Additional resources, software, and tools to implement the service mesh
• Additional team skills and dynamics: a team of “meshers” who combine DevOps, networking, and security skills

Service Mesh Architecture

The real utility of a service mesh is realized when an organization has a microservices-based application or platform that needs to run at scale, and where duplicated resources require operational effort to run services across production, staging, and test networks.

The service mesh approach encapsulates the tools for testing, communication, and deployment, and helps migrate those tools seamlessly when upgrading. It also shortens the time to redeploy in a new operational environment.

The operations team gains better control and visibility of operations, with lower overheads through standardization of operational tools. Developers also benefit from a standardized logging mechanism that helps with debugging.

Event-driven Service Mesh

A service mesh can abstract the inter-service HTTP-based REST API calls and send them over a broker or event bus, implemented in a pub-sub pattern on a messaging service such as AMQP or Apache Kafka.

The event-driven service mesh needs two important components as part of the sidecar: a protocol translator and an HTTP bridge that converts synchronous HTTP calls into asynchronous event messages.
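
A minimal sketch of such a bridge, assuming the kafka-python client and a local Kafka broker (the endpoint path and the orders.events topic name are illustrative): an incoming HTTP POST is acknowledged immediately while the payload is published as an event for downstream services to consume asynchronously.

    from flask import Flask, request, jsonify
    from kafka import KafkaProducer
    import json

    app = Flask(__name__)
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    @app.route("/orders", methods=["POST"])
    def create_order():
        event = request.get_json()
        producer.send("orders.events", value=event)  # synchronous call becomes an async event
        return jsonify({"status": "accepted"}), 202  # respond without waiting for consumers

    if __name__ == "__main__":
        app.run(port=8080)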

An event-architecture-based service mesh combines the advantages of a service mesh, i.e., decoupling and unified tools for security and observability, with the advantages of an event-based architecture, providing better elasticity, decoupling, and flexibility.

Sending the messages over a message bus also has the added benefit of providing visibility into the data exchanged between services, and stream analysis becomes possible on a common platform.

Interface Standardization and Management Tools

Standardization happens at two levels:

  • Service Mesh Tools
    Tools from open-source cloud communities such as the Cloud Native Computing Foundation (CNCF) are emerging as the de facto set of service mesh tools to orchestrate, control, and monitor microservices
  • API for Service Mesh
    There are efforts by the industry consortium behind the Service Mesh Interface to standardize the service mesh API

One of the key problems in operations management of cloud solutions and platforms built on a microservices architecture is having a cost-effective and network-efficient control plane for service mesh technology. Tools like Gloo Mesh and Meshery enable configuring and managing multiple heterogeneous service meshes for observability and control of microservices operations.

API Gateways and BFF

API gateways help clients reach service endpoints through a single domain and IP address. They also handle protocol translation in the backend while providing a common set of client-specific APIs to the frontend.

API gateways, however, bring a monolithic pattern into the distributed architecture of microservices; this causes scalability issues and creates a single point of failure.

The backend-for-frontend (BFF) architecture aims to mitigate the issues of a “one size fits all” API by deploying separate API gateways and backend wrappers for microservices based on the type of client. These client-specific frontend APIs can abstract calls to multiple backend services, reducing network traffic and the number of API calls from clients. This is especially helpful for mobile clients, which operate under wireless network and compute constraints.
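
A minimal sketch of a mobile BFF endpoint (assuming the requests library; the internal service URLs and response fields are hypothetical) that aggregates two backend calls into a single, trimmed-down payload for the client:

    from flask import Flask, jsonify
    import requests

    app = Flask(__name__)

    # Hypothetical internal endpoints sitting behind the BFF
    CART_SVC = "http://cart-service:8080"
    ORDER_SVC = "http://order-service:8080"

    @app.route("/mobile/home/<user_id>")
    def mobile_home(user_id):
        # One client call fans out to several backend services...
        cart = requests.get(f"{CART_SVC}/carts/{user_id}", timeout=2).json()
        orders = requests.get(f"{ORDER_SVC}/orders?user={user_id}", timeout=2).json()
        # ...and returns a single payload shaped for the mobile client
        return jsonify({
            "cart_count": len(cart.get("items", [])),
            "recent_orders": orders[:3],
        })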

Continuous Integration and Continuous Deployment

In the management of microservices, CI, CD, and staged testing are important steps in the deployment process.

CI/CD for microservices creates a unique challenge, with separate repositories, code integration, and deployment cycles for each service.

Build, test, and integration have traditionally been managed through Git repositories and Jenkins automation tools. Deployment management of microservices and the orchestration of Docker containers and Kubernetes pods are handled through plugin integrations with Jenkins. Tools like Codefresh help provide the complete CI/CD lifecycle for microservices, including staging and production deployments.
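
As one way to tame per-service build cycles, a pipeline can build only the services whose code actually changed. Here is a minimal sketch (assuming a monorepo with one top-level directory per service; this is a hypothetical helper script, not a built-in feature of Jenkins or Codefresh):

    import subprocess

    SERVICES = {"cart-service", "order-service", "payment-service"}  # illustrative names

    def changed_services():
        # Files touched by the latest commit
        changed = subprocess.run(
            ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        # Map each changed path to its top-level service directory
        return {path.split("/")[0] for path in changed} & SERVICES

    if __name__ == "__main__":
        for svc in sorted(changed_services()):
            print(f"trigger build for {svc}")  # hand off to the CI tool here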

Testing Strategy

Deployment and testing in staging are followed by production deployment. A/B or blue-green deployment methods are used to ensure the stability of new releases of a service in the staging environment before deployment to production.

Finally, it is important to run canary tests on the system for early detection of issues, by opening the API to a small set of customers before it is exposed to all customers.
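
A minimal sketch of canary routing, where a small, configurable fraction of requests is sent to the new version (the 5% weight and upstream URLs are illustrative; in practice this split is usually configured in the gateway or service mesh rather than hand-coded):

    import random
    import requests

    STABLE_URL = "http://orders-v1:8080"  # current release
    CANARY_URL = "http://orders-v2:8080"  # new release under canary test
    CANARY_WEIGHT = 0.05                  # roughly 5% of traffic goes to the canary

    def route(path):
        upstream = CANARY_URL if random.random() < CANARY_WEIGHT else STABLE_URL
        return requests.get(upstream + path, timeout=2)

    # Example: route("/orders?user=u42") mostly hits v1, occasionally v2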

Many large deployments include chaos testing at regular intervals, deliberately inducing errors and monitoring system behavior so that issues can be fixed as a premeditated step and the system is ready for any eventuality.
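
A minimal sketch of the fault-injection side of chaos testing, where a wrapper randomly adds latency or raises errors so that monitoring and recovery paths get exercised (the rates are illustrative; dedicated tooling normally injects faults at the infrastructure level rather than in application code):

    import functools, random, time

    def chaos(latency_rate=0.1, error_rate=0.05, max_delay=2.0):
        """Randomly delay or fail the wrapped call to exercise recovery paths."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                if random.random() < latency_rate:
                    time.sleep(random.uniform(0, max_delay))  # injected latency
                if random.random() < error_rate:
                    raise RuntimeError("chaos: injected failure")
                return fn(*args, **kwargs)
            return wrapper
        return decorator

    @chaos()
    def place_order(user_id):
        return {"user": user_id, "status": "placed"}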

Looking Forward

When working with a dynamic and ever-evolving technology, best practices will change over time.

The service mesh provides an optimized pattern for deploying microservices, decoupling them from the operational tools. BFF brings improved modularity, scalability, and failure resilience to API platforms.

The latest is not always the best; rather, engineers have a choice of tools for deploying and managing microservices, and should find what is best suited to the problem, given the constraints and resources available to the development and operations teams.

The possibilities are exciting.

--