Scaling with Microservices? Here’s Why Service Mesh is a Must.

Shouvojit Sarker · Published in agiledigital · Nov 16, 2020 · 8 min read

Learn how a Service Mesh offers better insight into, and control over, microservices as they scale to handle your organisation’s data flows.

Photo by John Barkiple on Unsplash: wires connecting different parts of a system

Making decisions in system design is all about trade-offs, and microservice architectures give us lots of trade-offs to make.
Sam Newman, Building Microservices: Designing Fine-Grained Systems

Polls suggest around three-quarters of organisations are implementing microservices to power their digital enterprise. While the potential benefits of microservices are well established, enterprise developers sometimes overlook serious drawbacks in operating this kind of distributed architecture.

In this article, you’ll learn about potential drawbacks with microservices and how Service Mesh patterns and technologies can address these. Along the way, we discuss what a Service Mesh is and how MuleSoft combines a Service Mesh with its enterprise API and data service management technology.

Drawbacks of Microservices

We often see two common pitfalls in the implementation of a microservices architecture. Organisations either overlook issues inherent in the microservices approach, or they address those issues separately and repeatedly for each microservice implementation.

To use microservices successfully, some questions that need to be answered holistically are:

  • How will data move between different microservices?
  • Should a microservice allow anyone to send requests to it?
  • How do you ensure that communications are secure (e.g. encrypted)?
  • If there is an error, how do you know which service was the source of it?
  • Should scaling up be driven solely by an increase in requests and automatic worker provisioning?

In short, we need to understand and control the movements of our data at scale. We also need to enforce policies that limit our risk profile without creating more work for our developers. This is where a Service Mesh brings value by simply and effectively governing all data exchanges between our microservices.

So Service Mesh solves these issues?

Yes, a Service Mesh works well to address the enterprise concerns of data security, observability, scalability, and discoverability in a microservices architecture.

The core idea of a Service Mesh is to manage and understand how data passes from one service to another. This includes whether data is allowed to pass between two particular services, and how many requests can be made to a service in a given period (rate limiting).

A Service Mesh is thus a way to control and monitor how distributed online services share data. A typical Service Mesh will provide support for traffic management, security, identity, authentication and observability. The policies used to govern these aspects are deployed in a shared infrastructure layer which simplifies the application layer for developers.
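
To make this concrete, here is a minimal sketch of such a policy, assuming an Istio-based mesh (the names `orders`, `shop` and `frontend` are hypothetical). It allows only the frontend workloads to call the orders service; any other in-mesh caller is rejected at the sidecar proxy, without the orders service itself containing any authorisation code.

```yaml
# Hypothetical Istio AuthorizationPolicy: only workloads running as the
# "frontend" service account may call pods labelled app=orders. Because an
# ALLOW policy now matches this workload, any request that matches none of
# its rules is denied by the sidecar proxy.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: orders-allow-frontend
  namespace: shop
spec:
  selector:
    matchLabels:
      app: orders
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/frontend/sa/frontend"]
```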

Security solution with Zero Trust Model

A Service Mesh can be configured to operate on a Zero Trust network model. Zero Trust is a security concept based on the principle that an enterprise should not automatically trust anything inside or outside its perimeters. Rather, anything trying to connect to a data service must be verified before access is granted. Some installations of a Service Mesh come with a Zero Trust security model out of the box.

This security model avoids the old castle-and-moat mentality, where organisations presume that everything inside their perimeter is not a threat and is cleared for access. Because a given microservice can have different addresses and belong to various networks, traditional network-level access control is not sufficient, and it is desirable for all communications to be encrypted. This also brings in the concept of governance: who gets to access what depends on the policy of that particular service, and consumers are granted only the minimal amount of data required to perform specific tasks. Policies on the Service Mesh thus play a critical role in security and are a great way to implement a Zero Trust model on a distributed system.
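
As a sketch of what ‘never trust, always verify’ looks like in practice, the following mesh-wide policy (again assuming an Istio-based mesh) requires mutual TLS for all service-to-service traffic, so every workload must present a verifiable identity and every connection is encrypted by default.

```yaml
# Hypothetical mesh-wide mTLS policy: applied in the Istio root namespace,
# it forces every sidecar to accept only mutual-TLS traffic, rejecting
# plaintext connections between services.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```

Finer-grained authorisation policies, like the earlier example, then decide which verified identities may talk to which services.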

Language agnostic tracing

Monitoring a standalone service is straightforward. However, observability can be very challenging across a distributed services architecture. With multiple services, understanding how a request is fulfilled and pinpointing performance bottlenecks and failures become progressively more challenging. A Service Mesh manages observability by consistently supporting transaction tracing across these distributed systems, irrespective of the language or framework used to build any particular service.
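
As an illustration, newer versions of Istio let you switch on mesh-wide trace sampling with a single resource; this is a minimal sketch, and the 10% sampling rate is an arbitrary choice.

```yaml
# Hypothetical mesh-wide tracing configuration: the sidecar proxies report
# spans for a random 10% of requests, regardless of what language each
# service is written in.
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  tracing:
    - randomSamplingPercentage: 10.0
```

One caveat: the proxies emit spans automatically, but each service still needs to forward the incoming trace headers (for example the B3 or `traceparent` headers) on its outbound calls so that spans from different hops are stitched into a single trace.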

Manage traffic as it scales

Scalability is potentially one of the largest benefits of microservices. A properly designed microservice will support running many copies at the same time (scaling up), sharing the load of handling requests between them. If the number of requests decreases, the number of copies can also be decreased to save resources (scaling down). This is called horizontal scaling.
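
Horizontal scaling itself is usually handled by the orchestrator rather than the mesh. As a point of reference, this is a minimal Kubernetes HorizontalPodAutoscaler sketch for a hypothetical `orders` deployment; the mesh then has to route traffic sensibly across however many copies are running at any moment.

```yaml
# Hypothetical autoscaler: keep between 2 and 10 copies of the orders
# deployment, adding or removing pods to hold average CPU usage near 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders
  namespace: shop
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```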

Horizontal scaling brings traffic-management problems with it. A Service Mesh alleviates this through traffic-management policies. For example, a request can be routed to a specific worker for a service based on conditions, limits can be set on the number of requests a service will accept, and other traffic-related rules can also be implemented.
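
Here is a minimal sketch of such a traffic policy, again assuming an Istio-based mesh and the hypothetical `orders` service: it caps how many requests may queue for the service and automatically ejects unhealthy copies from load balancing.

```yaml
# Hypothetical traffic policy: limit pending requests to the orders service
# and temporarily remove any pod that returns five consecutive 5xx errors.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: orders
  namespace: shop
spec:
  host: orders.shop.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```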

Automatic discovery of services

With an increased number of microservices in an enterprise, the act of discovering a required service and manually connecting to it becomes untenable. Rather than relying on manual discovery of online services, a Service Mesh enables services to find each other, either by IP address or by service name. Participating services can then establish a secure connection automatically. This improves service discovery, particularly in an environment where new services are added frequently.
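
In a Kubernetes-based mesh, the service registry that makes this possible is largely the cluster’s own Service objects. A minimal sketch for the hypothetical `orders` service:

```yaml
# Hypothetical Service: other workloads can now reach the orders pods by the
# stable name orders.shop.svc.cluster.local instead of tracking pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: orders
  namespace: shop
spec:
  selector:
    app: orders       # selects all pods labelled app=orders
  ports:
    - port: 80        # port callers use
      targetPort: 8080  # port the container listens on
```

The sidecar proxies build on this registry to resolve the name, load-balance across the matching pods and secure the connection with mTLS, with no discovery logic inside the calling service.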

How Does Service Mesh Work?

General Architecture

A typical Service Mesh architecture has two main components: a control plane and a data plane.

Generic Service Mesh architecture (diagram by MuleSoft)

Control Plane

The control plane of a Service Mesh is where shared rules are defined, and from where policies and security are enforced. This would usually be used as a point to configure, access and tweak the Service Mesh to suit specific use cases.

Data Plane

The data plane is where the actual action takes place in a Service Mesh. This is where service requests are routed between microservices through proxies deployed in an infrastructure layer (called sidecar proxies). Policies are applied on this plane, and traffic between pods is secured, routed and managed here. The data plane also provides traffic metrics and tracing information.
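
To make the two planes concrete, assuming an Istio-based mesh and a hypothetical `shop` namespace: the only change an application team makes to join the data plane is a namespace label, after which the control plane injects and programs a sidecar proxy alongside every pod.

```yaml
# Hypothetical namespace: with this label, the mesh's control plane injects a
# sidecar proxy (the data plane) into every pod deployed here, so policies,
# mTLS and metrics apply without changing application code.
apiVersion: v1
kind: Namespace
metadata:
  name: shop
  labels:
    istio-injection: enabled
```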

Anypoint Service Mesh: MuleSoft’s Implementation

Anypoint Service Mesh takes advantage of Istio’s custom adapters to enforce policies from Anypoint Platform. It can also export metrics from Istio to Anypoint Platform. In practice, it replaces the control plane of Istio (Istiod) with Anypoint Platform and uses a custom adapter to facilitate communication between the control and data planes. This simplifies the configuration, deployment and management of the Service Mesh, as all business logic, tasks, policy and security management can be performed in Anypoint Platform. It can also facilitate communication between Mule applications and non-Mule services.

Anypoint Architecture

Anypoint Service Mesh design (diagram by MuleSoft)

The Service Mesh in Anypoint Service Mesh is an independent architecture layer in a Kubernetes cluster. Instead of the services communicating directly with each other, a sidecar proxy performs that job. Anypoint Platform communicates with these sidecar proxies through the MuleSoft adapter to enforce policies and to collect analytics, so that the services can be managed and observed in the Anypoint Platform layer. This is how non-Mule microservices in the Kubernetes cluster are exposed as APIs in Anypoint Platform. Envoy filters authorise requests, which are then sent directly to the adapter. The adapter also enables all services managed by the mesh to share metadata, which in turn enables existing microservices to be recognised as APIs by Anypoint Platform. Anypoint Exchange uses this metadata to discover and create APIs for each service, and these APIs can then be managed with API Manager. Anypoint Monitoring similarly uses the metadata to collect information from the sidecar proxies and use it for API analytics.

Flow

Request flow in Anypoint Service Mesh (diagram by MuleSoft)
  1. The client sends an ingress request to the service.
  2. The request is captured by Envoy and sent to the adapter using an Envoy filter. Policy checks, verification and authentication happen at this stage.
  3. If there are no violations, the request is then forwarded to the microservice.
  4. The service logic runs in the microservice, and a response is generated and sent back to the client.
  5. At regular intervals, the adapter communicates asynchronously with Anypoint Platform to fetch the latest policies and contracts.
  6. At regular intervals, the adapter also returns API analytics information to Anypoint Platform.

When should it be used?

  • If there is a need to connect services running on the MuleSoft runtime engine with services running outside of the MuleSoft runtime engine in a Kubernetes cluster.
  • If there is already a deep understanding and expertise in Anypoint Platform’s policy and security management.
  • If your organisation currently uses MuleSoft and wants to explore and gradually move towards microservices running elsewhere.
  • If it isn’t practical or desirable to rewrite your existing services into MuleSoft, but you want to export management tasks to Anypoint Platform.

Collaborate and scale, no matter what

The MuleSoft development philosophy uses a Centre for Enablement (C4E) model. This is about allowing developers within an enterprise to self-serve, scale and reuse resources, all while benefiting from sound governance. The C4E model is closely related to the concept of managing a microservices architecture, which consists of independent, scalable services that perform specific tasks and are decoupled from one another. The typical microservices-related challenges arising from this approach are addressed by using Anypoint Platform.

But what if the APIs aren’t developed using MuleSoft? Different groups in an organisation may use different technologies. This is where Anypoint Service Mesh comes in.

Anypoint Service Mesh can take a microservice implemented in any technology and then treat it as a MuleSoft API in Anypoint Platform. Once the corresponding API is created in Anypoint Platform using Anypoint Service Mesh, it can be discovered, managed and secured using existing enterprise dashboards.

So what’s next?

Service Mesh is one great way to resolve the challenges that arise when using microservices at scale. If you have skills and expertise with MuleSoft, then Anypoint Service Mesh is a great tool to use to manage services, no matter how they are built. If this article interests you in learning more, setting up a sample Anypoint Service Mesh installation would be a great next exploratory step.

That’s all for now. I hope this article helps your team save your microservices architecture from becoming a maxi-problem!
