Introduction To Service Mesh

Arjun RC
Jun 18, 2019

Is software development really that easy nowadays? Everyone is adopting or migrating to a microservices architecture, and the single monolith is broken into pieces. Then came the following headaches:

a. Service Discovery and Load Balancing
b. Fault Tolerance
c. Monitoring and Tracing

But we came up with solutions for these, thanks to Netflix, the Cloud Native Computing Foundation, Spring, and other open-source projects.

A typical microservice environment looks like this:

The above architecture solves most of the major problems that we faced.

a. Eureka - Service Discovery
b. Ribbon - Client-Side Load Balancing
c. Hystrix - Circuit Breaker
d. Zipkin - Distributed Tracing
e. Prometheus - Monitoring
f. Grafana - Data Visualization
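
These components live inside each application: Hystrix, for instance, is a Java library that every service has to embed. As a language-neutral illustration of what a circuit breaker does inside application code (exactly the kind of peripheral concern discussed below), here is a minimal sketch in Go. This is not Hystrix's actual API; the failure threshold, cooldown, and flaky dependency are made up for illustration.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// breaker is a deliberately tiny circuit breaker: after maxFailures
// consecutive failures it "opens" and rejects calls until cooldown passes.
type breaker struct {
	maxFailures int
	cooldown    time.Duration
	failures    int
	openedAt    time.Time
}

var errOpen = errors.New("circuit open: failing fast")

func (b *breaker) call(fn func() error) error {
	if b.failures >= b.maxFailures && time.Since(b.openedAt) < b.cooldown {
		return errOpen // fail fast instead of hammering a sick dependency
	}
	if err := fn(); err != nil {
		b.failures++
		if b.failures >= b.maxFailures {
			b.openedAt = time.Now() // (re)open the circuit
		}
		return err
	}
	b.failures = 0 // a success closes the circuit again
	return nil
}

func main() {
	b := &breaker{maxFailures: 3, cooldown: 5 * time.Second}
	flaky := func() error { return errors.New("downstream timeout") } // stand-in dependency

	for i := 0; i < 5; i++ {
		fmt.Println(b.call(flaky))
	}
	// After three failures, the remaining calls are rejected immediately.
}
```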

These solutions work well only if our entire environment is built on a single stack (e.g., Spring Boot apps). What if the environment is:

a. A Multi-Stack Environment
b. A Multi-Framework Environment
c. Polyglot or Legacy

Now we are stuck. The issues mentioned earlier are non-core concerns, yet we end up spending far too much time on them. Software development should be easier than this.

If we look at any microservice-based, cloud-native environment today, it is kind of complicated: clouds, elastic services, containers, serverless, virtual machines, our own code, and so on. Writing code for such an environment, or operating it, is not easy. Someone once said that after moving from a monolith to microservices, every outage becomes a murder mystery. The question is how we can solve these mysteries. The answer is the service mesh.

A service mesh is basically an infrastructure layer for service-to-service communication. Arijit Mukherji, CTO of SignalFx, gave an interesting definition: a service mesh is a happy marriage between proxy and service. A typical service mesh looks like the following.

Here, in a service mesh, we add a layer 7 proxy alongside each microservice instance, which means service-to-service communication now takes place through proxies. The proxy intercepts the request sent from service A, load balances it to the destination proxy, and from there it reaches service B. The proxy is like an assistant that handles all of the communication on behalf of the microservice. This is also known as the sidecar pattern. As developers, we no longer have to deal with any aspect of the communication. The following are a few operations that can be handled by the proxy; a minimal sketch of the interception idea follows the list.

a. Dynamic Service Discovery
b. Load Balancing
c. TLS termination
d. gRPC Proxying
e. Circuit Breakers
f. Health Checks
g. Traffic Split
h. Fault Injection
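
To make the sidecar idea concrete, here is a minimal sketch in Go of a proxy that sits next to a service and forwards every incoming request to it. The ports and the upstream address are assumptions for illustration; a real sidecar such as Envoy does far more (all of the operations listed above), but the interception principle is the same.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Assumption: the local service instance listens on 127.0.0.1:8080;
	// the sidecar listens on :15001 and forwards everything to it.
	upstream, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(upstream)

	// Every request is intercepted here; this is where a real sidecar
	// would add retries, mTLS, metrics, tracing headers, and so on.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("sidecar: %s %s", r.Method, r.URL.Path)
		proxy.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":15001", handler))
}
```

The application behind the proxy is completely unaware of it, which is what makes the pattern work across stacks and languages.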

As mentioned above, requests now flow through the proxies to the corresponding services, and this path is called the data plane. We also have a policy layer that controls the behavior of these proxies, and this part is called the control plane. The control plane generates and pushes the relevant configuration to all proxies. The main advantage is that we can dynamically reconfigure the proxies based on the feedback we get from the entire ecosystem. Yeah, that's it! This is the fundamental concept of the service mesh.
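
As a rough illustration of this split (and not any particular mesh's API), the sketch below shows a data-plane proxy that periodically pulls a routing policy from a hypothetical control-plane endpoint and swaps it in without restarting. Real control planes, Istio's for example, push configuration to the Envoy sidecars over the xDS APIs instead of being polled like this; the URL, the config shape, and the interval here are all assumptions.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"sync/atomic"
	"time"
)

// RouteConfig is a hypothetical, simplified stand-in for what a control
// plane would hand to its proxies (routes, clusters, policies, ...).
type RouteConfig struct {
	Upstream   string `json:"upstream"`    // where to send traffic
	MaxRetries int    `json:"max_retries"` // resiliency policy
}

var current atomic.Value // holds the latest *RouteConfig

// pollControlPlane periodically fetches config from an assumed
// control-plane endpoint and swaps it in while the proxy keeps running.
func pollControlPlane(endpoint string) {
	for {
		resp, err := http.Get(endpoint)
		if err == nil {
			var cfg RouteConfig
			if json.NewDecoder(resp.Body).Decode(&cfg) == nil {
				current.Store(&cfg)
				log.Printf("reconfigured: %+v", cfg)
			}
			resp.Body.Close()
		}
		time.Sleep(10 * time.Second)
	}
}

func main() {
	// Bootstrap config until the control plane answers.
	current.Store(&RouteConfig{Upstream: "http://127.0.0.1:8080", MaxRetries: 1})
	go pollControlPlane("http://control-plane.local/config") // assumed URL

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		cfg := current.Load().(*RouteConfig)
		// The data-plane decision (where and how to forward) is driven
		// entirely by whatever the control plane last handed us.
		log.Printf("would forward %s to %s (retries=%d)", r.URL.Path, cfg.Upstream, cfg.MaxRetries)
	})
	log.Fatal(http.ListenAndServe(":15001", nil))
}
```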

Why is the service mesh such a powerful concept? Because it allows us to have intent-driven operations. Let's see how a service mesh works according to our intent.

a. Traffic Control - We can enforce routing rules and policies through the proxies (see the sketch after this list).
b. Resiliency - Since the communication now happens between proxies, the mesh can detect when a call fails and retry it on its own, without our intervention.
c. Load Balancing - In a service mesh, everybody gets a load balancer (the proxy acts as an L7 load balancer), i.e., every microservice-to-microservice interaction can now be load balanced. Load balancing is no longer just an edge concern.
d. Security - The environment can now transparently encrypt all service-to-service communication and enforce mutual TLS.
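
As a small illustration of item (a), here is a sketch, again in Go rather than any real mesh's configuration language, of the kind of weighted routing a proxy could apply: send roughly 90% of the traffic to v1 of a service and 10% to a canary v2. The service names and weights are made up for this example.

```go
package main

import (
	"fmt"
	"math/rand"
)

// weightedPick returns an upstream according to the configured weights,
// the way a mesh proxy might implement a canary traffic split.
func weightedPick(weights map[string]int) string {
	total := 0
	for _, w := range weights {
		total += w
	}
	n := rand.Intn(total)
	for upstream, w := range weights {
		if n < w {
			return upstream
		}
		n -= w
	}
	return "" // unreachable if weights are non-empty
}

func main() {
	// Assumed policy: 90% of requests go to v1, 10% to the v2 canary.
	weights := map[string]int{"service-b-v1": 90, "service-b-v2": 10}

	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		counts[weightedPick(weights)]++
	}
	fmt.Println(counts) // roughly 900 / 100
}
```

In a real mesh, the same intent is expressed declaratively (for example, in an Istio VirtualService) and pushed by the control plane, so shifting traffic is a configuration change rather than a code change.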

All of this can be done with policies and rules inside the service mesh, without developers having to deal with it. We can say that a service mesh is like AOP for microservices. Istio, Linkerd, and Consul Connect are a few well-known implementations.

Since the service mesh addresses real use cases and challenges, organisations should plan for a service-meshed future. Beyond the capabilities mentioned above, service meshes can also help with code deployments, testing, operating our environment, and so on. They enable policy-driven development, which makes development easier.

Obviously, the mesh is not perfect; it has its pitfalls. With great power comes great responsibility. Meshes are easy to configure, but we should be careful when changing configuration, and better workflows and change management help here. Issues in the mesh itself, such as a memory leak, can affect the entire system. Security is compromised if an attacker gets through to the mesh. And since there are many more network hops now, it can also result in higher latencies.

References

a. AWS re:Invent 2018: Fully Realizing the Microservices Vision with Service Mesh (DEV312-S)
b. Making Microservices Micro with Istio Service Mesh, by Ray Tsang
