Microservices: Deep Dive Part II
Prerequisite:
Before reading this, do read Part I: https://medium.com/system-design-concepts/microservice-deep-dive-c100f641db73
INTER MICROSERVICES COMMUNICATION:
How does one microservice talk to another?
Let’s say we have three microservices: order placing, merchant, and billing. While placing an order (as we can see in the above image, in step 2), we need to generate a bill, so the order-placing microservice calls the billing microservice over an HTTP/RPC call, then updates the order table, and finally informs the merchant by calling the merchant microservice. Here we can see synchronous communication happening between these microservices.
Here the client sends a request to the API gateway, which forwards it to microservice-1, which in turn calls microservice-2 and returns the response. This is a synchronous call. The advantage of this type of call is that it’s simple and real-time. The disadvantage is service availability: when microservice-1 calls microservice-2, microservice-1 expects microservice-2 to be available. If microservice-2 is down, we won’t get any response; we need to handle such situations, and it violates the main purpose of moving to a microservice architecture, i.e. we expect microservices to be independent. This behavior results in tight coupling. Another disadvantage is increased response time: if microservice-1 calls microservice-2, which calls microservice-3, which calls microservice-4, and so on, latency is added at every hop, resulting in a lot of overall latency.
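To make the trade-offs concrete, here is a minimal sketch in Python, with in-process functions standing in for the real HTTP/RPC endpoints (the service names, payloads, and latencies are all made up for illustration). It shows how latencies add up hop by hop and how a downstream failure propagates straight back to the caller.

```python
import time

# Hypothetical in-process stand-ins for the downstream services;
# in reality each would be a separate HTTP/RPC endpoint.
def billing_service(order_id):
    time.sleep(0.05)  # simulated network + processing latency
    return {"order_id": order_id, "bill": 100}

def merchant_service(order_id):
    time.sleep(0.05)
    return {"order_id": order_id, "notified": True}

def place_order(order_id, billing=billing_service, merchant=merchant_service):
    """Synchronous flow: each downstream call blocks, and any
    downstream failure propagates straight back to the caller."""
    start = time.monotonic()
    bill = billing(order_id)      # blocks until billing responds
    notice = merchant(order_id)   # blocks until merchant responds
    elapsed = time.monotonic() - start
    return {"bill": bill, "notice": notice, "elapsed": elapsed}

result = place_order(42)
print(result["elapsed"] >= 0.1)  # → True: per-hop latencies add up
```

If `billing_service` raised an exception (the service being "down"), `place_order` would fail outright, which is exactly the tight coupling described above.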
Let’s again consider the same situation as above, but this time, instead of making a synchronous call to microservice-2 and waiting for a response, microservice-1 puts a message in the message queue of microservice-2. It is often better to have an asynchronous way of communication: now the services are not tightly coupled. Even if microservice-2 goes down, microservice-1 can still send messages to the queue, and microservice-2 loads the messages from the queue when it comes back up and acts on them. The advantages are fast API calls (we are not waiting for a response), decoupled services (microservice-1 is not affected by the availability of microservice-2), and no need for service discovery (microservice-1 doesn’t care about the network address of microservice-2; all it needs to know is the queue). Disadvantages include a more complicated design, processing latency (if the queue gets overloaded and the service is not scaled), and monitoring costs.
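The queue-based flow can be sketched with Python’s in-process `queue.Queue` standing in for a real message broker such as RabbitMQ or SQS (the event shape and function names are made up). Note how ms1 returns immediately, and ms2 catches up on queued messages whenever it comes back.

```python
import queue

# In-process queue as a stand-in for a real message broker.
order_events = queue.Queue()

def ms1_place_order(order_id):
    # ms1 only needs to know the queue, not ms2's network address;
    # it returns immediately without waiting for ms2.
    order_events.put({"event": "order_placed", "order_id": order_id})
    return "accepted"

def ms2_drain():
    # ms2 may be down while messages accumulate; when it comes back
    # up it drains everything that queued up in the meantime.
    processed = []
    while not order_events.empty():
        processed.append(order_events.get())
    return processed

ms1_place_order(1)       # ms2 is "down": messages simply queue up
ms1_place_order(2)
print(len(ms2_drain()))  # → 2: ms2 catches up once it recovers
```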
CIRCUIT BREAKING:
Consider a situation where the API gateway calls ms1, which sends a request to ms2, which then forwards the request to ms3. Now suppose ms3 dies or gets overloaded, and there is a mechanism at ms2’s end to retry a certain number of times (let’s say 5). Even though we know ms3 is unavailable, ms2 keeps overloading it with requests, which is bad. Even if there is no retry mechanism on ms2, the client may keep sending requests that hit ms3, performing the same operation again and again and not letting ms3 recover.
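A common first mitigation, before full circuit breaking, is to cap the number of retries and back off between them so a struggling service isn’t hammered with immediate retries. Here is a minimal sketch; the helper name and the flaky downstream are invented for illustration.

```python
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.01):
    """Retry a flaky call with capped, exponentially growing delays."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the configured number of tries
            time.sleep(base_delay * (2 ** attempt))  # 0.01, 0.02, 0.04, ...

# Hypothetical flaky downstream: fails twice, then succeeds.
calls = {"n": 0}
def flaky_ms3():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("ms3 overloaded")
    return "ok"

print(call_with_backoff(flaky_ms3))  # → ok (after two backed-off retries)
```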
To cope with this, we use circuit breaking.
When ms3 returns a response to ms2, the circuit breaker caches that response. So instead of sending an error message, ms2 will send the cached response, i.e. the last response from when the service was up and running. A circuit breaker can also do something else: instead of sending a cached response while ms3 is down, it can redirect the request to some other service until ms3 recovers. This is known as a fallback mechanism.
How does ms3 recover? By deploying a new instance, or by finishing all the messages in its queue.
What happens after the timeout? When a request arrives at ms2 after the timeout, it tries to connect to ms3 again. If ms3 gives back a response, the circuit breaker understands that ms3 has healed; it stops sending the cached response, and everything goes back to normal.
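The behavior described above (cache the last good response, trip the circuit after repeated failures, and probe again after a timeout) can be sketched as a small class. This is a toy illustration under made-up names, not a production-grade circuit breaker; libraries such as resilience4j or Hystrix implement the real thing.

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: serves the cached last-good response on
    failure, opens after repeated failures, probes after a timeout."""

    def __init__(self, fn, failure_threshold=3, reset_timeout=1.0):
        self.fn = fn
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None   # None means the circuit is closed
        self.last_good = None   # cached last successful response

    def call(self, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                return self.last_good  # circuit open: serve cached response
            # timeout elapsed: half-open, let one probe request through
        try:
            result = self.fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the circuit
            return self.last_good  # cached response instead of an error
        self.failures = 0
        self.opened_at = None      # probe succeeded: close the circuit
        self.last_good = result
        return result

# Hypothetical downstream service that we can take "down".
state = {"up": True}
def ms3(x):
    if not state["up"]:
        raise ConnectionError("ms3 is down")
    return x * 2

breaker = CircuitBreaker(ms3)
print(breaker.call(5))   # → 10
state["up"] = False
print(breaker.call(6))   # → 10 (cached response instead of an error)
```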
SERVICE MESH:
What challenges do we need to handle when microservices communicate?
- Load balancing: if our microservice has multiple instances running, how do we balance the load, i.e. which specific instance does the API gateway connect to?
- Service discovery: the service registry holds the latest network addresses of our microservices. How do we decide which one to connect to? Who decides that?
- Metrics: when we make calls from ms1 to ms2, we need insights such as request-response time, the total number of requests we are making, and more. We need a system that collects all these metrics.
- Retries: who makes sure that ms1 actually retries connecting to ms2? There should be one place where we can configure standard values for timeouts, retries, and circuit breaking as well.
That’s where a service mesh comes in.
With every microservice, we deploy a service-mesh proxy as well: if there are n microservices, then we need n proxies, each running alongside its microservice. This pattern is known as the sidecar pattern. The service mesh solves all the above-mentioned problems and also acts as a proxy. Whenever ms1 calls ms2, ms1 makes a (REST) call to its sidecar proxy, which calls ms2. So in our code, we don’t need to worry about load balancing, the service registry, or logging metrics when a request starts.
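As a rough illustration of what the sidecar takes off the application’s hands, here is a toy proxy in Python handling round-robin load balancing, retries, and metrics collection. The class and instance names are invented; a real mesh (for example Istio’s Envoy sidecar) does this at the network level, transparently to the application code.

```python
import time

class SidecarProxy:
    """Toy sidecar: picks an instance round-robin, retries on
    failure, and records metrics, so the app only calls the proxy."""

    def __init__(self, instances, max_retries=3):
        self.instances = instances  # discovered service instances
        self.max_retries = max_retries
        self.next_idx = 0           # round-robin cursor
        self.metrics = {"requests": 0, "failures": 0, "total_time": 0.0}

    def call(self, payload):
        last_err = None
        for _ in range(self.max_retries):
            # round-robin load balancing across known instances
            target = self.instances[self.next_idx % len(self.instances)]
            self.next_idx += 1
            self.metrics["requests"] += 1
            start = time.monotonic()
            try:
                result = target(payload)
                self.metrics["total_time"] += time.monotonic() - start
                return result
            except Exception as err:
                self.metrics["failures"] += 1
                last_err = err
        raise last_err  # all retries exhausted

# Hypothetical instances of ms2: one broken, one healthy.
def broken(payload):
    raise ConnectionError("instance down")

def healthy(payload):
    return f"ok:{payload}"

proxy = SidecarProxy([broken, healthy])
print(proxy.call("hello"))        # → ok:hello (retried on next instance)
print(proxy.metrics["failures"])  # → 1
```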
DEPLOYMENT STRATEGIES:
We want the following properties from our microservices when deploying them:
- Scalability and throughput
- Reliability and availability
- Isolation
- Resource limits
- Monitoring
- Cost-effectiveness
To achieve these properties, there are different deployment strategies widely used in industry.
1. Multiple services per host
As the name suggests, here multiple microservices run on one server. Usually, these services run as separate OS processes. It’s a very traditional approach.
Advantage:
- Efficient resource utilization: even if s1 is not getting much load, s2 may be getting lots of traffic and utilizing all the resources.
Disadvantages:
- Poor isolation: the performance of one service can affect another deployed on the same machine. One service may end up hogging resources and not let another service use them.
- No resource limits: we cannot set a resource limit on any individual service.
- Dependency conflicts: library version conflicts among services.
2. Service per VM/container
Here, as we can see, we have one service deployed on each VM/container. If we want to scale out in case of heavy traffic, we add one more instance of the service: we already have an image built for that particular microservice, so we take this image and deploy it to scale our system up.
Advantages:
- Isolation and security
- Autoscaling
- Fast startup (containers only)
3. Serverless
In this kind of deployment, all we need to care about is our code; we don’t need to manage servers at all. Every major cloud provider offers this functionality; AWS calls it Lambda. We log in to the console and configure our code in a Lambda function. The function can be triggered by events, such as someone uploading an image to an S3 bucket, or by a REST call (GET/PUT/POST).
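As a sketch, here is what a minimal Python Lambda handler reacting to an S3 upload event might look like. The event shape follows AWS’s documented S3 notification format, but the bucket and key names are made up, and locally we can simply invoke the handler with a fake event, since there is no server to manage.

```python
import json

def handler(event, context):
    """Minimal Lambda-style handler for S3 notification events."""
    processed = []
    for record in event.get("Records", []):
        # S3 notifications carry the bucket name and object key here.
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(f"processed s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps(processed)}

# Locally, invoke the handler with a hand-built fake event.
fake_event = {"Records": [{"s3": {"bucket": {"name": "demo-bucket"},
                                  "object": {"key": "images/cat.png"}}}]}
print(handler(fake_event, None)["statusCode"])  # → 200
```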
Advantages:
- Focus on code
- Pay as you go
- No need to worry about scaling
That was all about microservices.
Many more such blogs are on the way.
Meanwhile, stay tuned!!