Welcome To The Era of Micro-services
Introduction to Micro-services Architecture
At one of my recent talks, at GDG Ahmedabad DevFest 2017, I was asked a question that intrigued me: “How would you explain micro-service architecture to a layperson?” After thinking a lot about it after the talk, I came up with an analogy that couldn’t be any simpler. Imagine if our body had only one organ instead of a complex multi-organ system. What would happen? How would things work for us? Trust me, it would have made the lives of doctors less complex! But how would one single organ run all the essential physiological processes (respiration, circulation, digestion, the nervous system) required for our survival? For a moment let’s accept it as a hypothesis and try to figure out how it would work and what the pitfalls of this hypothesis would be.
Let’s imagine there is one large organ in the centre of our body which has dedicated regions responsible for executing the various physiological processes: one region dedicated to the circulatory system, one to the nervous system, one to the digestive system, one to the respiratory system, and so on.
How will it scale up its operations in different situations? For example, when a person is running, more blood circulation is required to maintain the right oxygen level in the muscle tissues. But is the digestive system supposed to scale up too? I guess not! Yet because there is only one single organ, every region will face equal wear and tear.
How will it isolate failure? Let’s again try to answer this question with an example. A person gets an infection in the circulatory system, but as there is one single organ, all the other systems are affected as well. Even a minuscule problem can trigger a failure and disrupt physiological processes essential for living.
We can easily see that our hypothesis cannot stand up to the questions we are raising. Our hypothesis of a one-organ system works very well for single-celled organisms. In a very similar fashion, a monolithic application architecture can only work for less complex applications. For complex applications that need to scale up, deliver continuous upgrades and recover from failures quickly, micro-services architecture is the need of the hour.
What exactly is a micro-service?
Let’s first try to define the word service in micro-service architecture.
“Service can be an independently existing logical part of the business domain you are working in”
Martin Fowler
Let’s try to explore this with the example of a food ordering app. A food ordering app (Zomato, Swiggy, Foodpanda) connects a user to their nearby restaurants. The two basic interactions enabled by a food ordering app are home delivery of food to the user and reservation of tables at restaurants by the user. In a typical food ordering app, three clearly visible logical business units are user management, restaurant management and order management. So they can be separated out as independent services.
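To make this concrete, here is a minimal sketch (the hosts, ports, endpoints and fields are hypothetical, not taken from any real food ordering app) of how an order management service could talk to the other two services only over HTTP, with each service owning its own code and data:

```python
# A minimal sketch of the order management service calling its sibling
# services over HTTP. Hosts, ports and payload fields are hypothetical; in a
# real system each service lives in its own repository and deployment, and
# service discovery replaces the hard-coded addresses.
import requests

USER_SERVICE = "http://user-service:8001"              # user management
RESTAURANT_SERVICE = "http://restaurant-service:8002"  # restaurant management


def place_order(user_id: int, restaurant_id: int, items: list) -> dict:
    # Order management never reaches into the other services' databases;
    # it only consumes their public APIs.
    user = requests.get(f"{USER_SERVICE}/users/{user_id}", timeout=2).json()
    restaurant = requests.get(
        f"{RESTAURANT_SERVICE}/restaurants/{restaurant_id}", timeout=2
    ).json()

    order = {
        "user": user["id"],
        "restaurant": restaurant["id"],
        "items": items,
        "status": "PLACED",
    }
    # ...persist the order in order management's own datastore...
    return order
```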
Now that we have defined services successfully, let’s define micro-services.
“Small Autonomous services that work together, modelled around a business domain and can be independently deployed are defined as micro-services”
Sam Newman
Let’s try to understand Sam Newman’s definition with an example. Restaurants have two basic big blocks of information: the menu listing (information about the delicious food they serve) and the reviews & ratings from their regular customers. Both of them can exist independently. Neither of the two is tightly coupled with the other, nor with the restaurant management service. So we can further normalise the restaurant service and spin off two more services: a menu listing service and a reviews & ratings service.
“When you further normalise bigger services into smaller independent services, what you get is termed a micro-service. So micro-service architecture is nothing but a subset of service-oriented architecture.”
Shubham Pachori
So now that we have defined the word micro-service successfully, let’s dig into the guiding principles of micro-services architecture.
What are the guiding principles of micro-service architecture?
Let’s quickly list the principles laid out by Sam Newman in one of his talks, and then we will explore them in detail:
1. Modelled Around Business Domain
2. Culture of Automation
3. Hide Implementation Details
4. Decentralise All The Things
5. Deploy Independently
6. Consumer First
7. Isolate Failure
8. Highly Observable
We have thoroughly discussed the guiding principle “Modelled Around Business Domain” in the previous section, so let’s try to shed some light on the remaining principles.
Culture of Automation: To scale up development and deployment in a micro-services architecture, a culture of automation is really important. When we talk about a culture of automation, these are the three key points we should concentrate on:
- Infrastructure Automation: Can I write one line of code and provision one or more EC2 instances with Docker on them? Let’s go one step further: can I have an algorithm-driven load balancer running which automatically provisions new EC2 instances, and Docker containers running inside them, according to the incoming request traffic? (See the sketch after this list.)
- Automated Testing: Whenever I push something into the deployment branch of my GitHub repo, automated unit tests and integration tests must run before the newer version of the service gets deployed to production.
- Continuous Delivery: Are we treating every GitHub commit or check-in as a release candidate or not? Does my team really have the rigour around continuous delivery and integration or not?
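To give a taste of what infrastructure automation can look like in code, here is a minimal sketch using boto3; the region, AMI ID and instance type are placeholders, and in practice a declarative tool such as Terraform or CloudFormation would usually manage this instead:

```python
# A minimal sketch of provisioning an EC2 instance programmatically with boto3.
# The region, AMI ID and instance type are placeholders to be replaced with
# your own values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0abcdef1234567890",  # placeholder AMI, e.g. one with Docker baked in
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Provisioned instance {instance_id}")
```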
Hide Implementation Details: This is one of the trickiest things to ensure in micro-services architecture. The implementation details of one service must not be exposed to the others. Let’s try to explore this with our earlier example of the food ordering app. The restaurant management, menu listing and review & rating services all want restaurant data, and the easiest way to provide it seems to be sharing a database.
But that creates a problem: you are also sharing data which is not required by the other services. It is really important to understand in what context the shared data is required by each of the other services. For example, menu listing doesn’t care about the restaurant profile; it just needs to know which restaurant a menu belongs to. The other really important aspect is that each service has a different kind of database requirement. Restaurant management requires a geospatial profile of the restaurant in order to match nearby users and show them the estimated delivery time based on distance. Similarly, the rating and review system wants a denormalised NoSQL database which can be consumed by a recommendation model to recommend restaurants to a user. So the ideal implementation gives each service its own datastore, exposed to the others only through its API.
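As a minimal sketch of what owning your data can look like for the menu listing service (the store, host and fields below are hypothetical): it keeps only the restaurant ID it needs in its own datastore and asks the restaurant service’s API for everything else, instead of reading a shared database.

```python
# Sketch: the menu listing service owns its own menu store and keeps only the
# restaurant ID it needs. Anything else about a restaurant is fetched through
# the restaurant service's public API, never from a shared database.
import requests

RESTAURANT_SERVICE = "http://restaurant-service:8002"  # hypothetical address

# Menu listing's own datastore (in-memory here for brevity), keyed by item ID.
menu_store = {
    101: {"restaurant_id": 42, "name": "Masala Dosa", "price": 120},
    102: {"restaurant_id": 42, "name": "Filter Coffee", "price": 40},
}


def menu_item_with_restaurant_name(item_id: int) -> dict:
    item = menu_store[item_id]
    # The only coupling to restaurant management is its API contract.
    restaurant = requests.get(
        f"{RESTAURANT_SERVICE}/restaurants/{item['restaurant_id']}", timeout=2
    ).json()
    return {**item, "restaurant_name": restaurant["name"]}
```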
Decentralise All The Things: The main motto behind the adoption of micro-service architecture is speed: speed in scaling up and delivering your product to your customers. And speed comes from autonomy. What do I mean by autonomy? Freedom in designing the architecture of a service to deliver its designated features. Let’s try to address this with an example. The review and rating service wants its data to be consumed by a machine learning model, so the team wants to stream data to an ML datastore using Apache Kafka and use the TensorFlow library to train the models. The best support for these is available in Scala and Python, so they will go ahead with one of the two without worrying about the architecture and framework choices of the other services. What they will make sure of is that the basic API contract followed by every other service is also met by them.
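For instance, the review and rating team could stream review events to their ML datastore with a few lines of Python on top of kafka-python; the broker address, topic name and event fields below are made up for illustration:

```python
# Sketch: the review and rating service streaming review events to Kafka so a
# downstream ML pipeline can consume them. Broker, topic and event shape are
# illustrative only.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

review_event = {"restaurant_id": 42, "user_id": 7, "rating": 5, "text": "Great dosa!"}
producer.send("restaurant-reviews", review_event)
producer.flush()  # make sure the event actually leaves the local buffer
```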
Deploy Independently: Each and every micro-service must be deployable whenever required, without any dependencies on other services. So what should my deployment strategy be? It starts with a very simple question: how is your micro-service mapped to your underlying infrastructure, and how many micro-services run per host? A host is an isolated operating system loaded with all the libraries and dependencies the micro-service needs to run; it can be a physical machine, a virtual machine or a container. So, coming back to the question of how many services per host: in the beginning you may have multiple services running on one host, but over time this becomes difficult, as the resources and dependencies of each service change rapidly, and one dependency issue can crash the host and make every micro-service on it unavailable. So the ideal approach is to have one micro-service per host.
The other important question you need to answer is: how many consumers of your micro-service are you going to break with your new deployment? To answer this, you need to define the response expectations for each request a consumer is going to make to your micro-service. These request-response expectations must be captured as test cases, and each time you want to deploy your micro-service the tests must run to determine whether all the expectations are still fulfilled. But there can be cases where you need to introduce breaking changes, and you cannot force all your consumers to upgrade to a newer version of your API at once. The solution here is version management of APIs: you may have to support multiple versions of your APIs consumed by different sets of consumers. GraphQL and Relay are one great combination for eliminating the problems around version management of APIs, which I will discuss in one of my upcoming articles.
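One lightweight way to encode those request-response expectations is a consumer contract test that runs before every deployment. Here is a hedged sketch using pytest and requests; the endpoint and expected fields are hypothetical, and tools such as Pact formalise this idea properly:

```python
# Sketch of a consumer contract test for the order management API. The base
# URL, endpoint and expected fields are hypothetical.
import requests

ORDER_SERVICE = "http://order-service:8003"


def test_order_response_matches_consumer_contract():
    response = requests.get(f"{ORDER_SERVICE}/orders/1", timeout=2)

    assert response.status_code == 200
    body = response.json()
    # The mobile client relies on exactly these fields; removing or renaming
    # any of them is a breaking change and must ship as a new API version.
    for field in ("id", "status", "restaurant_id", "items", "total"):
        assert field in body
```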
Consumer First: In the micro-services world you have multiple consumers: your frontend and mobile client teams and the other micro-service teams. So the APIs you implement must fulfil everyone’s requirements, and secondly there should be an interface where all your APIs are documented and consumers can test them and see sample responses. One such tool available in the market is Swagger.
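The documentation does not have to be written by hand. As one possible approach (the endpoint and model below are only a sketch), a framework like FastAPI derives an OpenAPI spec from the code and serves an interactive Swagger UI at /docs where consumers can try the API:

```python
# Sketch: FastAPI generates an OpenAPI spec from the code and serves an
# interactive Swagger UI at /docs. The endpoint and response model are
# illustrative only.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Restaurant Management Service")


class Restaurant(BaseModel):
    id: int
    name: str
    cuisine: str


@app.get("/restaurants/{restaurant_id}", response_model=Restaurant)
def get_restaurant(restaurant_id: int) -> Restaurant:
    # Placeholder lookup; a real service would query its own datastore.
    return Restaurant(id=restaurant_id, name="Dosa Corner", cuisine="South Indian")
```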
Isolate Failure: We all know machines and networks have boundaries; they have a failure rate. With micro-services architecture you are actually running your application on multiple machines, so you have effectively expanded the surface area of failure. What if a client request coming from mobile requires data from three services and one of them is down? Will the client’s request fail? Let’s discuss one more scenario: the order management service has to call the billing, user and restaurant services while placing an order. The user management service is slow and all the calls to it are timing out, and for the whole time each call spends timing out, a thread of the order management server is blocked. Imagine traffic of thousands of order placement requests coming in. What will happen? A whole system shutdown is almost certain in this case. What is the solution? There is no clear, straightforward answer, but two of the most commonly used mitigation strategies are API gateways and circuit breakers. In upcoming articles I will discuss them in depth.
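To give a flavour of the circuit breaker idea, here is a deliberately simplified sketch (production systems would use a library such as pybreaker or resilience4j): after a few consecutive failures the breaker opens and fails fast instead of letting every request block on a service that is timing out.

```python
# A deliberately simplified circuit breaker: after `max_failures` consecutive
# errors it "opens" and fails fast for `reset_after` seconds, instead of
# letting every caller block on a service that keeps timing out.
import time

import requests


class CircuitBreaker:
    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = 0.0

    def call(self, url: str, timeout: float = 2.0) -> dict:
        if self.failures >= self.max_failures:
            if time.time() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.failures = 0  # half-open: let one trial request through

        try:
            response = requests.get(url, timeout=timeout)
            response.raise_for_status()
        except requests.RequestException:
            self.failures += 1
            self.opened_at = time.time()
            raise
        self.failures = 0
        return response.json()


# Hypothetical usage inside order management:
user_service = CircuitBreaker()
# user = user_service.call("http://user-service:8001/users/7")
```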
Highly Observable: When you are running multiple micro-services in production, monitoring error logs, failure rates and slow responses becomes extremely crucial, and logging onto each machine to read its server log is not a feasible option. There are multiple tools available (a small structured-logging sketch follows the list):
- For getting logs and getting access to individual EC2 instances, we have Rancher.
- For aggregating error logs coming in from different services, we have Sentry.
- For monitoring all the requests and their response times, we have New Relic.
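Whichever tools you pick, they become far more useful when every service emits structured logs that carry a service name and request ID. A minimal sketch using only the standard library (the service name and request ID below are illustrative):

```python
# Minimal structured (JSON) logging with the standard library, so a central
# tool can aggregate and filter logs by service name and request ID.
import json
import logging


class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "service": "order-management",  # illustrative service name
            "message": record.getMessage(),
            "request_id": getattr(record, "request_id", None),
        })


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("order-management")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order placed", extra={"request_id": "req-123"})
```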
Keeping in mind all the principles discussed above, you can easily define the outline of the micro-services architecture for your organisation. In upcoming articles I will start addressing the technical challenges and how to combat them.
References:
- Sam Newman’s talk on micro-services architecture: https://www.youtube.com/watch?v=PFQnNFe27kU
- Martin Fowler’s talk on micro-services architecture: https://www.youtube.com/watch?v=wgdBVIX9ifA
- Chris Richardson’s introductory article: https://www.nginx.com/blog/introduction-to-microservices/