To begin with, in an event-driven microservice architecture, services communicate with each other via event messages. When a business event occurs, a producer publishes it as a message, and other services consume it through event listeners.
Thus, the main benefits of event-driven systems are asynchronous behavior and loose coupling. For example, instead of requesting data on demand, a service consumes it via events before it is needed, which improves overall application performance. Moreover, keeping coupling loose is one of the key goals of a microservice environment.
Event-Driven Architecture as a Solution
Besides building new systems with event-driven structures, you can also use the approach as a remedy for already built, highly coupled environments. Let’s discuss how we can apply the event-driven approach as a solution.
A Basic REST Driven Approach
Even though your application may work properly, these are the downsides:
- Module 1 waits for the response
- Module 2 can be down
- Network delays decrease performance
- If the data is huge, it will be paginated, which means more REST calls
- Module 2 can be under heavy load and respond very late
When your system becomes less efficient because of these synchronous connections, you can apply the event-driven solution.
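The cost of the synchronous approach can be sketched in a few lines. This is a minimal simulation, not real Trendyol code: `module2_get_details` stands in for the downstream REST endpoint, and the `time.sleep` stands in for network plus processing delay.

```python
import time

def module2_get_details(item_id):
    # Simulated downstream API (Module 2); under load it responds slowly.
    time.sleep(0.1)  # stands in for network + processing delay
    return {"id": item_id, "detail": "item detail"}

def module1_sync_flow(item_ids):
    # Module 1 blocks on every call, so total latency grows linearly with
    # the number of items, and the whole flow fails if Module 2 is down.
    return [module2_get_details(i) for i in item_ids]

start = time.monotonic()
details = module1_sync_flow(range(5))
elapsed = time.monotonic() - start  # roughly 5 x 0.1s of pure waiting
```

With events, that waiting time disappears from the request path: the consumer already holds the data it needs by the time it is asked for it.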
Real-World Scenario: How Our Team Applied It
In the Trendyol/Marketplace team, we have a reporting application (GIB API) that transmits all sale reports to the government, so it has to fetch all the sale data from another API. In the beginning, the transaction volume was very low, so we quickly built the API with a REST approach.
In spite of the low amount of data at the beginning, it grew suddenly. Because Trendyol is a fast-growing company, we often face this problem. Fortunately, the solution is simple: converting to event messaging.
Solution 1: Converting to Event Messaging
As soon as we realized that the reports were not being generated efficiently, we applied the event-driven solution.
Above all, our plan was simple:
- Publish an event when a transaction item is created
- Fetch the related data when the event is received
- Convert it into a piece of the report string
- Persist it in the RDBMS (PostgreSQL)
- Query the data when generating the report
- Concatenate the string data and persist it as a file on disk
As a result, the needed transaction items are persisted in the Reporting API. As soon as report creation starts, it queries and concatenates the report data from the RDBMS.
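The steps above can be sketched as a tiny consumer-plus-reporter pair. This is an illustrative sketch only: an in-memory SQLite database stands in for the team's PostgreSQL instance, and the `report_lines` table and event fields are assumed names, not the real schema.

```python
import sqlite3

# In-memory SQLite stands in for the PostgreSQL RDBMS.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE report_lines (transaction_id TEXT, line TEXT)")

def on_transaction_item_created(event):
    # Event listener: convert the received data into a piece of the
    # report string and persist it in the RDBMS.
    line = f"{event['transaction_id']};{event['amount']}"
    db.execute("INSERT INTO report_lines VALUES (?, ?)",
               (event["transaction_id"], line))

def generate_report():
    # Report generation: query the persisted lines and concatenate them.
    rows = db.execute(
        "SELECT line FROM report_lines ORDER BY transaction_id").fetchall()
    return "\n".join(r[0] for r in rows)

on_transaction_item_created({"transaction_id": "t1", "amount": 100})
on_transaction_item_created({"transaction_id": "t2", "amount": 250})
report = generate_report()
```

The key point is that report generation never calls the transaction API: everything it needs was persisted as the events arrived.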
Success doesn’t Come Easy
While we were converting the sync process into an async architecture, the transaction API faced another performance issue. Because the reporting (GIB) API requested the detail every time a transaction item was created, the transaction API came under heavy load. The reason is that a transaction record is created for every item sold on Trendyol, so the huge number of transaction item detail requests choked the API.
Solution 2: Fat Event
To explain, a fat event is a message that contains the details along with the entity identifier.
After converting the message into a fat event, we no longer needed any additional REST calls. As a result, our architecture became a completely async, event-driven system.
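The difference between a thin and a fat event can be shown with two example payloads. The field names (`transactionItemId`, `sellerId`, and so on) are illustrative assumptions, not the real message schema.

```python
# A thin event carries only the identifier; the consumer must call back
# to the producer for the detail. A fat event embeds the detail, so no
# extra REST call is needed.
thin_event = {"transactionItemId": "t1"}
fat_event = {
    "transactionItemId": "t1",
    "sellerId": "s42",   # illustrative fields, not the real schema
    "amount": 100,
    "currency": "TRY",
}

rest_calls = []

def rest_fetch(item_id):
    # Stand-in for the extra REST round-trip to the transaction API.
    rest_calls.append(item_id)
    return {"amount": 100}

def consume(event):
    # Only fall back to REST when the payload lacks the detail.
    if "amount" not in event:
        event = {**event, **rest_fetch(event["transactionItemId"])}
    return event["amount"]

consume(thin_event)  # triggers one REST call
consume(fat_event)   # no REST call needed
```

Under load, that saved round-trip per event is exactly what stopped the detail requests from choking the transaction API.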
Solution 3: Outbox Pattern
Since we are talking about sale transactions, it is already clear how important this data is: it concerns financial business, so the calculations must be 100% correct.
To achieve this accuracy, we must be sure that our system does not lose any event messages. That is why we applied the outbox pattern.
What is the outbox pattern? Simply put, when your API publishes event messages, it doesn’t send them directly. Instead, the messages are persisted in a DB table, and a job sends the accumulated messages at predefined time intervals.
To explain the figure:
- The business module publishes an event
- The event service persists the message in the RDBMS
- The scheduler service triggers the “Send Event Messages” job
- The event service queries the accumulated event messages
- The event service publishes the messages via RabbitMQ
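The flow above can be sketched in a few lines. This is a minimal sketch under assumptions: in-memory SQLite stands in for the RDBMS, a plain list stands in for the RabbitMQ channel, and the `outbox` table layout is illustrative.

```python
import sqlite3

# Outbox sketch: events are written to a DB table first; a scheduled
# job later publishes the pending rows to the broker.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE outbox "
           "(id INTEGER PRIMARY KEY, payload TEXT, sent INTEGER DEFAULT 0)")

published = []  # stands in for the RabbitMQ channel

def publish_event(payload):
    # Steps 1-2: the business module "publishes"; the event service
    # persists the message in the RDBMS (inside the same transaction
    # as the business data, which is what guarantees no loss).
    db.execute("INSERT INTO outbox (payload) VALUES (?)", (payload,))

def send_event_messages_job():
    # Steps 3-5: the scheduler triggers this job; it queries the unsent
    # rows, pushes them to the broker, and marks them as sent.
    rows = db.execute("SELECT id, payload FROM outbox WHERE sent = 0").fetchall()
    for row_id, payload in rows:
        published.append(payload)  # real broker publish would go here
        db.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (row_id,))

publish_event('{"transactionItemId": "t1"}')
publish_event('{"transactionItemId": "t2"}')
send_event_messages_job()
send_event_messages_job()  # a second run finds nothing pending
```

Because the `sent` flag lives in the same database as the payload, a lost or unpublished message is always visible in the table and can be replayed.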
Let’s list the pros and cons of the outbox pattern.
Pros:
- Event messages are first persisted in the RDBMS; the ACID properties of transactions guarantee the persistence.
- When an event is lost, the message can be checked in the DB.
- A lost event can be recovered from the RDBMS efficiently.
Cons:
- Increased complexity.
- Delay in publishing events.
- To publish a basic event, at least two technologies are needed: a storage system and a message queueing protocol.
Benefits of the Event-Driven Microservice Architecture
- Loosely coupled structure
- Complete isolation of the microservices
- No synchronous REST calls
- Asynchronous event-driven functionality
- Performance gain
Among all of them, the most important benefit is the first one. Since the whole point of microservice architecture is to separate components, all of the units must be sufficiently separated (loosely coupled). Otherwise, the microservice architecture won’t work and your system will turn into a distributed monolith.
Single point of failure: if your RabbitMQ faces any issues in production, your whole system will fail with it.
To overcome failures:
- Construct your RabbitMQ as a cluster
- Declare your queues as durable
- Publish your messages as persistent
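The last two items map to two broker settings. Below is a hedged configuration sketch using the `pika` Python client (an assumed choice; the post does not name a client library), and it requires a running RabbitMQ broker, so it is a fragment rather than a runnable demo. The queue name is illustrative.

```python
import pika  # assumed RabbitMQ client; needs a live broker to run

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Durable queue: the queue definition is written to disk, so the queue
# is recreated after a broker restart.
channel.queue_declare(queue="transaction-events", durable=True)

# Persistent message: delivery_mode=2 asks the broker to write the
# message body to disk as well, so it survives a restart.
channel.basic_publish(
    exchange="",
    routing_key="transaction-events",
    body='{"transactionItemId": "t1"}',
    properties=pika.BasicProperties(delivery_mode=2),
)
connection.close()
```

Note that durability and persistence only help together: a persistent message in a non-durable queue is still lost when the queue disappears.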
As a result, you can quickly recover from any failure. When an error happens, the other instances in the cluster will take over the work and recreate the durable queues. Also, your persistent messages will be recovered from disk.
Duplicated event messages: an event publisher API can face trouble and resend the same messages. To resolve any duplication in the system, every consumer endpoint has to be idempotent: always check first whether your API has already processed the event.
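An idempotent consumer can be sketched as follows. This is a minimal in-memory illustration: a real service would keep the processed IDs in durable storage (e.g. the same RDBMS), and the `eventId` field is an assumed name.

```python
# Idempotent consumer sketch: remember processed event IDs and skip
# duplicate deliveries instead of redoing the work.
processed_ids = set()
side_effects = []  # stands in for the actual business work performed

def handle_event(event):
    event_id = event["eventId"]
    if event_id in processed_ids:
        return False  # duplicate delivery: acknowledge and do nothing
    processed_ids.add(event_id)
    side_effects.append(event["payload"])  # do the business work once
    return True

handle_event({"eventId": "e1", "payload": "create report line"})
handle_event({"eventId": "e1", "payload": "create report line"})  # resend
```

Even though the publisher sent the event twice, the business work runs exactly once, which is what keeps financial calculations correct.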
In microservice environments, we have to keep coupling low, and to do that we have to focus on the connections between modules. One way to do this is to use event-driven approaches.
Meanwhile, direct REST calls are expensive: the destination API can be out of service, and the source API has to wait until the response is received.
To create an event-driven microservice structure, we can simply set up a RabbitMQ cluster with persistent messages. All the needed events can be published by the responsible service, and all the other services can bind their consumers and do their work when the event messages arrive.
While building event-driven systems, we can consider fat events. Fat events provide all the needed data when the event occurs. As a result of this, the APIs don’t need any additional external calls.
On the other hand, events can be lost because of a system failure or a network breakdown. To be sure that all events are published and consumed successfully, the outbox pattern can be applied. Simply, the events are stored in a storage system instead of being published directly; then a configured job sends the events at definite time intervals. Lost messages can be recovered easily via the storage system.
To sum up, the microservice architecture is quite new, and all of us developers are learning it better every day. If we are not careful, our system can turn into a distributed monolith, which is the worst case: you gain none of the benefits while still dealing with all of the complexity. Above all, keeping coupling loose with event-driven architecture is one of the most important things.
If you want to learn more about the RabbitMQ please follow this link.
Finally, if you like the post, please like it and share it. Also, please don’t forget to read my other post about the Trendyol Scheduler Service.