A simple, scalable architecture for an Event & Notification engine

Gourav Suri
Credit Saison (India)
4 min read · May 31, 2020

Introduction

Credit Saison (India) provides capital to corporate, MSME and retail consumers. The lending process involves multiple stages, from loan origination through to disbursement. Each stage is handled by a different micro-service that performs the tasks relevant to it. In such a setup, understanding which stage a loan entity is in becomes critical for efficient processing. So, how do we know which stage a loan entity is in?

Events to the rescue…

Events are significant occurrences that are triggered when a loan application moves from one stage to another. They signal what tasks may need to be performed downstream. An example of an event in our use case is a loan applicant completing a regulatory check with the desired outcome. We have multiple events either preceding or following this one.

How & Where to store these events?

While there are many ways to accomplish this, our micro-service architecture required us to send events to a centralised location. We push all our events to a single queue, which we call the ‘Events Queue’. Every service can thus act as a producer of these events.

Interesting Fact: The queue used here is FIFO (First In, First Out) in nature, so that the sequence of stages is preserved when the events are consumed at the next level.

Now here comes another challenge: the piece of code that pushes a loan application’s data points to the queue would be replicated across all services. We wanted a consistent method that every service could inherit to push out events, and a single source of code to give us the flexibility to grow our use cases.

To overcome this, we created a custom Java library, published as an artifact, to emit these events. Think of this artifact as being similar to adding a Google Analytics snippet to your web pages.
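To give a feel for it, here is a minimal sketch of what such a shared emitter could look like, assuming the Events Queue is an SQS FIFO queue and the AWS SDK for Java v2; the class name, method and queue URL are illustrative, not our actual library:

```java
import java.util.UUID;

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

// Hypothetical shared emitter that each micro-service pulls in as a dependency.
public class EventEmitter {

    private final SqsClient sqs = SqsClient.create();
    private final String eventsQueueUrl; // URL of the FIFO 'Events Queue'

    public EventEmitter(String eventsQueueUrl) {
        this.eventsQueueUrl = eventsQueueUrl;
    }

    // Emit one event; eventJson carries the loan application's data points.
    public void emit(String loanId, String eventJson) {
        sqs.sendMessage(SendMessageRequest.builder()
                .queueUrl(eventsQueueUrl)
                .messageBody(eventJson)
                // FIFO queues order messages per group; grouping by loan ID
                // keeps each loan's stage events in sequence.
                .messageGroupId(loanId)
                // FIFO queues need a deduplication ID unless content-based
                // deduplication is enabled on the queue itself.
                .messageDeduplicationId(UUID.randomUUID().toString())
                .build());
    }
}
```

Each producer service then depends on this artifact and calls emit(...) instead of carrying its own queue-handling code.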

We have another micro-service, the events service, designed specifically to wrap all our event information. It is responsible for consuming all events from the Events Queue and pushing them into DynamoDB and Elasticsearch.
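A rough sketch of that consumer, assuming SQS, the AWS SDK for Java v2 and the Elasticsearch 7.x high-level REST client (table, index and host names are illustrative):

```java
import java.util.Map;
import java.util.UUID;

import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.*;

public class EventsConsumer {

    private final SqsClient sqs = SqsClient.create();
    private final DynamoDbClient dynamo = DynamoDbClient.create();
    private final RestHighLevelClient es = new RestHighLevelClient(
            RestClient.builder(new HttpHost("localhost", 9200, "http")));

    public void poll(String queueUrl) throws Exception {
        // Long-poll the Events Queue for a batch of messages.
        ReceiveMessageResponse resp = sqs.receiveMessage(ReceiveMessageRequest.builder()
                .queueUrl(queueUrl).maxNumberOfMessages(10).waitTimeSeconds(20).build());

        for (Message msg : resp.messages()) {
            store(msg.body());
            // Delete only after both writes succeed, so failures get retried.
            sqs.deleteMessage(DeleteMessageRequest.builder()
                    .queueUrl(queueUrl).receiptHandle(msg.receiptHandle()).build());
        }
    }

    private void store(String eventJson) throws Exception {
        // 1. Persist the raw event in DynamoDB (fast writes).
        dynamo.putItem(PutItemRequest.builder()
                .tableName("loan_events")
                .item(Map.of(
                        "eventId", AttributeValue.builder().s(UUID.randomUUID().toString()).build(),
                        "payload", AttributeValue.builder().s(eventJson).build()))
                .build());

        // 2. Index the same event in Elasticsearch for search.
        es.index(new IndexRequest("loan_events").source(eventJson, XContentType.JSON),
                RequestOptions.DEFAULT);
    }
}
```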

Bonus Point: By introducing a queue between event producers and their consumers, we made inter-service communication asynchronous, so every service can work at its own pace.

Why DynamoDB & Elastic Search?

We did a DB performance comparison between DynamoDB, PostgreSQL, InfluxDB and Elasticsearch based on the following metrics.

Storage speed was measured by adding 100 million records.

To conclude, we use DynamoDB to store all the events, given its storage speed, and Elasticsearch to perform indexed search over them.
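As an example of the search side, fetching all events recorded for a single loan is a one-query lookup in Elasticsearch. A sketch with the 7.x high-level REST client, with illustrative index and field names:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class EventSearch {
    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient es = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
            // Find all events recorded for one loan application.
            SearchRequest request = new SearchRequest("loan_events")
                    .source(new SearchSourceBuilder()
                            .query(QueryBuilders.termQuery("loanId", "LOAN-12345")));
            SearchResponse response = es.search(request, RequestOptions.DEFAULT);
            response.getHits().forEach(hit -> System.out.println(hit.getSourceAsString()));
        }
    }
}
```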

Notification into the picture!

Once these events are stored in the desired order, the next step is to inform the relevant service or user of their intent. For example, we notify our business users, via email or a dashboard, of the stage a loan application is in, so they can proceed with the desired action.

As part of the events micro-service mentioned above, the event information is massaged before being sent to a new queue, the Notification Queue.
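The massaging step could look roughly like this; the NotificationMessage shape and the lookup helper are hypothetical, shown only to illustrate the reshaping:

```java
// Hypothetical payload the Notification Queue expects.
class NotificationMessage {
    String channel;     // e.g. "EMAIL"
    String recipient;   // target address resolved from the loan record
    String subject;
    String body;
}

public class EventMassager {

    // Turn a raw stage event into a channel-ready notification message.
    public NotificationMessage massage(String loanId, String stage) {
        NotificationMessage msg = new NotificationMessage();
        msg.channel = "EMAIL";
        msg.recipient = lookupBusinessUser(loanId);
        msg.subject = "Loan " + loanId + " update";
        msg.body = "Loan application " + loanId + " has reached stage: " + stage;
        return msg;
    }

    private String lookupBusinessUser(String loanId) {
        // Illustrative stub; in practice this would come from the loan record.
        return "ops-team@example.com";
    }
}
```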

Now, there is another service, the Notification service, which is responsible for sending notifications through different channels: email, SMS, push notifications, Slack, etc. This service consumes all messages from the Notification Queue and hands them to SES (AWS Simple Email Service), which sends an email notification with the loan details to the target address.
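A minimal sketch of that email leg, assuming the AWS SDK for Java v2 SES client; the sender address is a placeholder and would need to be a verified SES identity:

```java
import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.Body;
import software.amazon.awssdk.services.ses.model.Content;
import software.amazon.awssdk.services.ses.model.Destination;
import software.amazon.awssdk.services.ses.model.Message;
import software.amazon.awssdk.services.ses.model.SendEmailRequest;

public class EmailChannel {

    private final SesClient ses = SesClient.create();

    // Deliver one message consumed from the Notification Queue via SES.
    public void send(String to, String subject, String body) {
        ses.sendEmail(SendEmailRequest.builder()
                .source("notifications@example.com") // verified SES sender
                .destination(Destination.builder().toAddresses(to).build())
                .message(Message.builder()
                        .subject(Content.builder().data(subject).build())
                        .body(Body.builder()
                                .text(Content.builder().data(body).build())
                                .build())
                        .build())
                .build());
    }
}
```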

Quick Architecture Recap

Let us know your thoughts. Happy Architecting :)
