Patterns for infinitely scaling & cost-effective serverless microservices — Part 1

Orchestra equipment on stage inside theatre house. Photo by Radek Grzybowski on Unsplash

At FundsCorner, we are on a mission to enable fast, accessible credit to India’s Kirana stores. This is the first in a series of articles explaining how we built a robust, infinitely scaling microservices architecture & achieved more than 70% cost savings by making it serverless.


Our goal with this series is to show you:

  • How to build microservices that scale infinitely & make them work together to fulfil a business objective.
  • A few fast ways of going serverless with microservices and achieving huge cost savings.

The building blocks:

Our architecture involves three main components that work together:

  • Microservices or simply “Services”
  • Workflow manager, which orchestrates the services
  • The pipe

I will use a food delivery app as a running example to describe these components. Below is a sample illustration of the flow of events involved in a food delivery app:

Sample flow of events in a food delivery app

Once the order is received, there are a number of tasks to be performed before the food finally gets delivered. Some of these tasks work in parallel & in the background.

As you can see, the complex nature of the workflow in the above diagram forces you to break the functionality into multiple chunks, working independently of each other — thus bringing in the need for microservices.

“Microservices are self-contained functionality units that would have been a module in the monolith world”

The microservices that we create must adhere to the following guidelines for them to be scalable & serverless:

  1. Idempotent: We must be able to run the same service under the same context multiple times without unintended side effects. Let’s say the service that initiates food preparation with the restaurant did not get a response back in time. It must be safe to invoke the same service again to perform this task.
  2. Event driven: The services must be instantiated & executed only when a particular event occurs.
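To make the idempotency guideline concrete, here is a minimal sketch of an event handler that records which events it has already processed, so a retried invocation becomes a safe no-op. All names (`handle_event`, `initiate_food_preparation`, the event payload shape) are hypothetical, and the in-memory set stands in for what would be a durable store in production:

```python
# Hypothetical sketch of an idempotent, event-driven handler.
# In production, `processed` would be a durable store (e.g. a database table
# with a uniqueness constraint), not an in-memory set.
processed = set()

def initiate_food_preparation(order_id):
    # Placeholder for the actual call to the restaurant's system.
    return f"preparation started for {order_id}"

def handle_event(event):
    """Process an event exactly once, even if it is delivered multiple times."""
    key = (event["event_id"], event["order_id"])
    if key in processed:
        return "already handled"  # a retry causes no side effects
    result = initiate_food_preparation(event["order_id"])
    processed.add(key)
    return result
```

Because the handler keys on the event itself, the workflow manager (or the platform) can freely redeliver an event whose response was lost, which is exactly the food-preparation retry scenario described above.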

In the above food delivery flow, just breaking the functionality into multiple independent microservices is not enough; some of the services are dependent on others to complete before they start their work. For example, if the food asked for is not available in the restaurant, there is no point in assigning a trip to the delivery captain — this brings in the need for an orchestrator.

Microservices require a common orchestrator to sequence the flow of events for fulfilling a business objective. We call this component the “Workflow Manager”.

A Workflow manager is a component that orchestrates services based on a Workflow definition (a.k.a. a blueprint).

  • A workflow definition is a declarative configuration where you specify various stages of a workflow, tasks that have to be orchestrated inside each stage & the sequence in which the tasks have to be accomplished.
  • The stages of a workflow fulfil a particular “part” of a business objective. The stages are strictly sequential, i.e. a stage must complete before the next stage starts. You may also think of a stage as a “Saga”, but that is for another article.
  • The tasks inside a stage can be “sequenced” as either independent or dependent tasks. A dependent task has an additional attribute called “parent tasks”: it can depend on one or more parent tasks that must complete before it can be scheduled.

That’s it! With the above basic pattern, you can model any complex real world workflow (be it food delivery or a loan application) into a workflow definition!
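As an illustration, the food delivery workflow might be captured in a blueprint like the one below. The stage names, task names, and the dict-based schema are all hypothetical — the actual definition format could just as well be YAML or JSON — but it shows the three ingredients: sequential stages, tasks within a stage, and `parent_tasks` for dependencies:

```python
# Hypothetical blueprint for the food delivery workflow.
# The schema (keys like "stages", "tasks", "parent_tasks") is illustrative.
food_delivery_blueprint = {
    "workflow": "food_delivery",
    "stages": [
        {
            "name": "order_fulfilment",
            "tasks": [
                # Independent task: can be scheduled as soon as the stage starts.
                {"name": "check_item_availability"},
                # Dependent tasks: wait for their parent tasks to complete.
                {"name": "initiate_food_preparation",
                 "parent_tasks": ["check_item_availability"]},
                {"name": "assign_delivery_captain",
                 "parent_tasks": ["check_item_availability"]},
            ],
        },
        {
            "name": "delivery",
            "tasks": [
                {"name": "pick_up_order"},
                {"name": "deliver_order", "parent_tasks": ["pick_up_order"]},
            ],
        },
    ],
}
```

Notice how the availability check gates both food preparation and captain assignment, which then run in parallel — matching the earlier observation that a trip should not be assigned if the food is unavailable.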

Let’s quickly see the microservices & the workflow manager together in action:

Workflow manager & microservices together in action
  • A real life event, such as an order being placed or a loan application being submitted, triggers a workflow.
  • When triggered, the Workflow manager looks up the blueprint, creates an instance of it, determines which tasks have to be scheduled for the current active stage & schedules them.
  • The tasks perform their job & report back to the workflow manager whether they completed or failed.
  • When a task reports completion, the workflow manager again determines which tasks are now ready & schedules them. If there are no tasks pending for the current stage, the manager activates the next stage.
  • When a task reports failure, the workflow manager simply registers this result. If there are any tasks dependent on it, they have to wait until the failed task eventually completes. More on how to recover from these failures in the coming articles…
  • If there are no more stages to work on, the workflow is declared complete!
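The scheduling steps above can be sketched as a couple of functions. This is a minimal, in-memory illustration under assumed data shapes (a workflow dict with an `active_stage` index and a `statuses` map per task) — not FundsCorner's actual implementation, which would persist state durably and handle failures:

```python
# Hypothetical sketch of the workflow manager's scheduling logic.
# Function and field names are illustrative.

def schedulable_tasks(stage, statuses):
    """Tasks not yet scheduled whose parent tasks (if any) have all completed."""
    ready = []
    for task in stage["tasks"]:
        if statuses.get(task["name"]) is not None:
            continue  # already scheduled, completed, or failed
        parents = task.get("parent_tasks", [])
        if all(statuses.get(p) == "completed" for p in parents):
            ready.append(task["name"])
    return ready

def on_task_completed(workflow, statuses, task_name):
    """Handle a completion report; return the next tasks to schedule."""
    statuses[task_name] = "completed"
    stage = workflow["stages"][workflow["active_stage"]]
    next_tasks = schedulable_tasks(stage, statuses)
    stage_done = all(
        statuses.get(t["name"]) == "completed" for t in stage["tasks"]
    )
    if not next_tasks and stage_done:
        workflow["active_stage"] += 1  # stage finished: activate the next one
        if workflow["active_stage"] < len(workflow["stages"]):
            next_stage = workflow["stages"][workflow["active_stage"]]
            next_tasks = schedulable_tasks(next_stage, statuses)
    return next_tasks
```

The key property is that the manager only ever reacts to a single reported event, computes the next schedulable tasks, and exits — there is no long-running loop, which is what makes it a candidate for serverless deployment.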

What is interesting about the above approach?

The workflow manager is itself a microservice & event driven. It acts only when triggered; there is no need for it to run in an infinite loop, so you can deploy the workflow manager itself serverless.

To quickly summarize: I have described the microservices & the workflow manager that orchestrates these services based on a defined blueprint. Now, you must be wondering how the microservices and the workflow manager talk to each other.

Stay tuned to our next article in the series where I will introduce our hero, the pipe that binds together the workflow manager and the services!