From a big box to small connected blocks: our experience migrating to a microservices architecture

Santhosh Kumar
affinityanswers-tech
4 min read · Dec 23, 2022

At AffinityAnswers, our flagship product, Intersect, was a legacy application that is critical for the business. Over a period of several years, many features were added and some were deprecated according to business needs. Sometimes we, the engineering team, were not even confident answering questions in meetings with the Product team because of the application's mammoth architecture. Some of those questions and requirements are listed below.

FAR (Frequently Asked Requirements)

  • Can the API be scaled independently, with more resources added when a new customer signs up?
  • How confident are you about the SLA of the application?
  • Can multiple developers be assigned to introduce new features at a faster pace?
  • Why is the downtime so “long” when deploying simple bug fixes?
  • Can we make use of new technology for a small module of the application?
  • A few customers want to use only the API instead of the application. Can we support this by adding limited resources and scaling the API without disturbing how existing customers use the application?

Monolithic to Microservices Architecture

The existing architecture of the application was monolithic: all components (API, backend, frontend, and databases) were running on a single large server.

Monolithic Application on a single server

After many internal debates, we decided to migrate to microservices to reverse a massive technical debt that had created severe stability issues. Since the components were interconnected, a bug in one part of the system could bring down the whole system. The team began to break things into smaller services to better handle scalability and quality assurance.

“A monolithic architecture is a choice and a valid one at that. It may not be the right choice in all circumstances, any more than microservices are — but it’s a choice nonetheless.” — Sam Newman

Breaking the monolithic application into microservices

We broke the application into several interconnected services and pushed each service’s image to an Amazon Elastic Container Registry (Amazon ECR) repository.

Image credits: AWS documentation
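
As a rough sketch of that step (the region, repository name, and image tag below are placeholders, not our actual values), pushing one service’s image to ECR with boto3 and the Docker SDK for Python looks roughly like this:

```python
import base64
import boto3
import docker

# Hypothetical names for illustration; in practice each service had its own ECR repository.
region = "us-east-1"
repo_name = "intersect-api"

ecr = boto3.client("ecr", region_name=region)
repo_uri = ecr.describe_repositories(repositoryNames=[repo_name])["repositories"][0]["repositoryUri"]

# ECR issues a temporary token that the Docker client uses to log in.
auth = ecr.get_authorization_token()["authorizationData"][0]
username, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")

client = docker.from_env()
client.login(username=username, password=password, registry=auth["proxyEndpoint"])

# Tag the locally built image with the ECR repository URI and push it.
image = client.images.get(f"{repo_name}:latest")
image.tag(repo_uri, tag="latest")
client.images.push(repo_uri, tag="latest")
```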

What Is a Container?

Containers allow you to easily package an application’s code, configurations, and dependencies into easy-to-use building blocks that deliver environmental consistency, operational efficiency, developer productivity, and version control. Containers can help ensure that applications deploy quickly, reliably, and consistently regardless of the deployment environment.
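
As a tiny illustration, and assuming a hypothetical service image and port rather than our real setup, the Docker SDK for Python can build and run such a building block in a couple of calls:

```python
import docker

client = docker.from_env()

# Build an image from a Dockerfile in the current directory; the image bundles
# the service's code, configuration, and dependencies.
image, _ = client.images.build(path=".", tag="intersect-api:latest")

# Run the container; because the environment ships with the image, the behaviour
# is the same on a laptop and on an EC2 host.
container = client.containers.run(
    "intersect-api:latest",
    detach=True,
    ports={"8080/tcp": 8080},  # container port -> host port (assumed values)
)
print(container.status)
```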

Architecture Overview

The final application architecture uses Amazon Elastic Container Service (Amazon ECS) and an Application Load Balancer (ALB).

Image credits: AWS documentation

a. Client
The client sends requests to the application over port 80.

b. Load Balancer
The ALB routes external traffic to the correct service. It inspects the client request and uses its routing rules to direct the request to an instance and port of the target group matching the rule.
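
A minimal sketch of such a rule with boto3, using placeholder ARNs and an assumed /api/* path pattern rather than our production values:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Placeholders: the listener belongs to the ALB, and the target group is the
# one created for the API service (target groups are covered in the next section).
listener_arn = "arn:aws:elasticloadbalancing:...:listener/app/intersect-alb/..."
api_target_group_arn = "arn:aws:elasticloadbalancing:...:targetgroup/api-tg/..."

# Requests whose path matches /api/* are forwarded to the API service's target group.
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": api_target_group_arn}],
)
```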

c. Target Groups
Each service has a target group that keeps track of the instances and ports of each container running for that service.
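
For example, a target group for the API service could be created like this (the name, VPC ID, and health-check path are assumptions for illustration):

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# One target group per service; ECS registers the container instances and their
# dynamically assigned host ports into it as tasks start and stop.
response = elbv2.create_target_group(
    Name="api-tg",                  # hypothetical name
    Protocol="HTTP",
    Port=80,                        # default port; overridden per registered target
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
    TargetType="instance",
    HealthCheckPath="/health",      # assumed health-check endpoint
)
api_target_group_arn = response["TargetGroups"][0]["TargetGroupArn"]
```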

d. Microservices
Amazon ECS deploys each service as containers across an EC2 cluster. Each container handles only a single feature.
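
Wiring a service to its target group then comes down to creating the ECS service on the cluster; the cluster, service, and task-definition names below are illustrative, not our real ones:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# ARN produced in the target-group step above (placeholder here).
api_target_group_arn = "arn:aws:elasticloadbalancing:...:targetgroup/api-tg/..."

# The service keeps the desired number of API tasks running on the EC2 cluster
# and registers each task's container with the API target group.
ecs.create_service(
    cluster="intersect-cluster",       # hypothetical cluster name
    serviceName="api",
    taskDefinition="intersect-api:1",  # assumed task definition family:revision
    desiredCount=2,
    launchType="EC2",
    loadBalancers=[{
        "targetGroupArn": api_target_group_arn,
        "containerName": "api",
        "containerPort": 8080,
    }],
)
```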

Why Microservices?

  • Crashes are isolated: if one small piece of the service crashes, only that part goes down, while the rest of the application continues to work properly.
  • Security can be isolated per service. When microservice best practices are followed, an attacker who compromises one service only gains access to that service’s resources and cannot move laterally to other services without breaking into them as well.
  • Services can be scaled independently instead of provisioning one large server. For example, if we want to offer the API as a service to external customers, only the API service needs to be scaled horizontally (see the sketch after this list).
  • Development can be faster: multiple developers can be assigned to different modules, new joiners can learn one module at a time, and features can be added more quickly.
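
For example, scaling out only the API service is a one-line change to its desired task count (cluster and service names are hypothetical):

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Scale only the API service horizontally; the other services are untouched.
ecs.update_service(
    cluster="intersect-cluster",  # hypothetical cluster name
    service="api",
    desiredCount=4,               # e.g. double the running API tasks
)
```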

Hurdles along the way

Once you adopt the microservices paradigm, new kinds of issues arise, such as handling communication among microservices, addressing failures, and debugging problems across service boundaries.
We had to introduce new integration and end-to-end testing methods and foster a new internal deployment culture.

How microservices affected business

From an engineering perspective, the migration simplified the development process and improved quality. Deployment became easy using AWS-managed services and their best practices.

On the other hand, after the redesign we observed that a few of the initial requirements have not been used extensively, such as the need to use the API independently of the application. Still, the microservices architecture was built to sustain future business growth.

Because services are isolated by nature, new features can be added much faster. Depending on the feature’s complexity, implementation time is reduced to about 50% of what it used to be. We can develop, test, and deploy faster.

The migration to a microservices architecture took one month with three developers, including production deployment.

Finally, some advice for others

Microservices are not a silver bullet for every problem. The benefits of microservices must outweigh the time spent on the migration and on maintaining the new kinds of issues that come with it. For small-scale applications, staying with a monolith is much easier than operating a distributed system.
