Migrating from a Monolith to a Microservices Architecture

Lately, there has been a lot of buzz around the concept of microservices. While this architectural style provides some advantages, it can be daunting for engineering teams looking to move over from the traditional monolith. Over the past couple of months, we have been doing just that at Andela in an effort to scale our infrastructure. In this post, I will share how our team is going about it and hopefully help provide some insights for others looking to make the shift.

Identify the bounded contexts

Before attempting to migrate to a microservices architecture, we started by clearly defining the bounded contexts of our application. These business boundaries are what will drive the implementation of your microservices. Luckily, we found that it is easier to identify the business boundaries of an existing system than of a greenfield project. And if you have been maintaining a modular monolithic codebase, the process should be even easier.

Separate your Frontend from your Backend

This may seem like a trivial step, but I am sure there are still plenty of applications out there that couple their Frontend and Backend code together in the same codebase. This was the case for a lot of our applications until we eventually split up our frontends and backends a few months ago. At this stage, your frontend and backend live in separate codebases and communicate over an API.

Introduce an API Gateway

When you choose to build your application as a set of microservices, you need to decide how your client applications will interact with them. With a monolithic system, all endpoints are hosted by a single server. In a microservices architecture, however, each microservice exposes a set of what are typically fine-grained endpoints. Direct communication between your clients and a multitude of microservices is not ideal; one approach is to implement an API Gateway that acts as a single entry point for all clients into the system. As a next step, we introduced an API Gateway which, for now, simply proxies all client requests to our monolithic backend.

It is important to build the API Gateway using a language/platform that supports asynchronous, non-blocking I/O to ensure it can scale to handle high loads. Some options are Golang, JVM-based libraries, and Node.js. We decided to use Golang because it is highly concurrent and lightweight.

Split the monolith…Incrementally

At this point, we were ready to start separating our monolithic backend into a set of loosely coupled, independent services. We first spent some time refactoring the existing monolith to make it more modular, which made the rest of the process a lot less painful. Instead of embarking on a big-bang rewrite, we decided to chip away at the system. An incremental approach helped us learn about microservices as we went, and limited the impact of getting something wrong.

We started by identifying parts of our codebase that form bounded contexts (i.e. modules in the monolith). We also identified ways to split our database: we looked at which parts of the code read from and write to the database, inspected the database mapping code (e.g. the object-relational mappings), and inspected the database-level constraints (e.g. foreign key relationships). All of this helped us understand the coupling between tables that may span what will eventually become service boundaries.

Having identified the bounded contexts in our application code and how to split them out of the database, we could incrementally separate our monolithic backend.

Every time we split out a new service, we update the API Gateway to route client requests appropriately, forwarding them to the proper services.

Microservices at scale

When operating at a large scale, dealing with microservices gets a bit more complex. Here are a few things that are helping us scale our architecture:

  • Build for failure

What happens when we need to handle the failure of multiple separate services? What happens when a service is responding slowly or is unavailable? Consider setting request timeouts, returning partial results, returning cached results, or implementing a Circuit Breaker pattern in your API Gateway.

  • Monitor everything

Tools like Zipkin came in handy for distributed tracing of requests travelling through multiple services. It is also imperative to collect key metrics about the health of your services. We collect data using statsd and ship it to cloud monitoring tools. If you are hosting your services on Heroku, here is an open source ELK (Elasticsearch, Logstash and Kibana) log drain developed by Andela developers that you can use to centralize your logs. Oh, and SET UP ALERTS! The graphs are nice, but we need to know immediately when something goes wrong.

  • Automate everything

We leverage continuous integration, deployment and delivery as much as possible.

  • Test everything

Unit tests, integration tests, acceptance tests, load tests, etc.

The journey to a fully microservices-based architecture is a long and challenging one, and we are not quite there yet, but we continue to learn and improve our processes each day. We are still breaking up some services to be truly micro. Once we do, we might push the boundaries even further and investigate a completely serverless architecture.