A Strategy for Splitting a Monolith into Microservices

Aram Elchyan
Picsart Engineering
6 min read · Mar 4, 2021

Most of the time, when we develop new software, all we care about is building it as fast as possible. Why? Because it’s the only way startups can compete and survive in the fast-paced tech market. The downside is that this limits our software design choices, and we usually end up opting for single-tiered “Monolithic” applications.

While working with monoliths is simple when the development team and the application itself are small, it can become a real headache as you grow. You guessed it: we faced exactly this situation at PicsArt with our monolithic backend application.

We had a single-tiered application responsible for most of PicsArt’s backend logic. Logic intended for different business units and different domains all lived in one place. This resulted in a single point of deployment for different business units, which started to slow down the development process for all of us.

Finally, we concluded that we could no longer ignore this problem and had to make the transition to microservices.

So what was our goal?

At the end of this “transition”, we wanted to get a set of loosely coupled, domain-centric microservices with well-defined, acyclic connections between them.

What strategy did we choose to adopt?

We decided to split the process into steps.

Step 1

Determining domains, their connections and boundaries by taking a high-level look at the functionality contained in the application.

An important thing to understand here was that we didn’t need to dive deep into details; what we needed from this domain model was just enough information to make a reasonable decision about where to start the decomposition process.

Step 2

The next step was to use the created domain model to prioritize domains in terms of which one to convert into a microservice first. We based this prioritization on two main factors: first, the ease of conversion, and second, the benefit we would get from creating a microservice for the particular domain. For that, we had to consider things like:

  • The number of upstream and downstream dependencies of the domain (the ones with fewer dependencies may be easier to convert)
  • To what extent our logical domain model represented the actual underlying code structure (how easy would it be to decouple the logical domain from the rest of the code?)
  • The usefulness of conversion (if our goal is to improve “time to market,” we might not benefit much from creating microservices for domains that are less prone to change)

If we were to visualize the above, we would get something like the image below:

Benefit/Ease diagram

This visualization can give us a good understanding of which logical domains are best to start with.
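
To make the idea concrete, here is a minimal sketch of such a prioritization in Java (16+, for records). The domain names and the 1–10 ease/benefit scores are made up; the real inputs would come from the analysis above. Collapsing the two axes into one combined score is a simplification of the diagram:

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical domain candidates with made-up ease/benefit scores (1-10).
record DomainCandidate(String name, int ease, int benefit) {}

public class DecompositionPriority {
    public static void main(String[] args) {
        List<DomainCandidate> domains = List.of(
                new DomainCandidate("notifications", 8, 7),
                new DomainCandidate("user-profile", 4, 9),
                new DomainCandidate("billing", 2, 3));

        // Rank by a simple combined score: high ease + high benefit first.
        domains.stream()
                .sorted(Comparator.comparingInt(
                        (DomainCandidate d) -> d.ease() + d.benefit()).reversed())
                .forEach(d -> System.out.printf(
                        "%s (ease=%d, benefit=%d)%n", d.name(), d.ease(), d.benefit()));
    }
}
```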

Step 3

Once a decision about the domain was made, it was time to bring that logical notion of a domain down to the code level and organize it as a separate module, so that all inward connections would come in through a dedicated service layer.

This often also requires changes to the underlying database schema to separate domain-related data, so that it can be accessed and modified only by its own domain.
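
As a minimal sketch of what such a service layer could look like (all type names here are hypothetical): the rest of the monolith sees only the interface, while the implementation and the repository stay package-private inside the domain module.

```java
// Hypothetical "photos" domain module: outside code sees only PhotoService.
interface PhotoService {
    Photo getPhoto(long photoId);
    void updateCaption(long photoId, String caption);
}

record Photo(long id, String caption) {
    Photo withCaption(String newCaption) { return new Photo(id, newCaption); }
}

// Package-private: the domain's tables are reachable only through this domain.
interface PhotoRepository {
    Photo findById(long id);
    void save(Photo photo);
}

// Package-private implementation; inward connections all pass through here.
class PhotoServiceImpl implements PhotoService {
    private final PhotoRepository repository;

    PhotoServiceImpl(PhotoRepository repository) {
        this.repository = repository;
    }

    @Override
    public Photo getPhoto(long photoId) {
        return repository.findById(photoId);
    }

    @Override
    public void updateCaption(long photoId, String caption) {
        repository.save(repository.findById(photoId).withCaption(caption));
    }
}
```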

After completing step 3, you can end up with a situation where the domain has both upstream and downstream dependencies: some API calls may be directed at it, and it may also talk to the database.

Decoupling of domain from the rest of the system

Here, we needed to deal with two types of dependencies: upstream and downstream.

Step 4 (Dealing with upstream dependencies)

To deal with upstream dependencies, we introduced a new abstraction layer, which we called a “client.” It sat between the upstream dependencies and our domain and abstracted away direct calls to the domain.

Introducing client
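
A minimal sketch of this client layer, with hypothetical names and a hypothetical service URL: upstream code depends only on the PhotoClient interface, so swapping the in-process implementation for a remote one (or for something smarter, as in the next steps) requires no changes to callers.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// The abstraction upstream dependencies call instead of the domain itself.
interface PhotoClient {
    String getPhoto(long photoId);
}

// Delegates to the domain code still living inside the monolith.
class InProcessPhotoClient implements PhotoClient {
    private final PhotoService service; // the service layer from step 3

    InProcessPhotoClient(PhotoService service) { this.service = service; }

    @Override
    public String getPhoto(long photoId) { return service.getPhotoJson(photoId); }
}

// Calls the newly created microservice over HTTP instead.
class RemotePhotoClient implements PhotoClient {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl; // hypothetical, e.g. "http://photo-service.internal"

    RemotePhotoClient(String baseUrl) { this.baseUrl = baseUrl; }

    @Override
    public String getPhoto(long photoId) {
        try {
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create(baseUrl + "/photos/" + photoId)).GET().build();
            return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
        } catch (Exception e) {
            throw new RuntimeException("photo-service call failed", e);
        }
    }
}

// Minimal stand-in so this sketch compiles on its own.
interface PhotoService { String getPhotoJson(long photoId); }
```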

Using this client, we started building our microservice. We kept the old domain functionality inside the monolith and incrementally moved it to the newly created microservice, which ran in parallel with the rest of our system. Mostly this meant copying the code responsible for the functionality and making the appropriate changes.

Something to note at this point is that the requests coming into the newly created microservice were of two types:

  • Those that simply retrieve data
  • Those that also modify the underlying persistence state

For both request types, we could use canary rollouts or feature flags. They worked in the following way: in the first stage, when we had just moved a request’s functionality to the microservice, the client would redirect some of those requests to the microservice. In the case of a canary, it would be a certain percentage of the requests; in the case of feature flags, it would be based on certain conditions defined by us. If we faced bugs, we could always switch back to the original implementation until we fixed them. We repeated this process until we were sure that the microservice implementation worked.

Redirection mechanisms
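
Here is a minimal sketch of such a redirecting client, reusing the hypothetical PhotoClient interface from above. It implements the canary variant with a percentage knob plus a kill switch for instant rollback; a feature-flag variant would replace the random check with whatever condition is configured:

```java
import java.util.concurrent.ThreadLocalRandom;

// Routes a configurable share of traffic to the microservice; the rest stays
// on the monolith path. Flipping the kill switch rolls everything back.
class RoutingPhotoClient implements PhotoClient {
    private final PhotoClient monolith;
    private final PhotoClient microservice;
    private volatile int canaryPercent;          // 0..100, raised gradually
    private volatile boolean killSwitch = false; // instant rollback on bugs

    RoutingPhotoClient(PhotoClient monolith, PhotoClient microservice, int canaryPercent) {
        this.monolith = monolith;
        this.microservice = microservice;
        this.canaryPercent = canaryPercent;
    }

    @Override
    public String getPhoto(long photoId) {
        if (!killSwitch && ThreadLocalRandom.current().nextInt(100) < canaryPercent) {
            return microservice.getPhoto(photoId);
        }
        return monolith.getPhoto(photoId);
    }

    void setCanaryPercent(int percent) { this.canaryPercent = percent; }
    void rollback() { this.killSwitch = true; }
}

// Redeclared so this sketch compiles on its own.
interface PhotoClient { String getPhoto(long photoId); }
```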

For some “resource-cheap” data retrieval requests, we could also use a verification mechanism. In this case, the client would redirect the calls to both the microservice and the old domain code inside the monolith. It would answer with the “original” result yielded by the monolith, and it would also verify whether the results of the call were the same for both the monolith and the microservice. If the results differed, it would log the difference with appropriate error messages (here we could also monitor the difference in execution time between the two). By looking at the logs and making fixes in the microservice code, we would eventually get a working microservice.

Verifying results for data retrieval functionality inside client
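
A sketch of that verification inside the client, again using the hypothetical PhotoClient interface: it always answers with the monolith’s result and only logs mismatches and timings for offline analysis, which makes it suitable only for side-effect-free reads.

```java
import java.util.Objects;
import java.util.logging.Logger;

// Calls both implementations, answers with the monolith's "original" result,
// and logs any mismatch (and the timing difference) for later fixing.
class VerifyingPhotoClient implements PhotoClient {
    private static final Logger log = Logger.getLogger("verification");
    private final PhotoClient monolith;
    private final PhotoClient microservice;

    VerifyingPhotoClient(PhotoClient monolith, PhotoClient microservice) {
        this.monolith = monolith;
        this.microservice = microservice;
    }

    @Override
    public String getPhoto(long photoId) {
        long t0 = System.nanoTime();
        String expected = monolith.getPhoto(photoId);
        long t1 = System.nanoTime();
        String actual = microservice.getPhoto(photoId);
        long t2 = System.nanoTime();

        if (!Objects.equals(expected, actual)) {
            log.severe(() -> String.format("mismatch for photo %d: monolith=%s, microservice=%s",
                    photoId, expected, actual));
        }
        log.fine(() -> String.format("timings: monolith=%d us, microservice=%d us",
                (t1 - t0) / 1_000, (t2 - t1) / 1_000));

        return expected; // always answer with the monolith's result
    }
}

// Redeclared so this sketch compiles on its own.
interface PhotoClient { String getPhoto(long photoId); }
```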

And lastly, for more crucial but “resource-expensive” data retrieval requests, we could use a hybrid, where a canary or feature flag would be combined with the verification.

Hybrid verification mechanism
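
Composing the two previous sketches gives the hybrid: only a small slice of traffic pays the double-execution cost of verification, while everything else takes the cheap monolith path (names hypothetical, as before):

```java
import java.util.concurrent.ThreadLocalRandom;

// Verifies only a small percentage of requests to bound the extra load.
class HybridPhotoClient implements PhotoClient {
    private final PhotoClient monolith;
    private final PhotoClient verifying; // the dual-call client from above
    private final int verifyPercent;     // kept small for expensive requests

    HybridPhotoClient(PhotoClient monolith, PhotoClient verifying, int verifyPercent) {
        this.monolith = monolith;
        this.verifying = verifying;
        this.verifyPercent = verifyPercent;
    }

    @Override
    public String getPhoto(long photoId) {
        if (ThreadLocalRandom.current().nextInt(100) < verifyPercent) {
            return verifying.getPhoto(photoId); // calls both and compares
        }
        return monolith.getPhoto(photoId);
    }
}

// Redeclared so this sketch compiles on its own.
interface PhotoClient { String getPhoto(long photoId); }
```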

After making sure that the microservice implementation worked, we would be able to make a complete switch to it.

Step 5 (Dealing with downstream dependencies)

Now that we’ve covered upstream dependencies, we can return to downstream ones.

In the case of databases, we could choose to stick with the old one until all the functionality had been moved to the microservice, and only after that migrate the data.

In the case of downstream dependencies on other domains, we needed to replace direct calls with calls to the monolith’s API, so in some cases we had to expose functionality through that API which was previously inaccessible from outside the monolith.
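
As a sketch of that second case: a lookup that used to be a direct in-process call from the extracted domain to another domain now goes over HTTP to an endpoint the monolith newly exposes (the endpoint path and hostname are made up):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Replaces what used to be a direct call such as userDomain.getUser(userId)
// inside the monolith with a call to the monolith's newly exposed API.
class MonolithUserApi {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String monolithBaseUrl; // hypothetical, e.g. "http://monolith.internal"

    MonolithUserApi(String monolithBaseUrl) { this.monolithBaseUrl = monolithBaseUrl; }

    String getUser(long userId) {
        try {
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create(monolithBaseUrl + "/internal/users/" + userId)).GET().build();
            return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
        } catch (Exception e) {
            throw new RuntimeException("monolith user API call failed", e);
        }
    }
}
```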

Final Step

After we’d built the separate service, we could completely wipe out the original functionality, limit the client to the requests made by the monolith itself, expose the API calls through an API gateway of our choice, and finish the construction of the microservice.

Final result
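
For illustration, here is what that last routing step could look like if the “API gateway of choice” happened to be Spring Cloud Gateway (the route names and hostnames are made up, and the article doesn’t prescribe a particular gateway):

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class GatewayRoutes {
    @Bean
    RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                // External photo traffic now lands on the microservice...
                .route("photo-service", r -> r.path("/photos/**")
                        .uri("http://photo-service.internal"))
                // ...while everything else still goes to the monolith.
                .route("monolith", r -> r.path("/**")
                        .uri("http://monolith.internal"))
                .build();
    }
}
```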

Conclusion

Moving from a monolithic application design to microservices isn’t an easy process. It takes a lot of planning, time, resources and, of course, a great team of enthusiastic professionals such as the one we’ve got at PicsArt. If you have all of that, then go ahead with the process, and be sure that the long-term benefit you’ll get from this transition will be worth the effort.

Reference

This article is highly influenced by the great book “Monolith to Microservices” by Sam Newman, which I strongly recommend reading.
