Unleash the Microservices

Igor Rusinov
TourRadar
Jun 7, 2019

Everyone talks about microservices these days, a trend that has been going on for a while. Most of what you read online, however, can easily make you think microservices are a silver bullet: easy, painless, nothing can go wrong, the solution to every complex problem.

That is, of course, not true. When you start with a monolith, which tends to grow over time, and then decide it is time to move to a microservice architecture, it’s hard to know where to start, and which steps to take.

In this article, we highlight why this transition made sense for us at TourRadar, and describe a couple of situations that were part of migrating a significant chunk of a big monolithic application into more than 20 microservices.

Why a Microservice Architecture?

In one word: agility. We are big believers in working in small batches, iterating fast on our features and products, and learning as quickly as possible about what works and what doesn’t. As our Engineering, Product and Design teams grow, we must “divide and conquer” and look to reduce dependencies across the different squads as much as possible. The fewer the dependencies, the faster each team can move forward and learn.

This is as much an organizational structure problem as it is a software and systems architecture problem. Being mindful of Conway’s Law, we restructured our growing number of teams to become fully cross-functional, while at the same time investing in the orchestration necessary for a real microservice architecture.

While a monolith in itself is not necessarily a bad thing, the reality is that fast-growing startups are rarely in a position to grow it with a solid technical vision in place, as they’re busy figuring out whether there’s a business for their product. As such, quality suffers and it’s easy, over time, to end up with a Frankenstein of sorts: code that is not modular enough, not well tested, and certainly not clean. Unfortunately, refactoring large untested codebases is far from a trivial problem, and it is usually more straightforward to think about how to split the damn thing into nice, tiny services.

New Features: Inside or Outside the Monolith?

It always seems easier to add new functionality to a legacy application, as you have easier access to all the existing code and data. But the more you add, the harder it tends to become. In order to make a small change, you have to build and deploy the entire thing, which can become slow and break in unpredictable ways. As we know, legacy codebases tend to not be very well tested, to put it mildly. The result is teams that become slower and slower, less resilient applications, and unhappy developers.

In order to start splitting your monolith into microservices, you need to change the default way you think about developing new functionality. Once you start planning the implementation of a new feature, the question becomes: does it belong inside an existing service or domain? Or is it big and independent enough that it could become a separate microservice along with its own data store? Either way, adding to the existing monolith should be a last resort.

A Staged Approach

When you have a monolith that has been developed on for years, most of the code is usually tightly coupled, and it gets tricky to start splitting off functionality from it into separate services. When you also consider migrating part of the database into an independent data store for that functionality, creating such a microservice can easily become one of those “born to die” projects.

With this in mind, it makes sense to take a gradual approach. As an example, at TourRadar we faced the problem of scaling up our email-sending functionality to millions of emails per hour. Rather than trying to change this in the legacy application, we decided it would be a good time to rebuild it separately.

As a first step, we created a simple service that receives an email address along with some prepared data to be used in the email, and sends it via a mail server. With that microservice in place, we started to slowly migrate each email type, one by one, to be sent “the new way”. As some types required custom logic (e.g. A/B testing), we implemented this in the new email service.
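
To make this concrete, a minimal version of such a service could look something like the sketch below. This is an illustration rather than our actual implementation: the endpoint path, the payload shape and the SMTP settings are all assumptions.

```typescript
import express, { Request, Response } from "express";
import nodemailer from "nodemailer";

// Hypothetical SMTP relay; in practice this would be your real mail server.
const transporter = nodemailer.createTransport({ host: "smtp.example.com", port: 587 });

const app = express();
app.use(express.json());

// Single responsibility: accept a recipient plus already-prepared content and send it.
app.post("/emails", async (req: Request, res: Response) => {
  const { to, subject, html } = req.body; // payload shape is an assumption
  try {
    await transporter.sendMail({ from: "noreply@example.com", to, subject, html });
    res.status(202).json({ status: "accepted" });
  } catch (err) {
    console.error(err);
    res.status(502).json({ error: "mail server unavailable" });
  }
});

app.listen(3000);
```

Keeping the service this narrow is what allows the migration to happen one email type at a time: the monolith only needs to learn how to call one endpoint for each type it hands over.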

Once that was done, we focused on extracting the relevant part of the legacy database as well. As a result of this staged approach, a big chunk of legacy code that we initially feared was too complex to handle was now outside the monolith. However, we still found that some logic had to remain shared between the two codebases. This shared logic is a good candidate to be abstracted away into new services in the future, to be used by both applications.

Frontend Microservices

What we described above is not, however, exclusive to the backend. Improving and refactoring complex, front-end related logic is also something we have been tackling.

One of our most important pages is what we call the Booking Conversation Page (BCP). This is where our customers track the status of their bookings, request additional services and in general communicate with us and tour operators. This page was also made up of a lot of legacy code that was hard to manage, so when we decided to redesign it with customer experience in mind, rebuilding the whole page at once wasn’t an option. Again, we decided to tackle it in small iterations, block by block.

As a first step, we relaunched one of those blocks: the Price Calculator. This contains some information about the tour and operator, along with details like price, discounts and any additional services requested.

To begin with, we needed to retrieve the booking data for this price calculator block. Using the strategy described in the previous section, we created a small API that returns the required information. The front-end component was implemented in React as a separate microservice, using this new API to fetch and display the data.
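
As a rough sketch of what such a component might look like (the endpoint path and the response shape below are hypothetical, not our actual API):

```tsx
import React, { useEffect, useState } from "react";

// Hypothetical shape of the data returned by the new booking API.
interface PriceInfo {
  tourName: string;
  basePrice: number;
  discount: number;
  currency: string;
}

export function PriceCalculator({ bookingId }: { bookingId: string }) {
  const [price, setPrice] = useState<PriceInfo | null>(null);

  useEffect(() => {
    // The endpoint path is an assumption for illustration purposes.
    fetch(`/api/bookings/${bookingId}/price`)
      .then((res) => res.json())
      .then(setPrice)
      .catch(() => setPrice(null));
  }, [bookingId]);

  if (!price) return <p>Loading…</p>;

  return (
    <div>
      <h3>{price.tourName}</h3>
      <p>
        {price.currency} {(price.basePrice - price.discount).toFixed(2)}
      </p>
    </div>
  );
}
```

Because the component owns its own data fetching, it can be developed, deployed and embedded on the page independently of the rest of the BCP.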

That said, the BCP combines a lot of data from different sources. With that in mind, we also felt GraphQL applied well to this type of use case and so implemented a GraphQL API Gateway, allowing the price calculator client to request data from multiple APIs easily. In the end, we replaced the old block on the page (and legacy code behind it) with this new React microservice.
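
A minimal sketch of such a gateway, assuming Apollo Server and a global fetch (Node 18+); the schema fields and downstream service URLs are made up for illustration:

```typescript
import { ApolloServer, gql } from "apollo-server";

// Hypothetical schema: the gateway stitches booking and pricing data together
// so the client can issue a single query instead of calling several APIs.
const typeDefs = gql`
  type Booking {
    id: ID!
    tourName: String
    price: Float
    discount: Float
  }
  type Query {
    booking(id: ID!): Booking
  }
`;

const resolvers = {
  Query: {
    booking: async (_: unknown, { id }: { id: string }) => {
      // Each part of the result can come from a different downstream service.
      const [booking, pricing] = await Promise.all([
        fetch(`http://booking-service/bookings/${id}`).then((r) => r.json()),
        fetch(`http://pricing-service/bookings/${id}/price`).then((r) => r.json()),
      ]);
      return { ...booking, ...pricing };
    },
  },
};

new ApolloServer({ typeDefs, resolvers })
  .listen(4000)
  .then(({ url }) => console.log(`Gateway ready at ${url}`));
```

With a gateway like this in place, the price calculator client can replace several REST calls with a single GraphQL query.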

Using this pattern, we were able to refactor and relaunch the different BCP components one by one. Once again, the strategy of tackling the legacy in small iterations and delivering value often was helpful for us.

Conclusion

Every architecture pattern comes with its drawbacks and benefits and microservices are no exception. At the end of the day, it comes down to making the decisions that you feel will most help your team and your company.

To be clear, we did face a number of challenges during this transition:

  • Increased complexity of infrastructure. You need to set up CI/CD for every microservice separately, instead of configuring it once for the monolith. Also, creating and supporting infrastructure for every new service can get tricky if you don’t have proper automation in place. We have solved this, for the most part, through a standardized CI/CD setup, as well as by describing all our infrastructure as code.
  • Performance issues. Some scenarios that required only one backend call to the monolith translate into complex chains of service calls in a microservice architecture. This is something you should consider when architecting your system. Some functionality cannot be executed on the fly across microservices, so you also need to think about caching strategies, or implement some logic as asynchronous processes (see the sketch after this list).
  • Increased development complexity. Microservices make it harder to have all services running locally, leading to more complex end-to-end testing. Also, it’s more difficult to debug errors across multiple services, so you should really focus on the quality of monitoring and logging for your services.
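
As a simple illustration of the caching point above (a sketch under assumed names, not our production code), even a small TTL cache in front of a downstream call can take a chatty call chain off the hot path:

```typescript
// Minimal in-memory TTL cache in front of a downstream service call.
const cache = new Map<string, { value: unknown; expiresAt: number }>();
const TTL_MS = 60_000; // one minute; tune per use case

async function getBookingCached(id: string): Promise<unknown> {
  const hit = cache.get(id);
  if (hit && hit.expiresAt > Date.now()) return hit.value;

  // The URL is hypothetical; a production version would also need stampede protection.
  const value = await fetch(`http://booking-service/bookings/${id}`).then((r) => r.json());
  cache.set(id, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}
```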

Nevertheless, introducing microservices at TourRadar gave us the ability to work more effectively, as each team can focus on a different domain and release independently. This allowed us to scale our tech team from 17 to 53 people in 2018 while releasing new functionality every day.

Creating small microservices also allows us to make use of modern technologies best suited to each use case. With the monolith, we were coupled to legacy technologies and frameworks, as it was prohibitively expensive to simply migrate the whole application to something new. Importantly, in order to simplify management and add predictability, pretty much all our infrastructure is described using Terraform. And needless to say, we keep improving our logging and monitoring, standardizing as much as possible across all services.

Going forward, we’re also exploring more and more serverless patterns using AWS Lambda, as it removes most of the infrastructure management overhead and allows us to build and deliver new services faster, focusing even more on the business logic of our applications. This is a meaty topic and something for a future blog post.

Interested in joining TourRadar and helping provide life-enriching travel experiences? We’re always looking for talented and passionate individuals to join our engineering team!
