From Monolith to Microservices — Part 1: Reasoning behind the switch

Karol Słuszniak
Published in Fresha Engineering
5 min read · Nov 27, 2017

Migrating a web application away from a monolithic architecture (in Shedul's case, Ruby on Rails) is one of the most serious decisions one can take in a monolithic project's lifetime. It's a call to action that involves the whole company, and it's a paradigm shift that takes months to pull off. But it's also a basis for the success of big players like Netflix or Amazon. This is the first entry in a series about our switch to microservices at Shedul.

It started with a vision of the benefits awaiting at the end of a painful road, benefits so big and important for our business that they justified all the hardships on the dark side of the coin. You can find tons of articles listing the pros and cons of microservices in general, but in this entry I'll tell you about the top three reasons that made us make the actual call.

Reason 1: Features

If you're stuck with a monolith, you're stuck with all of its limitations. That's all good as long as those limitations stay off your business agenda anyway. But sooner or later you'll hit roadblocks with your monolithic technology of choice: missing parts required for making that next big feature happen. I believe this to be the case regardless of the technology, simply because there's no single technology or framework that's good at everything.

In our case, the Ruby on Rails case, one of the main limitations was WebSockets support. Since it's hard to consider Rails 5's ActionCable a viable option for the scale that our project aims to hit in the upcoming months, and we have a lot of real-time features on our agenda, we simply had to add another building block to our stack. And guess what: even if only a single building block is missing and it requires just a tiny extra service written in another language, it's still a switch to microservices, with all its glory and pain.

In Shedul's case, the future-proof language and framework answer to our real-time needs was Elixir and Phoenix, a choice I've described in detail in the Choosing Elixir for Shedul's tomorrow article.
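To give a feel for what Phoenix brings to the table on the real-time front, here's a minimal channel sketch; the module, topic and event names are hypothetical and not taken from our actual codebase. Each connected browser joins a topic over a WebSocket and receives pushes without any polling.

    # Hypothetical module, topic and event names, for illustration only.
    defmodule ShedulWeb.CalendarChannel do
      use Phoenix.Channel

      # Clients join a per-business topic, e.g. "calendar:42".
      def join("calendar:" <> _business_id, _params, socket) do
        {:ok, socket}
      end

      # A client pushes a booking update; we broadcast it to every
      # other client subscribed to the same topic.
      def handle_in("booking_updated", payload, socket) do
        broadcast!(socket, "booking_updated", payload)
        {:noreply, socket}
      end
    end

Each such channel runs in its own lightweight BEAM process, which is the property that makes this approach a much better fit for large numbers of concurrent connections than ActionCable.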

A few months later, I can already see that the Elixir part is just one of the puzzle pieces that we can now freely pick according to our needs and plug into our architecture. Another notable addition is Node for server-side rendering of React code. Good luck teaching Ruby (or Elixir, for that matter) to do that. There may be fancy gems for it, but in the long run an attempt to glue something into an unfitting stack will only bring more chaos and tech debt to the project.

Reason 2: Data

One of the biggest technical debts left by the early-stage, rapid development of a system with quickly growing complexity, particularly one written with Ruby on Rails, is the database. The framework itself, along with its model layer based on ActiveRecord, teaches "the Rails way" of treating the database as a singleton storage for all business contexts and development needs. The problems span not just the database schema, but also the model code, which becomes buggy and hard to maintain thanks to the global model validations and callbacks that Rails encourages (read more on that here).

We've decided to bring that to an end at the most natural moment possible: when starting the development of Fresha, our first separate product. I can only imagine the maintenance, deployment, codebase and database hell we'd be in now if we had attempted to glue Fresha into Shedul's monolith. Now Fresha is a separate system with a detached data model for fast searches and dedicated database servers that make it (mostly) independent from Shedul.

Our growing number of microservices follows the same pattern to achieve the best scalability. The process of reworking the existing monolithic model and making the data storage completely microservice-like requires proper long-term planning and thoughtful execution, because the basic goal is to avoid any downtime given the global nature of our system. It's certainly one of the most interesting tasks to take on as a senior developer.

Reason 3: Operations

Switching to microservices opens a new world of possibilities when it comes to managing and scaling production. Here’s a quick sample:

  • A Ruby server may (and does) require more memory than other techs
  • An Elixir server may (and does) benefit from multiple cores more than others
  • A specific service may require special traffic rate limiting
  • One app may require rolling deploys while another should not

All of these needs can be satisfied with the microservices approach, while with a monolith they're either impossible or require ugly hacks.

One of the biggest savings doesn't actually come from the flexibility of assigning resources, but simply from the ability to pick the best technology, framework, storage or messaging system for the job at hand. This of course requires wider know-how and more effort to execute, which is why it's only a good choice for mature businesses that know exactly where they're going and how much they can afford to pay for it.

But there's another important side to the operations part of the story: managing the actual codebase. Even if you're fine with your monolith in general, at some point of project growth the amount of code will require smart structuring in order for development, code reviews, CI setups and production management to go smoothly.

One of the problems with a Rails monolith is that it encourages a project structure in the app directory that's based on class function (controllers, models etc.) rather than on business context (which can only be nested within the functional directories). This creates a ton of problems with developing distinct features and products within a single app (by the way, a problem solved in Phoenix 1.3 with contexts and in Ecto 2.0 with schemas).
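For contrast, here's a rough sketch of what that context-first structure looks like with Phoenix 1.3 contexts and Ecto schemas; all module, table and field names below are invented for illustration. The schema and its validations stay private to one business context, and the rest of the app talks only to that context's public functions.

    # All names below are invented for illustration; this is not our actual code.
    defmodule Shedul.Bookings.Appointment do
      use Ecto.Schema
      import Ecto.Changeset

      schema "appointments" do
        field :starts_at, :utc_datetime
        field :client_name, :string
        timestamps()
      end

      # Validations live next to the schema, scoped to the Bookings context,
      # instead of being global model callbacks shared by every feature.
      def changeset(appointment, attrs) do
        appointment
        |> cast(attrs, [:starts_at, :client_name])
        |> validate_required([:starts_at, :client_name])
      end
    end

    defmodule Shedul.Bookings do
      alias Shedul.Bookings.Appointment
      alias Shedul.Repo

      # The public API of the business context; controllers and other
      # contexts call this instead of reaching for the schema or Repo directly.
      def create_appointment(attrs) do
        %Appointment{}
        |> Appointment.changeset(attrs)
        |> Repo.insert()
      end
    end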

Going for microservices has allowed us to make it clear and distinct which part of the product we're changing when we touch a specific part of the codebase, and to get a much more useful feedback loop from the CI. To leverage those advantages, we've had to make some smart DevOps choices and put in some effort, such as setting up Docker and CI with proper build caching. I'll cover these in detail in upcoming articles.

What’s next

Now that we've covered "the why part" of the microservices story, there's much more to come. Next, I'd like to describe the biggest challenges we had to face when actually making the switch away from the monolithic approach. Stay tuned!

Karol Słuszniak
Fresha Engineering

Software engineer. Ruby & Elixir developer with a history. Husband and father. Enthusiast of game and 3D dev.