Breaking free from the Monolith

And how we evolved with Microservices

Vincenzo Di Nicola
Conio Engineering
Jul 19, 2018


A Monolith ruling code monkeys

At Conio it all started with what’s commonly called a “Monolith”. That is, a single codebase containing the full application: its frontend components, backend components, API services, background tasks; hell, even devops scripts. And it worked very well at the beginning. With only a couple of software engineers working on separate areas of the codebase (so very little chance of code change conflicts), it was easy to deploy, and we could focus fully on writing application functionality without worrying about much else. How did we approach deployment? With only a few beta customers aware of the ongoing progress, it was not a real issue to shut down services for a while, roll out the full codebase (no matter how small or big the overall changes were, and whether they included database migrations), and then bring services up again.

It was definitely satisfying to see a product take shape from the ground up and receive appreciation from end customers. However, we knew very well that this approach is not fit for a modern Fintech company.

What then?

With most software applications, customers are quite tolerant. Yes, WhatsApp might stop working and suffer an outage lasting a few hours: definitely a nuisance, but not perceived as a serious problem. The same goes for Pokemon Go or your favorite game app. However, that’s not the case when money is involved: the mood changes if you cannot log in to your bank account or are unable to execute trades. This is even worse for cryptocurrency applications: most people remember the infamous blunders of the past, and whenever they cannot access their cryptocurrency funds, even for a short amount of time, speculation arises. That’s fair. It’s your money, and you should have little or no trouble when you want to use it.

The Monolith above is not fit for such a scenario: any change to the codebase in production would require a full deployment, with the associated downtime. Every day we work to improve our services: fixing bugs, making our interface even friendlier, removing old functionality and adding new features that serve customers better. We often release these updates on a daily basis so that our customers benefit immediately, and we strive to have no impact on the customer’s experience. That is, whatever formula we concoct behind the scenes must be invisible to the outside world (at least, most of the time). So we moved away from the Monolith and chose what’s commonly called a “Microservices architecture”.

Evolution through Microservices

The massive, tightly coupled single codebase is now decomposed into smaller parts, each of which represents a particular service. At run time, services communicate with each other synchronously via standard HTTP and asynchronously via queues (handled, for example, by RabbitMQ or Apache Kafka); a small sketch of both interaction styles follows the diagram below.

Interactions in a Microservices architecture
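As a minimal sketch of the two interaction styles (assuming Python with the requests and pika libraries; the service, queue and endpoint names here are hypothetical), a service might query another one synchronously over HTTP and hand off work asynchronously via RabbitMQ:

```python
import json

import pika      # RabbitMQ client
import requests  # synchronous HTTP calls

# Synchronous interaction: call another microservice over HTTP.
# "pricing-service" and its endpoint are hypothetical names.
response = requests.get(
    "http://pricing-service/api/v1/quote",
    params={"pair": "BTC-EUR"},
    timeout=2,  # fail fast instead of blocking the caller
)
quote = response.json()

# Asynchronous interaction: publish an event to a RabbitMQ queue.
# The consuming service picks it up whenever it is ready.
connection = pika.BlockingConnection(pika.ConnectionParameters("rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue="order-events", durable=True)
channel.basic_publish(
    exchange="",
    routing_key="order-events",
    body=json.dumps({"event": "order_created", "quote": quote}),
)
connection.close()
```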

It’s quite challenging to start breaking the monolith into smaller components, but it’s very well worth the effort. In military terms, it’s very similar to what Julius Caesar did to steadily rule the large territory of Gallia: “divide and conquer”.

1) The product can be continuously deployed. A code update now applies only to a single microservice: in most cases it can be immediately deployed to production and released with no impact on the customer.

2) Code is easier to manage. From a company organization perspective, things change when a team of 2 software engineers becomes a team of 10. Work is more effective, and code conflicts are rarer, when each team member is responsible for his/her own microservice.

3) Code is easier to maintain. A Microservices architecture requires, by nature, the definition of an interface to communicate with the external world (be it the frontend app or another backend service), and each service is completely isolated from any other point of view. This makes it possible to review, re-design or even completely rewrite from scratch (even in different languages, if convenient) single components of the application without impacting the rest.
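For instance, here is a minimal sketch of such an interface (assuming Python and Flask; the endpoint and service are hypothetical). As long as this HTTP contract stays stable, everything behind it can be rewritten freely:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# The HTTP interface is the only contract this service exposes.
# Everything behind it (storage, framework, even the language)
# can change without impacting callers, as long as the contract holds.
@app.route("/api/v1/balances/<user_id>", methods=["GET"])
def get_balance(user_id):
    # Hypothetical lookup; a real service would query its own datastore.
    balance = {"user_id": user_id, "btc": "0.042"}
    return jsonify(balance)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```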

4) Performance can be enhanced. Each microservice may now use its most appropriate language: heavy cryptographic computation components may, for example, be optimized in C, while API services are written in Python and long-running tasks in Go.

5) Improved code isolation and security. Each microservice can be run in its own Docker container, thus providing privilege isolation, network and data segregation and, of paramount importance for a growth phase, enormous scalability potential.
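As an illustration (a docker-compose sketch with hypothetical service, image and network names), each service gets its own container, network segment and data volume:

```yaml
# Hypothetical docker-compose sketch: each microservice is isolated
# in its own container, with segregated networks and data volumes.
version: "3"
services:
  api-service:
    image: conio/api-service:latest   # hypothetical image name
    networks: [frontend, backend]
  wallet-service:
    image: conio/wallet-service:latest
    networks: [backend]               # unreachable from the frontend network
    volumes:
      - wallet-data:/var/lib/wallet   # data segregated per service
networks:
  frontend:
  backend:
volumes:
  wallet-data:
```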

Are Microservices the answer then?

Of course, there is no such thing as a free lunch. A Microservices architecture also comes with its own set of tough challenges:

1) Operational complexity. DevOps engineers are definitely needed to smooth out the intricacies of the new deployment process.

2) Hardware bloat. Microservices are often run in Docker containers; as the number of microservices grows, it becomes more and more challenging to run the full application on the same hardware as before.

3) Intercommunication overhead. Each request might need to interact with one or more other microservices over the network. This may increase latency and is subject to temporary failures. In order to implement resilient services and improve the scalability of the whole system, interactions should be moved to asynchronous messaging (e.g. using Apache Kafka and/or RabbitMQ), as sketched below.
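A small sketch of that hand-off (assuming the kafka-python client; the topic name and payload are hypothetical): instead of a blocking HTTP call that fails when the downstream service is briefly down, the event is published and processed once the consumer is available:

```python
import json

from kafka import KafkaProducer  # kafka-python library

# Publish an event rather than calling the downstream service
# synchronously; the consumer processes it when it is available.
producer = KafkaProducer(
    bootstrap_servers="kafka:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# "withdrawal-requests" is a hypothetical topic name.
producer.send("withdrawal-requests", {"user_id": "42", "amount_btc": "0.01"})
producer.flush()  # block until the event is handed to the broker
```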

4) Eventual consistency. This is probably the hardest challenge of a Microservices architecture. Within a single microservice, it is possible to rely on RDBMS transactions inside its boundaries. Unfortunately, a common issue in distributed architectures is dealing with multiple transactions that do not fall within the same boundaries. As a result, the system may end up in an illegal and unrecoverable state. To mitigate such issues, Conio adopts several strategies:

  1. Following Domain Driven Design practices, decompose the higher-level domains into subdomains and confine them to individual bounded contexts; each bounded context is implemented as a microservice, where transaction boundaries are applied. This rules out inconsistencies within specific subdomains.
  2. Implement idempotent asynchronous interactions, which sooner or later resolve inconsistencies (see the sketch after this list).
  3. Whenever possible, avoid any action that might involve multiple subdomains.
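A minimal sketch of point 2 (assuming Python with a SQLite store; the tables and event fields are hypothetical): because each event id can be recorded only once, redelivering the same message any number of times leaves the system in the same state:

```python
import json
import sqlite3

# Hypothetical local store with a table of already-processed event ids.
db = sqlite3.connect("service.db")
db.execute("CREATE TABLE IF NOT EXISTS processed_events (event_id TEXT PRIMARY KEY)")
db.execute("CREATE TABLE IF NOT EXISTS balances (user_id TEXT PRIMARY KEY, satoshi INTEGER)")

def handle_event(raw_message: str) -> None:
    """Apply an event exactly once, no matter how often it is redelivered."""
    event = json.loads(raw_message)
    try:
        # The PRIMARY KEY constraint rejects events already seen,
        # so redeliveries become harmless no-ops.
        db.execute("INSERT INTO processed_events VALUES (?)", (event["event_id"],))
    except sqlite3.IntegrityError:
        return  # duplicate delivery: already processed, skip
    db.execute(
        "UPDATE balances SET satoshi = satoshi + ? WHERE user_id = ?",
        (event["amount"], event["user_id"]),
    )
    db.commit()  # recording the event id and applying it commit together
```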

5) Complex reporting. Since each subdomain lives within a specific bounded context, complex reports that involve multiple subdomains might require querying data from multiple data sources: this can have a negative impact both on the expressiveness of the domains and on the scalability of the system. Here at Conio we have adopted a CQRS architecture to support backoffice activity and business analysis reports.
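A minimal sketch of the query side of such an architecture (assuming Python and SQLite; the event type and reporting table are hypothetical): a projector folds domain events coming from several bounded contexts into a denormalized read model that reports can query directly:

```python
import sqlite3

# Hypothetical denormalized read model, rebuilt from domain events.
read_db = sqlite3.connect("reports.db")
read_db.execute(
    "CREATE TABLE IF NOT EXISTS daily_volume (day TEXT PRIMARY KEY, satoshi INTEGER)"
)

def project(event: dict) -> None:
    """Fold a domain event into the reporting table (the CQRS query side)."""
    if event["type"] == "trade_executed":
        row = read_db.execute(
            "SELECT satoshi FROM daily_volume WHERE day = ?", (event["day"],)
        ).fetchone()
        if row is None:
            read_db.execute(
                "INSERT INTO daily_volume VALUES (?, ?)",
                (event["day"], event["amount"]),
            )
        else:
            read_db.execute(
                "UPDATE daily_volume SET satoshi = satoshi + ? WHERE day = ?",
                (event["amount"], event["day"]),
            )
        read_db.commit()
```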

6) Logging system. Each element in a distributed system contributes its own piece of the log of the whole system. However, tools are needed to connect all these separate logs in order to reconstruct a unified log for each interaction. Here at Conio we use the ELK (Elasticsearch, Logstash, Kibana) stack to store and query log data: each log entry is enriched with the correlation ids that make the unified log mentioned above possible.
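A small sketch of that enrichment (assuming Python’s standard logging module, emitting JSON lines for Logstash to ship; the field names are a convention of this sketch, not an ELK requirement):

```python
import json
import logging

class CorrelationFilter(logging.Filter):
    """Attach the correlation id of the current request to every record."""
    def __init__(self, correlation_id: str):
        super().__init__()
        self.correlation_id = correlation_id

    def filter(self, record: logging.LogRecord) -> bool:
        record.correlation_id = self.correlation_id
        return True

class JsonFormatter(logging.Formatter):
    """Emit JSON lines that Logstash can ingest and Kibana can query."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "service": "api-service",  # hypothetical service name
            "correlation_id": getattr(record, "correlation_id", None),
            "level": record.levelname,
            "message": record.getMessage(),
        })

logger = logging.getLogger("api-service")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
# In a real service the id would come from an incoming request header.
logger.addFilter(CorrelationFilter("req-123e4567"))
logger.warning("withdrawal rejected")  # carries correlation_id req-123e4567
```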

Never stop the evolution

Our take? Decomposing the initial single codebase must be viewed as a long-term task with ongoing refinements. At Conio it took us a couple of years to move, step by step, from 1 massive codebase to over 100 microservices. We have reached a point where we feel proud of the results, but at the same time we keep exploring. There are multiple possible new optimizations: moving from Docker Swarm to Kubernetes? Migrating backend-for-frontend services to serverless lambda functions? Switching to a fully continuous deployment flow? The possibilities are endless.

We have touched on a number of topics and technologies here. In the next articles we’ll share more details on our findings and progress. If you like, feel free to comment and tell us about your experience.
