Messaging: Our choice to extract Monolithic features into Microservices

by Nicolás Darriulat

Published in Nmbrs Tech Blog (Visma Nmbrs) · 6 min read · Feb 3, 2022


As you have seen in our previous articles, our journey towards Microservices is a long path, with new challenges appearing every now and then. One of them is reducing our current direct dependency on the Monolith application. This is an important step, since we are very aware that the process is gradual and that many of our features and services will still live inside that code for a long time.

Why now?

Monolithic feature exposure is needed throughout our Nmbrs application. One of our Company Goals for this year was to establish a solid tech stack for our microservices. We are pursuing something we can build on with confidence, knowing that it will work smoothly for future microservices. That's why, to start the year, we chose a module to serve as a proof of concept, and in that way prove that this new stack was a solid one.

The Chosen Module

The education details page seemed to be a good candidate. Its features are quite scoped and isolated, and the data is not entangled with other modules from Nmbrs, which meant that we could focus on migrating it without worrying about external dependencies.

At the same time, within this details page it is possible to add Action Points and Documents, two features that we are not planning to migrate to a microservice yet. And that's exactly what we needed: a module that would be hosted in a Microservice while, at the same time, embedding in its UI some features coming from the Monolith.

So how then?

As you can see in the image above, by taking the Education module out of the Monolith, the UI could be placed outside of it, but we would still have to make it coexist with two features coming from the Monolith.

So we came up with a solution. It was really important to introduce a new layer of abstraction: an intermediate layer that would provide us with the operations we need, regardless of whether they live in the Monolith or not. That's why we decided to make use of these three mechanisms together:

  • Messaging
  • NuGet packages
  • An Internal Api

So why are these three components so important? They all represent different layers of abstraction between the new UI and the Monolith, and together they form the new flow for the embedded Action Points and Documents. But let's go into detail on each of them.

Messaging

Consuming functionality from the Monolith is not straightforward at all, and Messaging gave us a good way of doing it without having to interact with the Monolith directly, following the producer/consumer pattern: we send a message to a queue, the Monolith listens for the request, and a reply is produced for the consumer of that queue, in this case a Microservice.

An example of how to get a documents list using messaging
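
Here is a rough sketch of what such a request/reply call could look like using MassTransit's request client (more on MassTransit below); the GetEmployeeDocuments and EmployeeDocumentsResult contracts and the client class are hypothetical names for illustration, not our actual code:

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using MassTransit;

    // Hypothetical message contracts; the real Nmbrs contracts differ.
    public record GetEmployeeDocuments(int EmployeeId);
    public record EmployeeDocumentsResult(IReadOnlyList<string> DocumentNames);

    public class DocumentsMessagingClient
    {
        private readonly IRequestClient<GetEmployeeDocuments> _client;

        public DocumentsMessagingClient(IRequestClient<GetEmployeeDocuments> client)
            => _client = client;

        // Sends the request message to the queue and awaits the reply
        // produced on the Monolith side.
        public async Task<IReadOnlyList<string>> GetDocumentsAsync(int employeeId)
        {
            var response = await _client.GetResponse<EmployeeDocumentsResult>(
                new GetEmployeeDocuments(employeeId));

            return response.Message.DocumentNames;
        }
    }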

To implement this asynchronous communication technique, we used the implementation that Microsoft Azure provides for our .NET stack: Azure Service Bus, which we have been using for a while now. Since its usage has been increasing throughout our application, we have been establishing a solid structure and architecture around it.

In addition, we used the request/response pattern with the help of the MassTransit framework, which provides a good set of methods designed specifically for that pattern.

An example from MassTransit's website, showing how to set up a Message Consumer and configure the Azure Service Bus connection string
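
The setup looks roughly like the sketch below, written in the spirit of the MassTransit documentation and reusing the hypothetical contracts from the earlier sketch; the consumer name and the configuration key for the connection string are assumptions:

    using System.Threading.Tasks;
    using MassTransit;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    // Monolith-side consumer: listens for the request on its queue and
    // responds, completing the request/response round trip.
    public class GetEmployeeDocumentsConsumer : IConsumer<GetEmployeeDocuments>
    {
        public async Task Consume(ConsumeContext<GetEmployeeDocuments> context)
        {
            // Placeholder data; the real consumer calls the Monolith's document logic.
            var documents = new[] { "contract.pdf", "diploma.pdf" };
            await context.RespondAsync(new EmployeeDocumentsResult(documents));
        }
    }

    public class Program
    {
        public static Task Main(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureServices((hostContext, services) =>
                {
                    services.AddMassTransit(x =>
                    {
                        x.AddConsumer<GetEmployeeDocumentsConsumer>();

                        x.UsingAzureServiceBus((context, cfg) =>
                        {
                            // The Azure Service Bus connection string comes from configuration;
                            // the key name is an assumption.
                            cfg.Host(hostContext.Configuration["ServiceBus:ConnectionString"]);
                            cfg.ConfigureEndpoints(context);
                        });
                    });
                })
                .Build()
                .RunAsync();
    }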

NuGet Client Packages

NuGet packages have been our go-to at Nmbrs for a while now, whenever we want to expose behavior internally between different services. We usually use them as a wrapper to access Microservices' endpoints, in order to define the contract to communicate with them and enforce that other components respect it.

In this case, for all of our packages that would use messaging, the contract was the structure given by the BaseProvider. This abstract class requires several parameters in its constructor, such as the connection strings for the Service Bus or the message queue name. There is also a Send method, which takes a request message that is mapped to the corresponding object on the other side of the messaging process.
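
A simplified sketch of what such a BaseProvider and a concrete provider could look like, reusing the hypothetical contracts from the Messaging sketch (the real class takes more settings and validation):

    using System;
    using System.Threading.Tasks;
    using MassTransit;

    // Simplified sketch of a BaseProvider; the real one also requires the Service Bus
    // connection settings, which here are assumed to live in the bus configuration.
    public abstract class BaseProvider
    {
        private readonly IBus _bus;
        private readonly Uri _queueAddress;

        protected BaseProvider(IBus bus, string queueName)
        {
            _bus = bus;
            _queueAddress = new Uri($"queue:{queueName}");
        }

        // Sends a request message to the configured queue and maps the reply
        // to the expected response type.
        protected async Task<TResponse> Send<TRequest, TResponse>(TRequest request)
            where TRequest : class
            where TResponse : class
        {
            var client = _bus.CreateRequestClient<TRequest>(_queueAddress);
            var response = await client.GetResponse<TResponse>(request);
            return response.Message;
        }
    }

    // A concrete provider, as it could be shipped inside a NuGet client package.
    public class DocumentsProvider : BaseProvider
    {
        public DocumentsProvider(IBus bus) : base(bus, "employee-documents") { }

        public Task<EmployeeDocumentsResult> GetDocuments(int employeeId) =>
            Send<GetEmployeeDocuments, EmployeeDocumentsResult>(new GetEmployeeDocuments(employeeId));
    }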

As you can see, client packages helped us this time to enforce the settings needed to communicate with our Service Buses. This also means that whenever there is a change in the configuration required to connect to them, or even if we stop using Service Buses altogether, we can release a new version of the packages, and the consuming modules only need to upgrade their reference to follow the new implementation.

Internal Api

The complex architecture in which the Monolith and Microservices reside brings drawbacks for the Frontend components too. In the Monolith, the frontend and backend used to be strongly entangled with each other, which simplified some things. But as we move away from that, the frontend files are still hosted within the Monolith backbone while no longer consuming its backend as they used to. The frontend now needs a backend service for that, which is why we have created an Internal Api service.

This new component appears as another contract, in this case between the Frontend Widgets and the Client Packages, only for those cases in which the backend operations come from the Monolith.
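
A minimal sketch of such an Internal Api endpoint, delegating to the DocumentsProvider from the client-package sketch above (the route and type names are illustrative):

    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;

    // The Frontend Widgets call this HTTP endpoint; the controller only delegates
    // to the client package, which talks to the Monolith over messaging.
    [ApiController]
    [Route("api/employees/{employeeId}/documents")]
    public class DocumentsController : ControllerBase
    {
        private readonly DocumentsProvider _documents;

        public DocumentsController(DocumentsProvider documents) => _documents = documents;

        [HttpGet]
        public async Task<ActionResult<EmployeeDocumentsResult>> Get(int employeeId) =>
            Ok(await _documents.GetDocuments(employeeId));
    }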

In the same way as many of the other intermediate layers used to communicate with the Monolith, the idea is for this API to be a temporary component, which will be replaced by the corresponding API once the backend is extracted to a Microservice.

A very crucial step here is to maintain the same endpoint paths when we switch to the new API, so that the transition from one API to the other is as seamless as possible.

As you can see in the image, whenever we decide to make the switch from using the Internal Api to a Microservice API, the only change needed should be in the base URL.
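
For example, a small sketch of that switch (the host names below are placeholders, not our real URLs):

    using System;
    using System.Net.Http;

    // The relative path is identical against either API; only the base address changes,
    // so moving from the Internal Api to the Microservice API is a configuration change.
    var internalApi = new HttpClient { BaseAddress = new Uri("https://internal-api.example.com/") };
    var educationApi = new HttpClient { BaseAddress = new Uri("https://education-api.example.com/") };

    const string path = "api/employees/42/documents";

    // Before the extraction: the Internal Api answers the call...
    var fromInternal = await internalApi.GetAsync(path);
    // ...after the extraction: the same path, now served by the Microservice API.
    var fromMicroservice = await educationApi.GetAsync(path);

    Console.WriteLine($"{(int)fromInternal.StatusCode} / {(int)fromMicroservice.StatusCode}");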

Lessons learned

The initial extraction of the Action Points and Documents features is done, and so far it has been a success. The feature works as before, and at the same time we get all the advantages that we already know microservices bring: shorter delivery cycles, zero-downtime releases, and more modern technologies in our stack.

This is how the Education page looks now; as you can see, all the original features are still available:

We also took the opportunity to make some UI improvements!

It has been a successful proof of concept, which motivates us to apply it in future migrations and monolithic feature extractions. That is why, after proving its success, the BaseProvider I mentioned before keeps on growing and is slowly becoming more and more widely used throughout the application.

This journey is just starting, so stay tuned to our blog and get to know more about how we keep on exposing our other features!

See our Instagram and Blog for an inside look at what working at Nmbrs can mean for you!

Nicolás Darriulat, Full-Stack Developer & Scrum Master at Nmbrs

About Nicolás

Originally from Uruguay, Nicolás has been a full-stack developer at Nmbrs for almost 3 years. Since then, he has been working behind the scenes on the Nmbrs HR features, always contributing to what he enjoys most about this field: bringing value to customers in order to simplify their daily lives. He is a firm believer that software must be delivered in the shortest delivery cycles, which is why he believes microservices are the way to go!
