Microservices and when not to use them

Florian Pfleiderer
Digital Frontiers — Das Blog
Oct 25, 2019

For several years, I’ve been helping various customers and projects adopt microservices as a strategy, and they have benefited from doing so in various ways. The advantages of microservices have often been described: scalability, independent development and deployment cycles, and the ability to have different teams work on them, to name just a few.

I have been and still am a great advocate for using microservices. But when I recently joined a customer project, I saw the pattern massively abused and it resulted in lots of unnecessary problems and complexity. Ultimately, the best countermeasure was throwing away the microservices and going back to a monolith (or a “modulith”, a term I first heard from Oliver Drotbohm during one of his talks).

What went wrong?

When the project was started, somebody proposed using microservices. That is not a bad decision per se, but at a time when wide parts of the functionality were still rather foggy, some developer had already decided that they were going to need exactly these 10 microservices. As it later turned out, the services had been cut rather badly: implementing a feature always involved changing multiple services and replicating enum values across them. All in all, they had built a distributed monolith. All the complexity of microservices, but none of their advantages. An interesting observation: even though over time the team learned to hate working on their services, all of them defended the microservices architecture pattern like it was the holy grail that would surely bring benefits in the future.

So, are microservices a bad thing?

Of course not. As I already stated in the introduction, I’m a big fan of microservices myself. What is bad is designing them up-front on the drawing board. In my opinion, microservices should emerge once you already know your domain(s) and see an opportunity to pull out a piece that is rather independent from the others, with a real benefit to be gained from doing so. When talking about the strangler pattern, people usually refer to legacy software being split into cool new microservices. But I think it is also a valid option to start developing your all-new piece of software as a single service and to pull out components into services as you go.

As Martin Fowler states it:

1. Almost all the successful microservice stories have started with a monolith that got too big and was broken up

2. Almost all the cases where I’ve heard of a system that was built as a microservice system from scratch, it has ended up in serious trouble.

To put it into perspective: the service I recently worked on queried two different backend systems via REST, converted the received JSON into a custom format, replicated some data into its own database for caching, and served the results via its own REST endpoints. All of it sprinkled with a bit of user and license management. Honestly, I do not see 10 services in this. But since it was already cut that way, a simple change like introducing a new type of license had to be replicated into 9 other services, because all of them wanted to check whether a user had a valid license.

After joining all of the functionality in a monolithic application, none of this technical complexity remained. I can really see that other microservices will be extracted from the monolith in the future, but when doing so, they will have a clear cut and an isolated domain.

In the rest of this article, I want to share some of my thoughts and learnings on the topic with you.

Keep it simple!


Software developers tend to over-engineer things. Always keep in mind XP principles like KISS and YAGNI.

I would recommend starting with a single service instead of cutting several microservices from the start. In my experience, the cut you decide on before knowing the software will be the wrong one.
To ensure your monolith keeps separate modules, consider watching Oliver Drotbohm’s talk about “moduliths”. He also provides an ArchUnit-based plugin to verify that your architecture follows his guidelines.
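ArchUnit lets you express such rules as tests that fail the build when a forbidden dependency appears. The idea behind it can be sketched in a few lines of plain Java — a minimal, hypothetical stand-in, not the real ArchUnit API; the module names and the whitelist are invented for illustration. ArchUnit does this properly by scanning your actual bytecode:

```java
import java.util.Map;
import java.util.Set;

// Toy version of an architecture rule: an explicit whitelist of allowed
// module-to-module dependencies. Anything not on the list is a violation.
public class ModuleRules {

    private static final Map<String, Set<String>> ALLOWED = Map.of(
            "orders", Set.of("licensing", "users"),
            "licensing", Set.of(),
            "users", Set.of()
    );

    public static boolean isAllowed(String fromModule, String toModule) {
        return ALLOWED.getOrDefault(fromModule, Set.of()).contains(toModule);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("orders", "licensing")); // true
        System.out.println(isAllowed("users", "orders"));     // false: would create a cycle
    }
}
```

The point is that the allowed dependencies are declared in one place and checked automatically, so module boundaries cannot erode silently.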

Cutting is easier than glueing

Extracting a new service from a monolith is by far easier than getting wrongly cut services back together. Unifying data types that have evolved in different ways or might have been saved in different kinds of databases, different programming styles or even languages — putting those pieces back together once you figure out that your services are not cut the right way, requires a considerable amount of work.

In our app’s first implementation, classes representing users or licenses were scattered across several services, and any modification caused huge effort. When services’ data models depend on each other, you end up with a “distributed monolith” and lose most or all of the benefits microservices offer.
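To make this concrete, here is a minimal, hypothetical sketch (the names LicenseModule, LicenseType and hasValidLicense are invented for illustration) of what owning the license model in a single module looks like. Adding a new license type is then one local change instead of an update to ten services:

```java
import java.util.EnumSet;
import java.util.Set;

// Sketch: exactly one module owns the license model and its validation rules.
public class LicenseModule {

    // The enum lives in one place; no other module keeps a replicated copy.
    public enum LicenseType { TRIAL, STANDARD, PREMIUM }

    private static final Set<LicenseType> VALID = EnumSet.allOf(LicenseType.class);

    // Other modules call this method instead of re-implementing the check.
    public static boolean hasValidLicense(LicenseType type) {
        return type != null && VALID.contains(type);
    }

    public static void main(String[] args) {
        System.out.println(hasValidLicense(LicenseType.PREMIUM)); // true
        System.out.println(hasValidLicense(null));                // false
    }
}
```

In a monolith this is just an ordinary package boundary; in the original 10-service cut, the same enum and check existed in every service that touched licenses.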

You should only extract a new service after you’ve thoroughly understood your domain and have actually found an isolatable piece of functionality.

Think about communication


Synchronous communication is expensive…

REST calls cost time. When a request has to pass through several services before returning, these latencies can quickly accumulate to a point where they noticeably degrade the user experience.

When using microservices, you should move away from synchronous network communication as early as possible and optimize network routes.
A monolithic application mitigates wide parts of the problem: performance still matters, but calling a function within the same JVM involves far more manageable complexity.
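A back-of-the-envelope model makes the accumulation visible. The numbers below are invented for illustration: every synchronous hop pays network and serialization overhead on top of the actual work, while an in-process call pays only for the work itself:

```java
// Illustrative latency model: each remote hop adds a fixed network overhead,
// an in-process call chain does not.
public class LatencyDemo {

    static long chainedRestCalls(long[] processingMillis, long networkOverheadMillis) {
        long total = 0;
        for (long work : processingMillis) {
            total += work + networkOverheadMillis; // every hop pays the network tax
        }
        return total;
    }

    static long inProcessCalls(long[] processingMillis) {
        long total = 0;
        for (long work : processingMillis) {
            total += work; // same work, no serialization or round trip
        }
        return total;
    }

    public static void main(String[] args) {
        long[] work = {10, 10, 10, 10};                 // four services, 10 ms of work each
        System.out.println(chainedRestCalls(work, 30)); // 160 ms: 40 ms of work, 120 ms of network
        System.out.println(inProcessCalls(work));       // 40 ms
    }
}
```

With four hops, three quarters of the response time in this toy example is pure communication overhead — exactly the kind of cost a monolith simply does not have.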

… so asynchronous communication is going to fix that, right?

Asynchronous communication might look like the silver bullet that solves the runtime problems of synchronous calls.
But eventual consistency, message loss (or delivery of the same message multiple times), and distributed logic all bring problems of their own.

Using a bus system can be very helpful for transporting events through the system, but should not be misused as a means of remote method invocation.
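One consequence of at-least-once delivery is that every consumer has to be idempotent. A minimal sketch of deduplicating by message id (the names are invented; a real system would persist the seen ids, e.g. in a database table, so they survive restarts):

```java
import java.util.HashSet;
import java.util.Set;

// Minimal idempotent-consumer sketch: messages may arrive more than once,
// so processing is keyed by a unique message id and duplicates are skipped.
public class IdempotentConsumer {

    private final Set<String> processedIds = new HashSet<>();
    private int processedCount = 0;

    // Returns true if the message was processed, false if it was a duplicate.
    public boolean handle(String messageId) {
        if (!processedIds.add(messageId)) {
            return false; // already seen: at-least-once delivery strikes again
        }
        processedCount++; // the actual business logic would run here
        return true;
    }

    public int processedCount() {
        return processedCount;
    }

    public static void main(String[] args) {
        IdempotentConsumer consumer = new IdempotentConsumer();
        consumer.handle("evt-1");
        consumer.handle("evt-1"); // duplicate delivery, ignored
        consumer.handle("evt-2");
        System.out.println(consumer.processedCount()); // 2
    }
}
```

This is precisely the kind of code that exists only to cope with the messaging infrastructure — inside a monolith, a method is simply called once.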

Code generation was not as good as it sounded


Initially, as everyone knew they’d be writing lots and lots of services, it seemed clear that a tool to generate new services in a unified way would come in handy. With that in mind, using JHipster sounded like a good idea. The lesson we had to learn the hard way: for us, it wasn’t.
JHipster might add value to projects when used in a very disciplined way. But whenever we deviated from the intended use the slightest bit, the drawbacks massively outweighed the benefits.

JHipster can generate and update code that is the same across projects, which sounds very charming for a project with multiple microservices.
In our experience, however, each service needed minor tweaks to the generated classes, which were erased on every update and had to be re-done each time.

Skipping some updates and then updating single services later always caused problems and required lots of manual work.

We also did not save as much time as expected by writing the code generator plugin once and then migrating multiple services.
Problems are much harder to detect within the generator project, since you cannot run your application’s tests there. Oftentimes we would find bugs in certain services only after a migration, fix the generator, and subsequently re-apply it to all services.

On top of that, it always felt like not all of the generated code was really necessary for each service. Every single one of our services created a database and various collections in our MongoDB, even when persistence was not part of its functionality. In the end, around 70% of our code was generated, and nobody could tell how much of it we really needed.

Moving away from code generation felt really liberating, as we could choose which code to write and how to write it, without having it accidentally removed on the next update. The one thing every single team member could agree on when removing it from the project was that we, personally, would not use a code generator again.

To kick off new projects, we at Digital Frontiers are big fans of the Spring Initializr. If you’re interested, check out my colleague Frank’s blog on customizing it according to your needs.

Microservices need code to solve their own technical complexity

Microservices can add a lot of value to a project when they are needed. When they are not, they primarily introduce complexity.
Mastering this complexity usually means writing many lines of code: REST clients and message queue subscriptions are just two examples.
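As an example of that baggage: even a trivial remote call needs failure handling that a local method call simply does not. A generic retry wrapper, sketched in plain Java (the names are invented; real projects would typically use a library such as Resilience4j):

```java
import java.util.function.Supplier;

// Sketch of the kind of plumbing a remote call needs: retry on failure.
// A local method call inside a monolith needs none of this.
public class Retry {

    public static <T> T withRetries(Supplier<T> call, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e; // in real code: log, back off, maybe give up early
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        // Fake "remote call" that fails twice before succeeding.
        int[] attempts = {0};
        String result = withRetries(() -> {
            if (++attempts[0] < 3) {
                throw new RuntimeException("connection refused");
            }
            return "200 OK";
        }, 5);
        System.out.println(result + " after " + attempts[0] + " attempts");
    }
}
```

Add timeouts, circuit breakers, serialization, and service discovery on top, and it becomes clear how quickly the purely technical share of a codebase grows.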

Martin Fowler talks about the “baggage required to manage microservices”. When the technical complexity of the microservices greatly outweighs the complexity of the underlying functionality, you’ve probably chosen a wrong approach.

source: https://martinfowler.com/bliki/MicroservicePremium.html

When migrating our architecture, we got rid of huge amounts of code without losing any functionality. Our overall lines of code dropped from roughly 28,000 to a mere 5,000, while the number of files in the project shrank from over 500 to around 100. Not all of the deleted code existed just to handle the complexity introduced by the microservice architecture, but the numbers illustrate rather well the one thing that should be your key takeaway from this blog post: think lean!

Using a well-structured modulith got our development speed back up and gave us the flexibility we needed. We have all heard of migrating a legacy monolith to microservices, but for me this was the first time migrating legacy microservices into a monolith.

What are your experiences with the two architectural styles and would you still start with microservices from day one?
