We can do better than microservices

The happy code
6 min read · Sep 1, 2019

Building software using microservices introduces more problems than it solves. Blogs are being written about companies that have tried them and returned to monoliths, and some of the biggest companies have avoided them altogether.

I believe that the problems microservices set out to fix are real problems that people face, and this is what made them so popular in the first place. So, to convince people to stop building microservices and return to their monoliths, we still need to solve these problems, and we need to solve them better than microservices did.

If you just want to find out how we can do better, feel free to skip the next section.

What’s wrong with microservices?

Saying that microservices cause more problems than they fix is a bold claim. I’ve had first-hand experience working with them for many years, so in this section I’ll attempt to explain how I came to this conclusion.

Refactoring

Refactoring code across services is substantially more difficult than it is within a single code base.

Moving a feature exposed by one service into another requires deprecating it in the first service while adding it to the second, enumerating all the consumers of the feature (hope you know them all), and updating them one by one to use the new service.

You may have contract tests to help find the consumers and you may need logging to be truly confident the feature is not being used in the first service.

Compare this with a monolith, where features are organised by directory rather than by service: you simply move the file from one directory to another and (with the help of your editor) change the namespace.

Microservices are touted as a way to enforce better organisation of code than a monolith, but the reality is that they quickly deteriorate into a worse state, because the cost of refactoring is so high.

Performance

The overhead of calling a service is huge compared with the overhead of a function call.

Whether your services interact synchronously or asynchronously, you still pay an extra cost in CPU and memory.

RESTful services (not essential, but common) typically require data to be encoded to and decoded from JSON, which then passes through the networking stack, sometimes travelling physically to another server to be processed.

There’s not really anything to compare this with in a monolith. In many cases the overhead of a function call is about as close to nothing as you can measure.
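
As a rough illustration, here is a minimal Go sketch (not a rigorous benchmark) comparing a plain function call with the same call made over a local HTTP round trip with JSON encoding. The request/response types and the add function are invented for the example:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

type addRequest struct{ A, B int }
type addResponse struct{ Sum int }

// add is the "feature" invoked by both call paths.
func add(a, b int) int { return a + b }

func main() {
	// Path 1: a plain in-process function call.
	start := time.Now()
	sum := 0
	for i := 0; i < 1_000_000; i++ {
		sum = add(sum, i)
	}
	fmt.Println("1,000,000 function calls:", time.Since(start), sum)

	// Path 2: the same call behind a local HTTP service with JSON
	// encoding and decoding. Error handling is elided for brevity.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var req addRequest
		json.NewDecoder(r.Body).Decode(&req)
		json.NewEncoder(w).Encode(addResponse{Sum: add(req.A, req.B)})
	}))
	defer srv.Close()

	start = time.Now()
	for i := 0; i < 1_000; i++ {
		body, _ := json.Marshal(addRequest{A: i, B: i})
		resp, _ := http.Post(srv.URL, "application/json", bytes.NewReader(body))
		var out addResponse
		json.NewDecoder(resp.Body).Decode(&out)
		resp.Body.Close()
	}
	fmt.Println("1,000 HTTP calls:", time.Since(start))
}
```

Even here, on loopback with no real network in between, the per-call cost typically differs by several orders of magnitude.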

Complexity

Microservices don’t live in a vacuum. They’re components that interact to form a much larger system.

Any time you need to see the system as a whole, microservices make it more difficult to do your job. Debugging an issue across multiple servers, for example, becomes much harder.

With strong recommendations that microservices not share any state (lest you create “coupling” and produce the dreaded distributed monolith), most implementations of microservices that I have seen dedicate an astronomical amount of effort to the complex and intractable synchronisation of the same data, duplicated across numerous databases.

Although the ability to integrate services written in different technologies and programming languages is often presented as an advantage of microservices, it can easily lead to a huge increase in the skills required for a software engineer to work across the whole stack (and in many cases software engineers can’t, which reduces their ability to change teams within the company).

Deployment

Simply deploying a large number of services takes significantly more effort than deploying one.

Unless you have invested significantly in automating everything about creating a new microservice, there will always be an incentive to add to an existing one rather than create a new one. This automation includes provisioning servers, databases, queues, DNS, routing, logging, performance monitoring, alerts, etc.

Testing

Since microservices are traditionally deployed independently, it can be impossible to test your product with the version of each service that will end up in production.

For example, if you have a shared staging environment, it will likely contain a mixture of services, some at the same version as production and some at the next version, as other teams use it for their own testing.

Of course, with a monolith this is not a problem at all. You test a commit of all the code, which is atomic, and this is exactly what gets pushed to production.

Unless you are running all the microservices locally, you’ll also likely be developing your microservice against possibly unstable services, potentially wasting time debugging issues that originate elsewhere.

How we can do better

The easiest way to address the problems above is to stop building microservices and build a monolith instead. Certainly there are ways of working around some of these problems that are effective to varying extents, but at some point we just have to say enough is enough: microservices are not worth it.

Of course, by doing this we also lose the benefits they bring, so here I’ll present some alternative ways to achieve the same benefits that made microservices so successful in the first place.

Fault tolerance

With microservices, if one service goes down, it does not need to bring the rest of your application down. The same can be true of a monolith.

Microservices are often organised around business capabilities, which may not even be the right level of isolation, but with a monolith it is possible to deliberately route traffic to different servers using whatever strategy you prefer.

You want payments independent of signup? Route /payment to a different group of servers than /signup.

You could even go further than that, isolating your paid customers from the free ones if you really wanted to.

Run several copies of your application in separate containers on the same server if you wish, but make intentional choices about what fault tolerance you want.
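
As a sketch of what this deliberate routing could look like, here is a minimal Go reverse proxy in front of three hypothetical server pools, each running the same monolith (the pool addresses are invented for the example):

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// newPool returns a reverse proxy forwarding to one group of servers,
// each running the same monolith. The upstream addresses are hypothetical.
func newPool(upstream string) *httputil.ReverseProxy {
	target, _ := url.Parse(upstream) // constant URLs, error elided
	return httputil.NewSingleHostReverseProxy(target)
}

func main() {
	payments := newPool("http://payments-pool.internal")
	signup := newPool("http://signup-pool.internal")
	general := newPool("http://general-pool.internal")

	// Identical code everywhere; only the traffic routing differs.
	http.ListenAndServe(":80", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		switch {
		case strings.HasPrefix(r.URL.Path, "/payment"):
			payments.ServeHTTP(w, r)
		case strings.HasPrefix(r.URL.Path, "/signup"):
			signup.ServeHTTP(w, r)
		default:
			general.ServeHTTP(w, r)
		}
	}))
}
```

If the payments pool goes down, signups keep working, even though both pools are running exactly the same code.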

With a monolith, it is common (and recommended) to share a database, though it is reasonable to have multiple databases for different needs (data vs files, ephemeral vs permanent).

Instead of letting microservice boundaries dictate which logically separate database your data is stored in, use database sharding and replication to make those choices automatically.
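
A minimal sketch of the idea in Go, assuming customer data is partitioned by a hypothetical customer ID (the shard connection strings are invented):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// The shard connection strings below are hypothetical placeholders.
// In practice they would come from configuration.
var shards = []string{
	"postgres://db-shard-0.internal/app",
	"postgres://db-shard-1.internal/app",
	"postgres://db-shard-2.internal/app",
}

// shardFor deterministically maps a customer ID onto one shard, so every
// part of the monolith agrees on where that customer's data lives.
func shardFor(customerID string) string {
	h := fnv.New32a()
	h.Write([]byte(customerID))
	return shards[h.Sum32()%uint32(len(shards))]
}

func main() {
	fmt.Println(shardFor("customer-42")) // always the same shard for this ID
}
```

The partitioning decision is made by a machine, once, rather than re-litigated every time a team draws a new service boundary.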

Testing

As a monolith grows, the number of tests increases. As the number of tests increases, so does the time it takes to get feedback on a given commit.

When building microservices, it is common to run only the tests for the service that was updated, and while that solves the problem somewhat, testing across service boundaries is still valuable.

With a monolith, it is possible to determine pretty accurately which tests should be run for a given code change.

This provides the same advantage that microservices do (avoiding running every test on every commit); in fact, you’ll probably run fewer tests on most commits, and when more tests do run, it’s because they actually test your code change.

Microsoft’s Test Impact Analysis is just one example of an off-the-shelf tool that can do this.
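
The core idea can be sketched in a few lines of Go, assuming you record which test packages exercise each source file and diff that against the files a commit changed (the coverage map entries here are hypothetical):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// coverageMap records which test packages exercise each source file.
// In practice it would be generated from coverage data collected on
// previous test runs; these entries are hypothetical.
var coverageMap = map[string][]string{
	"billing/invoice.go": {"./billing/..."},
	"signup/form.go":     {"./signup/...", "./billing/..."},
}

func main() {
	// Ask git which files the latest commit touched.
	out, err := exec.Command("git", "diff", "--name-only", "HEAD~1").Output()
	if err != nil {
		panic(err)
	}

	// Select only the test packages affected by those files.
	selected := map[string]bool{}
	for _, file := range strings.Fields(string(out)) {
		for _, pkg := range coverageMap[file] {
			selected[pkg] = true
		}
	}
	for pkg := range selected {
		fmt.Println("go test", pkg) // run only what the change can affect
	}
}
```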

Deployment

While researching this article, I found several articles claiming that ease of deployment was a benefit of microservices.

Since my experience has been the complete opposite, I can only guess at what this might mean.

With a monolith you will likely deploy the same code to each of your servers in its entirety, but if you use a deliberate routing approach (as previously mentioned) it is likely that not all of the code will be in use on every server.

Simplifying deployment in this way also allows your team to refine it. Rather than investing the time and effort to work out how to deploy so many services at the same time, use your new-found bandwidth to perfect the deployment of your monolith, for example by pushing a canary deployment out first.
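
As a sketch, a canary rollout can be as simple as consistently sending a small fraction of users to the new version. Here is one way it could look in Go; the pool addresses are hypothetical:

```go
package main

import (
	"hash/fnv"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// proxyTo builds a reverse proxy for one pool of monolith servers.
// The upstream addresses used below are hypothetical.
func proxyTo(upstream string) *httputil.ReverseProxy {
	target, _ := url.Parse(upstream) // constant URLs, error elided
	return httputil.NewSingleHostReverseProxy(target)
}

func main() {
	stable := proxyTo("http://monolith-stable.internal")
	canary := proxyTo("http://monolith-canary.internal")

	http.ListenAndServe(":80", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Hash the client IP so each user consistently sees the same
		// version; send roughly 5% of users to the canary pool.
		ip, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil {
			ip = r.RemoteAddr
		}
		h := fnv.New32a()
		h.Write([]byte(ip))
		if h.Sum32()%100 < 5 {
			canary.ServeHTTP(w, r)
			return
		}
		stable.ServeHTTP(w, r)
	}))
}
```

If the canary misbehaves, you roll back one deployment, not dozens.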

If the thought of having dormant code deployed to servers worries you, webpack’s tree-shaking or similar dependency tracking could likely allow you to deploy only the code you require, but this seems unnecessary.

Having a copy of your entire application on every server also means that if you do need extra capacity for one feature and have spare capacity elsewhere, you don’t even need to deploy anything: just re-route traffic! (And people thought microservices were “highly scalable”.)

Final thoughts

Most of the proposed solutions here just point out that there are automated ways to achieve the same outcomes that many companies are attempting to do manually.

Let software engineers organise their code into directories that make organisational sense, and let machines work out the best way to scale it and provide the desired fault tolerance.

I’m a software engineer myself, not a consultant, living and breathing these decisions. These ideas are not only theoretical but tried and tested; I’m only recommending them after first-hand experience.
