The less-talked-about benefits of microservices

Cristian Vrabie
Published in Fixter Tech
Dec 19, 2018 · 8 min read


My Kamet colleague, Samuel Roze, recently posted an excellent article about the disadvantages of microservices, especially in the context of a startup. The post does a great job of diving into some of the real trade-offs and challenges that come with microservices, and it got me to revisit my own reasoning about them.

I reaffirmed to myself why they’re a good strategy and why they’re worth the effort. In the process, I discovered that some of the things I value about microservices are not necessarily the big advantages that advocates generally brandish, like effective scaling and cost reduction. Instead, there are a lot of smaller, additive benefits that deserve more attention.

1. Component decoupling

Everyone agrees that component decoupling is a must-have in any software endeavour bigger than a prototype. Even monolith advocates agree that you should be aiming for a well-decoupled set of components. Not doing so will cost you dearly, first in cascading technical debt and eventually in your team’s sanity.

One of the most significant advantages of microservices is that they do a good job of enforcing decoupling and, even more importantly, they put you in the right frame of mind to think about your software as a set of decoupled components.

While technically you can do this in a monolith, in over a decade in this industry I have yet to see a startup that managed to adhere to this philosophy and stick to it throughout its development lifecycle. Maybe you can do it if you have a team of only senior developers, with strict discipline and great code review practices.

In reality, though, my experience tells me that it’s only a matter of time until the business pressures force the team to cheat and rely on the implementation details of another component, directly import one module into another, or introduce a circular dependency.

I strongly believe that “Just try harder” is rarely a suitable solution to a problem, be that in software development or dieting. What you want to do is create a setup that nudges you in the right direction, removes as many barriers as possible and gives you the proper tools for the job.

Many of the other advantages I’m going to talk about stem directly from this architectural decision, so it’s hard to overstate its importance. The best weapon against snowballing technical debt is decoupling your components.
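To make this concrete, here is a minimal sketch of what the boundary looks like once, say, billing lives in its own service. The names (BillingClient, /users/:id/invoices) are hypothetical, not our actual code: other components can only learn about invoices through the service’s public HTTP contract, so there are no implementation details to reach into and no import to turn into a circular dependency.

```typescript
// Hypothetical client for a separate billing microservice.
// Assumes a runtime with a global fetch (Node 18+ or similar).

interface Invoice {
  id: string;
  amountPence: number;
  status: 'due' | 'paid';
}

export class BillingClient {
  constructor(private readonly baseUrl: string) {}

  // The public HTTP contract is the only surface other services can depend on.
  async invoicesForUser(userId: string): Promise<Invoice[]> {
    const res = await fetch(`${this.baseUrl}/users/${encodeURIComponent(userId)}/invoices`);
    if (!res.ok) {
      throw new Error(`billing service responded with ${res.status}`);
    }
    return (await res.json()) as Invoice[];
  }
}

// In a monolith, nothing stops this from creeping in over time:
// import { invoicesTable } from '../billing/internal/db';
```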

2. Increased security

A key concept in information security, the principle of least privilege, is to give a person or a tool only the permissions they need to perform their job. This applies to developers, system administrators and case managers, as well as to software.

It is much easier to restrict the permissions of your software when it’s deployed as isolated modules.

Let’s take file upload as an example, one of the most notorious attack vectors. Even in 2018, we’re still discovering 8-year-old vulnerabilities in basic code. If the service that handles file uploads doesn’t have permission to read files and has a restricted view of the database, you significantly mitigate the damage a vulnerability can do.

At Fixter, we use four methods to restrict the permissions of a service:

  • Each microservice runs in its own isolated Docker container and has no direct access to the host or other services
  • Each microservice has the minimum necessary access to the AWS infrastructure, through individual IAM Roles
  • Each microservice is blocked from talking to non-relevant services, instances or databases through Security Groups
  • Each microservice connects to the database with its own user, which is granted access only to the tables it needs

I believe these are important steps towards slowing down or stopping a security breach from cascading, and all of this is made simpler by the strict separation of code into microservices.
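As an illustration of the second and fourth points above, here is a minimal sketch of a least-privilege setup for a hypothetical upload service (the bucket, prefix, table and user names are made up, not our real configuration). The service’s IAM role can write new objects into one prefix of one bucket and nothing else, and its database user only sees the one table it needs.

```typescript
// Hypothetical least-privilege IAM policy document for the upload service's role.
// It can create objects under one prefix of one bucket: no reads, no deletes,
// no access to any other AWS resource.
export const uploadServicePolicy = {
  Version: '2012-10-17',
  Statement: [
    {
      Effect: 'Allow',
      Action: ['s3:PutObject'],
      Resource: ['arn:aws:s3:::example-uploads-bucket/incoming/*'],
    },
  ],
};

// The database follows the same idea, with a dedicated user per service, e.g.:
//   GRANT SELECT, INSERT ON uploads TO upload_service;
```

Even if the upload endpoint is compromised, the blast radius is limited to writing objects into that prefix and touching that one table.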

3. Increased resilience to defects

Similar to the security advantage, organising your code as microservices can make your system more resilient to defects by isolating failures through well-defined service boundaries.

Part of this is about keeping individual components less complex, with a clearly defined role. There is simply less that can go wrong in a service that only does one thing.

Another aspect is limiting the amount of damage an application-crashing bug or a memory leak can do. Only one service will be affected, rather than the entire application.

But the main advantage is yet again the nudge towards good architecture. If your entire application stops working when one of your services is misbehaving, then you’re not building microservices. Instead, if you’re doing it right, you’re probably already thinking about graceful service degradation, retry logic, self-healing and failover caching.
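Here is a minimal sketch of two of those ideas together (the helper and URLs are hypothetical, not our production code): a caller retries a flaky downstream service a couple of times and, if it stays down, degrades gracefully by serving the last known good response instead of failing the whole request.

```typescript
// Hypothetical retry-with-fallback helper. Assumes a runtime with a global fetch (Node 18+).
const lastKnownGood = new Map<string, unknown>();

export async function callWithFallback<T>(url: string, retries = 2): Promise<T> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const res = await fetch(url);
      if (!res.ok) throw new Error(`upstream returned ${res.status}`);
      const body = (await res.json()) as T;
      lastKnownGood.set(url, body); // refresh the failover cache on every success
      return body;
    } catch {
      // brief, growing backoff before the next attempt
      await new Promise((resolve) => setTimeout(resolve, 100 * (attempt + 1)));
    }
  }
  // Graceful degradation: serve stale data rather than crash the caller.
  if (lastKnownGood.has(url)) return lastKnownGood.get(url) as T;
  throw new Error(`service at ${url} is unavailable and nothing is cached`);
}

// Usage: const quote = await callWithFallback<Quote>('http://pricing.internal/quotes/123');
```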

At Fixter we pride ourselves on one statistic: every new joiner to our engineering team has deployed something to production in their first sprint. Most of the confidence to do this comes from the quality of the people on our team, the code review process and the six layers of automated and manual testing we do. But knowing that a failure will be isolated and not cascade through the entire system certainly helps.

4. Easier to experiment and evolve

For a team to stay agile, to continually grow their skills and develop new ways to approach a problem, engineers need to be able to experiment. We need to experiment with new technologies, frameworks, libraries and methodologies. And sometimes, for the long-term health of our product, we need to refactor to make use of the latest innovations.

In a long-lived product, experimenting and refactoring are hard to do because everything has to be balanced against the cost of developing new features and maintaining the existing ones.

You can create a safe space for experimentation with hackathons and a personal-project allowance, but your main product is still going to have a hard time changing.

I argue that with microservices it is easier for a product to evolve, because you can switch to a new framework or language gradually.

You can start experimenting with new technologies in an isolated environment that is not part of your core business. Once that proves useful, you can incorporate the change into all new services. After that, rather than spending massive amounts of time and effort refactoring everything at once, you can upgrade the existing services one by one, as your schedule and appetite allow.

5. Easier versioning

I’m a big fan of backwards compatibility. It makes life so much easier if you don’t have to worry about migrating every bit of legacy data and code when you make a big change to the product. However, it’s not always practical to maintain it. Sometimes it’s not worth the effort.

So, let’s say you want to release a new version of your API /v1/users that is not backwards compatible. You now need to either:

  • Update all clients that use that API. This is very labour-intensive, and the speed at which clients upgrade is sometimes out of your control.
  • Create the new API as a new endpoint (e.g. /v2/users) and maintain both until all clients have been updated. This works, but it starts to get tricky if the two endpoints share an entire tree of dependencies that now has to be maintained in two versions, use the same libraries at different versions, or if the change is a complete paradigm shift.
  • Deploy the entire monolith twice. This might be resource intensive, and you might expose other endpoints with unpatched vulnerabilities.

With microservices, it is both easy and resource-efficient to maintain multiple versions of the same service running in parallel.
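Here is a minimal sketch of the idea (host names and ports are hypothetical): both versions of the users service stay deployed side by side, and a thin gateway routes each request by its path prefix, so clients can migrate at their own pace while the rest of the system stays untouched.

```typescript
// Hypothetical path-based gateway in front of two deployments of the users service.
import * as http from 'http';

const upstreams: Record<string, { host: string; port: number }> = {
  '/v2/users': { host: 'users-v2.internal', port: 3000 },
  '/v1/users': { host: 'users-v1.internal', port: 3000 },
};

const gateway = http.createServer((req, res) => {
  const prefix = Object.keys(upstreams).find((p) => req.url?.startsWith(p));
  if (!prefix) {
    res.statusCode = 404;
    res.end('unknown route');
    return;
  }
  const { host, port } = upstreams[prefix];
  // Forward the request as-is to whichever deployment owns this version.
  const proxied = http.request(
    { host, port, path: req.url, method: req.method, headers: req.headers },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(res);
    },
  );
  proxied.on('error', () => {
    res.statusCode = 502;
    res.end('upstream unavailable');
  });
  req.pipe(proxied);
});

gateway.listen(8080);
```

Once the last client stops calling /v1/users, that one deployment and its route can be deleted without anything else noticing.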

6. Faster build and deploy times

This one is straightforward. Less code means less to compile and fewer tests to run. That can be an important benefit when you’re trying to roll out a critical bug fix as quickly as possible.

At Fixter, each microservice takes only a few minutes to build, test, package and deploy with Travis. Some of them take less than one minute. The monolith we had two years ago took more than 30 minutes, and we had significantly fewer features back then.

No more shouts of “Who’s hogging Travis?” across the office.

7. Fewer conflicts

As with build times, more independent projects mean work is easier to parallelise and conflicts are less likely. In fact, we have not had a serious case of Git spaghetti in over two years.

When everyone’s time is precious, minimising the risk of merge conflicts can save you from lots of wasted time.

It also minimises the chance that you’ll break someone else’s module because you upgraded a library to an incompatible version.

The disadvantages

While I’ve scoffed at several articles that dismiss microservices with nothing more than the argument “I don’t use them and it’s working fine”, Samuel’s article actually hits upon some real challenges they bring. I recognised many of them in my own work and can’t ignore them.

In particular, the extra complexity added to discoverability and debugging is a real kill-joy that can be mitigated, but not completely fixed, with good tooling and documentation.

It does not help that the entry skill level for working with microservices seems to be higher if you want to do them right. You need people with prior experience in distributed systems to guide the team, or you risk building a distributed monolith, which is the worst of both worlds.

And like Samuel, I do believe that premature optimization is the major cause of failure in startups. We must not try to solve problems we don’t have yet, which is a big temptation when an approach has so many potential advantages.

I did my best not to fall prey to confirmation bias when looking back at why I use microservices. I tried not to inflate the advantages or dismiss the disadvantages in my evaluation, and to focus on genuine problems our company had and fixed with this approach.

The reality is that this analysis will play out differently for each business and context. What might seem like a minor inconvenience to one team might be a total deal-breaker to another. When evaluating this for your business, I suggest you take a careful look at:

  • the business (size, stage, budget, the appetite for innovation)
  • the product (number of moving parts, potential growth curve)
  • the team (size, seniority, culture)

Conclusion

Microservices are not a panacea for all problems. They’re not even an architecture; they’re a method of organising your software components that comes with a myriad of benefits and challenges, some bigger than others.

While big companies will benefit greatly from simpler scaling, cost reductions and greater reusability of functionality, microservices have a place in the start-up world too.

By forcing you to think more about the way your components interact, microservices can push even a busy team onto the path of better architecture.

By isolating the different aspects of your product, you contain security and failure risks while allowing your team to be more agile.

What do you think? If you would like to share your opinion privately, you can use this form and I’ll try to address any questions.

References and further reading

  1. Designing a Microservices Architecture for Failure
  2. From Microservices to Serverless: How to avoid converting “Distributed monolith” microservices into “Serverless monoliths”
  3. The Death of Microservice Madness
  4. Monzo: Building a modern backend
  5. Don’t Build a Distributed Monolith
  6. Optimization Mistakes that Kill Startups
