The hype has died down and developers have now had a chance to get their hands dirty with microservices. I’m one of them. As with any hot new flavour of the month, sometimes the promise doesn’t match the lived experience. Here are some of the things I have found through working on two large, separate microservice-based projects.
Hard to change.
‘Wait, what?’, you say, ‘Changing things easily is meant to be the main benefit of the approach!’ In my experience the opposite is true. Sure, making a change within one microservice that does not affect an upstream or downstream service is easy. But these changes are rare, and have little business value — they are also known as ‘refactoring’. A change with business value will almost always affect other microservices. And now that your code is split across a number of different repositories, understanding what calls your code and what impact the change will have on clients is much harder to reason about. Complexity has moved to the interactions across service boundaries. Contract tests can help, but are much heavier than simple unit tests.
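To make the weight difference concrete, here is a minimal sketch in Python. All names (`price_with_tax`, `check_contract`, the payload shape) are hypothetical: a pure function is trivially unit-testable, but once the same logic sits behind a service boundary, consumers need a contract check to catch changes that the provider’s own tests would never notice.

```python
def price_with_tax(price: float, rate: float = 0.1) -> float:
    """Pure logic inside one service: trivially unit-testable."""
    return round(price * (1 + rate), 2)

# Unit test: fast, no other service involved.
assert price_with_tax(100.0) == 110.0

# Consumer-driven contract check: once pricing moves behind a service
# boundary, the consumer must pin the response shape it relies on.
EXPECTED_CONTRACT = {"price": float, "tax": float, "total": float}

def check_contract(response: dict) -> bool:
    """Verify the provider's payload still matches what this consumer expects."""
    return (set(response) >= set(EXPECTED_CONTRACT)
            and all(isinstance(response[k], t) for k, t in EXPECTED_CONTRACT.items()))

# A provider change that renames "total" to "amount" passes the provider's
# own unit tests, but breaks this consumer's contract.
assert check_contract({"price": 100.0, "tax": 10.0, "total": 110.0})
assert not check_contract({"price": 100.0, "tax": 10.0, "amount": 110.0})
```

Real contract-testing tools do far more than this (provider verification, broker-managed pacts), which is exactly why they are heavier than the one-line unit test above.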
Calling over HTTP hurts.
It really does. Compared to making a simple method call in any language, the hoops we have to jump through just to call an HTTP REST API (the default communication mechanism for microservices) are ridiculous. How much time has been burnt by developers mucking around with HTTP headers? Why do we have to gimp the expressiveness of our APIs by only allowing them to be defined as untyped CRUD operations? I’m looking at you, REST. Any time you split one service into two, what used to be a simple method call becomes a much more difficult interaction across a microservice boundary.
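A hedged sketch of those hoops, side by side. Everything here is invented for illustration (the user payload, the endpoint path, the `fake_http_transport` stand-in that plays the remote service so the example runs without a network): the local version is one call, while the HTTP version drags in verbs, paths, headers, status codes, and serialization.

```python
import json

def get_user_locally(user_id: int) -> dict:
    # One method call; the compiler/runtime and call stack do the work.
    return {"id": user_id, "name": "Ada"}

def fake_http_transport(method, path, headers=None, body=None):
    # Stand-in for a remote service: returns (status, headers, body bytes).
    return 200, {"Content-Type": "application/json"}, \
        json.dumps({"id": 42, "name": "Ada"}).encode()

def get_user_over_http(user_id: int) -> dict:
    # The hoops: verb and path conventions, headers, status-code
    # interpretation, and (de)serialization, just to make one "call".
    status, headers, body = fake_http_transport(
        "GET", f"/users/{user_id}",
        headers={"Accept": "application/json", "Authorization": "Bearer ..."},
    )
    if status != 200:
        raise RuntimeError(f"call failed: HTTP {status}")
    if headers.get("Content-Type") != "application/json":
        raise RuntimeError("unexpected content type")
    return json.loads(body)

# Same result, very different amounts of ceremony.
assert get_user_locally(42) == get_user_over_http(42)
```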
Cross-cutting concern drift.
We’ve got a new way of logging, we need to deploy in a new way, the way we manage configuration has changed, we’re bumping a dependency for reasons that also apply to our other microservices, we’ve split out code to a shared module. These are cross-cutting concerns, and changes to them may need to be applied to every other microservice that has been developed, including those already in production. Who wants to roll a boring cross-cutting concern change out to all the other microservices and nurse them into production? Nobody, that’s who, so it doesn’t happen.
Zombies & Graveyards.
Just as larger codebases can become unloved, so too can a system of microservices. Developers become lazy and don’t roll out changes to a pattern across existing microservices, succumbing to cross-cutting concern drift. Teams finish a microservice and move on to the next thing. Once the service loses its owner, it’s now in zombie mode. It is no longer loved, and no longer actively updated. One colleague of mine needed to change one of these zombie services to add a bit of extra data to a payload it returned. The change was simple, but because of drift the work involved in getting the change into production took weeks and stressed him out no end. Any microservice that does not get deployed to production regularly is a zombie service.
We are only just starting to see the follow on from microservice zombies: microservice graveyards. These are entire clusters of services that no one really owns and no one really wants to own. They are out there, and maybe they can still process requests, but they are really just markers of a system’s death. Tombstones, in other words.
Ops, poor ops.
In the highly likely event that their dev team was not tall enough to ride microservices but did anyway, ops are now managing an order of magnitude more services than before, and nothing has really improved. In fact, they’re extra busy trying to track down a unit of work’s failure across a number of services instead of one, because:
Distributed computing is hard.
This is well known. Every microservice adds more distribution, and more distribution means more of that hard computing: network failures, timeouts, retries, and partial failures that a single process never had to think about.
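Here is a small sketch of what the caller inherits once a call crosses the network. The names and failure counts are made up for illustration (`make_flaky_service` simulates a dependency that drops the first couple of connections), but the retry-with-backoff boilerplate is the kind of code every remote caller ends up needing, and no local method call ever did.

```python
import time

class TransientNetworkError(Exception):
    """Stand-in for a connection reset / timeout from a remote dependency."""

def make_flaky_service(failures_before_success: int):
    """Simulate a remote call that fails a few times, then succeeds."""
    state = {"calls": 0}
    def call():
        state["calls"] += 1
        if state["calls"] <= failures_before_success:
            raise TransientNetworkError("connection reset")
        return "ok"
    return call

def call_with_retries(remote_call, attempts=3, backoff_s=0.0):
    """Retry with exponential backoff: boilerplate a local call never needs."""
    last_error = None
    for attempt in range(attempts):
        try:
            return remote_call()
        except TransientNetworkError as err:
            last_error = err
            time.sleep(backoff_s * (2 ** attempt))  # back off before retrying
    raise last_error

flaky = make_flaky_service(failures_before_success=2)
assert call_with_retries(flaky, attempts=3) == "ok"
```

And this is only the easy part: the hard cases (the call succeeded but the response was lost, so is the operation safe to retry?) are where distributed computing really earns its reputation.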
You want a new microservice? Well you’re probably going to need:
- Config per environment,
- Code style rules,
- Application definition file (including dependencies),
- Obligatory Dockerfile,
- Build scripts,
- Deployment scripts,
- CI/CD pipeline definition,
- HTTP entrypoint definitions,
- API documentation,
- A README,
- Healthchecks / Monitoring,
- Registration with some security system thing,
- A review from a special team that will need to give your new service its blessing before it can be deployed into production, and
- The actual code!
If you don’t split, the only thing you need to create and manage is the actual code.
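Back-of-the-envelope arithmetic with assumed numbers: the list above is roughly fourteen artifacts per service, and that scaffolding grows linearly with every split, while a single service pays the cost once.

```python
# Assumed count from the list above: config, code style rules, app
# definition, Dockerfile, build scripts, deploy scripts, CI/CD pipeline,
# HTTP entrypoints, API docs, README, healthchecks, security registration,
# review sign-off, and the actual code.
ARTIFACTS_PER_SERVICE = 14

def scaffolding_artifacts(service_count: int) -> int:
    """Total artifacts to create and keep up to date across the system."""
    return ARTIFACTS_PER_SERVICE * service_count

assert scaffolding_artifacts(1) == 14    # one service: one set of scaffolding
assert scaffolding_artifacts(20) == 280  # the same system as 20 microservices
```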
Massive developer productivity drop.
Due to all of the reasons above. Contrary to what management might think, we developers like being productive. When we’re bogged down in cruft, we are sad.
How did we get here?
Microservice architecture is a reaction against developer frustration with working with large systems that are past their prime. Imagine a system that takes forever to build and run automated tests; it is tricky to deploy; management have become fearful of changes and now each release is mired in bureaucracy and checklists; a number of distinct teams are treading on each other’s toes; and the branching and merging strategy is out of control. The dreaded monolith. No developer in their right mind would want to work on this application, but a lot of us have to. Now imagine that a new architectural paradigm comes along that lets you leave this world behind, and create a shiny new service from scratch, where you get to choose the technology. I’ve felt the joy of clicking ‘file -> new microservice’ myself: Greenfields! Tech playground! Short term productivity! I can understand this small thing! I’m doing something! I’m making an actual thing that will actually run! It’s easy to see how hype-driven development took over and microservices started popping up like mushrooms.
So, now in our software architecture narrative we have the villain, the monolith, and the hero, the microservice. This is the problem in a nutshell. Because of this size binary, developers now think of big as bad, and small as good, and this is not so. One size does not fit all. I recently witnessed a production microservice that was one query on a database. It had only one client. It had been developed like that because of blind faith that small is good, and big is bad. I know that the original proponents of microservice architecture would baulk if they saw this in the wild. I know that Martin Fowler predicted much of the pain that I’ve felt and written about here in his articles discussing microservices. And yet, here we are. We need to make it ok to be big again.
Rebranding the Monolith
From Wikipedia: ‘A monolith is a geological feature consisting of a single massive stone or rock, such as some mountains, or a single large piece of rock placed as, or within, a monument or building. Erosion usually exposes the geological formations, which are often made of very hard and solid igneous or metamorphic rock.’ Now that doesn’t sound to me like a good thing for software to be. It implies that the only way the software can change is through erosion. It is the opposite of agile. However, there are many examples of enormous pieces of software that change constantly — Linux and Facebook immediately spring to mind. I would never refer to Linux as a monolith. We need a new term.
Enter the ‘Megaservice’. Big, bold and beautiful, dynamic and agile, the Megaservice is coded to be easy to change, quick to build, easy to deploy. Made for code reuse it will unlock lost developer productivity! It is modular! It is cared for! It is loved! Tests run across the entire codebase when any change is made, and they complete in seconds! And best of all, the code can scale out to any size that the team in charge of it feels it can handle! It sounds great, doesn’t it?
Sensibly Sized Services
Of course, a focus on building large services is just as problematic as a focus on small. We need to recognize that the right size for a service varies based on what it is being built for. Some should be small, some should be large, and there will be many sizes in between. Making sensible decisions about the size and scope of services is essential for building good software. There are tradeoffs with size that teams need to give more thought to. So, it’s time to discard the idea of the micro vs monolith size binary, and enter the realm of the sensibly sized service. So what is a sensible size? Find out now in part 2!
Originally published here: https://email@example.com/sensibly-sized-services-part-1-3def0002c48b