Microservices as if you mean it

I should’ve listened to people smarter than me. But then the knowledge would have been only theoretical and shallow, and you must be really smart to understand consequences from theory alone. That’s not me: I’m a simple programmer, and sometimes I have to do stupid things.

I decided to use a microservices architecture in an early-stage startup. I decided to follow DDD architectural patterns in the very same startup. It didn’t make things slower, it didn’t make the business burn. It just wasn’t easy.

Let’s think about how much effort is required to even start thinking about microservices. First things first: architecture, the most important technical decision about the project, which covers how microservices communicate with each other and how they live together as a system.

Communication

With a microservices architecture you no longer use in-process communication, the easiest kind to maintain. Instead of just calling methods on objects, you have to think about sending messages or publishing events with enough data to be actionable. You also can’t redesign the message / event schema without supporting older versions, because otherwise you would have highly coupled distributed services, the worst creature ever, even compared to Cthulhu. So to introduce microservices you have to both use a protocol and version it. On a new product, with a new team and little knowledge of the domain, you may waste too much time redesigning the messages and flows to fulfil business needs.
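As an illustration, here is a minimal sketch of event versioning in Python; the `OrderPlaced` event, its fields and the upcasting function are hypothetical, not taken from any real system.

```python
from dataclasses import dataclass


@dataclass
class OrderPlacedV2:
    """Hypothetical event published by an "orders" microservice.
    The schema_version field lets consumers tell old payloads from new ones."""
    schema_version: int
    order_id: str
    total_cents: int
    currency: str  # field added in version 2 of the schema


def upcast(event: dict) -> OrderPlacedV2:
    """Translate any supported schema version into the newest shape,
    so services still emitting v1 events keep working."""
    if event["schema_version"] == 1:
        return OrderPlacedV2(
            schema_version=2,
            order_id=event["order_id"],
            total_cents=event["total_cents"],
            currency="USD",  # assumed default for events that predate the field
        )
    return OrderPlacedV2(**event)


# Consumers call upcast() on every incoming payload and only ever see v2.
print(upcast({"schema_version": 1, "order_id": "42", "total_cents": 1999}))
```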

Platform

You definitely don’t want to make each microservice an infrastructural snowflake that is hard to redeploy on a different instance (after an upgrade, a disaster etc.). That’s why you need to make deployments and instance provisioning reproducible, with automation tools. The bare minimum would be instantiating the microservice’s process on an instance, but that is probably not enough because of its dependencies. Nowadays the most popular solution is containerisation with a container orchestration platform like Kubernetes or Docker Swarm. These take care not only of the process lifecycle, but may also cover service discovery, self-healing and scaling.
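Self-healing, for example, usually assumes each service exposes an endpoint the platform can probe and restart on failure. A minimal sketch of such an endpoint in Python, assuming a plain HTTP /health route on port 8080 (both are illustrative choices, not anything Kubernetes or Docker Swarm mandates):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    """Answers liveness probes from the orchestration platform."""

    def do_GET(self):
        if self.path == "/health":
            # Report 200 while the process considers itself healthy;
            # the platform restarts the container after repeated failures.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```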

But maybe you don’t want to put all your eggs into one basket and prefer to use those platforms only for the basics. Then you’ll need other microservices to handle those needs. Service discovery might be handled using Consul; self-healing also with Consul.
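As a sketch, registering a service instance with a local Consul agent over its HTTP API might look roughly like this; the service name, port and health-check URL are carried over from the hypothetical example above:

```python
import requests

# Tell the local Consul agent about this instance so other services can
# discover it; the agent will also poll the health check defined below.
registration = {
    "Name": "orders-service",      # hypothetical service name
    "ID": "orders-service-1",
    "Port": 8080,
    "Check": {
        "HTTP": "http://localhost:8080/health",
        "Interval": "10s",         # how often Consul probes the endpoint
    },
}

response = requests.put(
    "http://localhost:8500/v1/agent/service/register",
    json=registration,
)
response.raise_for_status()
```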

But that’s not enough: we would only have an army of mute microservices, with no communication between them. You have two choices here: use HTTP or some message bus like Kafka or RabbitMQ. If you decide on HTTP you can support both sync and async communication, while a message bus is async only; you may achieve some kind of synchrony on top of it, but it’s not in its nature. Of course you can use more than one channel to distribute your messages / events if that makes sense, but then you have to support more than one (meta)protocol.
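To make the async option concrete, here is a minimal sketch of publishing an event to RabbitMQ with the pika client; the exchange name and the event payload are assumptions for illustration:

```python
import json

import pika

# Connect to a local RabbitMQ broker and publish an event to a fanout
# exchange, so every interested microservice receives its own copy.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="order-events", exchange_type="fanout")

event = {"schema_version": 2, "order_id": "42", "total_cents": 1999, "currency": "USD"}
channel.basic_publish(
    exchange="order-events",
    routing_key="",  # ignored by fanout exchanges
    body=json.dumps(event).encode("utf-8"),
)
connection.close()
```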

When is it worth considering HTTP as the message channel? If you want to follow the front-end-backend microservices pattern with a composer in front of them, then you need sync communication to make it happen.
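A minimal sketch of such a composer, calling two hypothetical backend services synchronously over HTTP and merging their responses into one view for the front end (the URLs and field names are invented for illustration):

```python
import requests


def compose_order_view(order_id: str) -> dict:
    """Synchronously gather data owned by different microservices and
    return one response shaped for the front end."""
    order = requests.get(f"http://orders-service:8080/orders/{order_id}").json()
    customer = requests.get(
        f"http://customers-service:8080/customers/{order['customer_id']}"
    ).json()
    return {
        "order_id": order_id,
        "total_cents": order["total_cents"],
        "customer_name": customer["name"],
    }
```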

Persistence

Each of your microservices has its own persistence (unless it doesn’t): maybe not a whole database system, but at least a separate database. With such separation you may feel tempted to introduce new kinds of databases that better fit a given microservice’s problem, because it’s just the next microservice, right? This is the hardest part for me, because data is the most important asset you store in your system and it should be maintained with great care. So you have to really know how not to lose any of those databases and how to orchestrate and scale each of them well. This adds more complexity to your organization’s knowledge, so it adds another cost in total.

Organization

Speaking of the organization: if you have one team taking care of all the microservices, you can have a hard time. If you have 3–5 microservices it’s quite easy to understand them and react to any problem, but you won’t have that few; the infra microservices alone will number around 10, as we’ll see later. You may try to resolve this problem with ownership distributed between developers, say 2 developers per microservice and at most 5–7 microservices per developer, but then you end up with highly specialized team members and a probably unevenly distributed workforce, unless you have many product owners dedicated to individual microservices.

In bigger organizations a new problem shows up: knowledge sharing. Of course this is not microservice-specific, but it’s easier for a big organization to duplicate already existing tools. That’s why you’ll probably need some microservices directory, some taxonomy, that makes it easy to find the proper tool.

But larger organizations can really benefit from a microservices architecture because of the real independence of teams. Each team can work on its own set of microservices (taking ownership of them) that supports its business goals. This way your product may end up as a set of composed micro-products with a business unit created around each of them.

Separation

How do you know how to split your world into microservices? I think there are 2 factors you have to consider while extracting microservices: the problem domain (in the DDD sense) and computation constraints.

Problem domain

Your organization’s activities may live in totally different worlds: some of them may be about accounting, some about warehousing, analytics, etc. There’s also the infra level, which speaks a totally different language compared to the business problems. Each of those languages splits the organization’s problem domain into subdomains. It feels super natural to have a microservice for each of those languages; we call such subsystems bounded contexts. Having more than one microservice per subdomain may be painful, because a subdomain’s internal communication is usually more fluid and happens more often.

Computation constraints

There are some subdomains that require more than a single computational entity. Think about crawling, which needs both crawler workers and masters that coordinate them. They may speak the same language, but you need separate microservices to be able to deploy many instances of the crawlers and only a few of their masters.

Infra

We talked a bit about the platform you’ll need to deploy your microservices, but that’s not enough. You need a fully automated integration and delivery environment, with post-deployment tests, that makes your deployment process bulletproof. By fully automated I mean that the setup is built into the microservice’s repository and autodetected by the integration system. Think of TravisCI for example: you add one file with details about the tested system and it’s ready to go. The deployment setup should be just as easy. Otherwise you’ll end up with endless configuration of newly created microservices, so instead of getting more flexibility you would have much more configuration to maintain. This also means standardisation of microservices integration and delivery, so it’s part of the platform.

Another thing you cannot miss is monitoring with self-healing capabilities. Each microservice should define its own KPIs and also track other important metrics, so you can easily and automatically discover irregularities and react as soon as possible, sometimes even automatically, with only a notification about the problem and the mitigation applied. One of the tools that may help you is the ELK stack (Elasticsearch, Logstash and Kibana).
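Shipping logs into such a stack is much easier when each service emits structured entries. A minimal sketch of JSON logging in Python, where the field names are illustrative rather than a required schema:

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line, which a shipper
    such as Logstash can parse without custom grok patterns."""

    def format(self, record):
        return json.dumps({
            "service": "orders-service",  # hypothetical service name
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order placed")  # -> {"service": "orders-service", "level": "INFO", ...}
```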

Monolith

The most common scenario for a microservices architecture is introducing it once some monolithic system gets too big to maintain: too many developers committing to a single repo, tests too slow to support continuous deployment without pain, etc. But then you probably don’t have a “pure” microservices architecture, rather a mix of a monolith providing solutions for many subdomains and a few microservices, behind an anticorruption layer, that handle the new business cases. You may aim to rewrite the whole system to microservices one day, but it probably won’t happen. And that’s totally fine: you get flexibility where you need it and you don’t waste too much time on maintenance of the old system.

So what?

I wanted to show you how much effort is required when you go with a microservices architecture. Some of these points are valid for monolithic systems as well, but with microservices there is always a new factor: making a given process generic enough to easily automate it for all microservices, because otherwise you will miss one and you’ll get hurt. I wouldn’t recommend this architecture to startups, except those with really unusual computation needs that are hard to support with a monolith. If you want to introduce a microservices architecture to your project, make your provisioning and deployment processes fully automated first, talk to different teams in your organization to understand what its subdomains are and how you can split your system into meaningful bounded contexts, and try to define the bounded contexts using in-process patterns, like modules (a sketch follows below), so you’ll see how dynamic the communication points may be. And then introduce a microservice.
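A minimal sketch of what “bounded contexts as in-process modules” might look like, with each context behind a narrow interface so the communication points stay visible; the module and method names are hypothetical:

```python
class BillingModule:
    """The billing context; on_order_placed() is its only public surface."""

    def on_order_placed(self, order_id: str, total_cents: int) -> None:
        print(f"charging {total_cents} cents for order {order_id}")


class OrdersModule:
    """The orders context. It talks to billing only through its interface,
    so the call below is exactly the spot that would become a published
    message if billing were ever extracted into its own microservice."""

    def __init__(self, billing: BillingModule) -> None:
        self._billing = billing
        self._orders = {}

    def place_order(self, order_id: str, total_cents: int) -> None:
        self._orders[order_id] = {"total_cents": total_cents}
        self._billing.on_order_placed(order_id, total_cents)


if __name__ == "__main__":
    OrdersModule(BillingModule()).place_order("42", 1999)
```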

Smarter people:

Building Microservices (Newman)

The DevOps 2.0 Toolkit (Farcic)

Microservices (Fowler)