How can we get from a monolith to micro-services quickly?
I can’t answer that question. First, “quickly” is right out the window. You didn’t make this mess in a month; you’re not going to fix it in a month. Second, you expect some benefit from micro-services that you aren’t currently getting. What is that benefit? Micro-services aren’t the point.
Having rejected the question, I will go ahead and answer it. Before I can explain why a quick change to micro-services is impossible and, if forced, dangerous, I need to lay out the basic forces acting on software design. These were elucidated in the mid-70s by Yourdon & Constantine in Structured Design and haven’t changed.
Their argument goes like this:
- We design software to reduce its cost.
- The cost of software is ≈ the cost of changing the software.
- The cost of changing the software is ≈ the cost of the expensive changes (power laws and all that).
- The cost of the expensive changes is generated by cascading changes — if I change this then I have to change that and that, and if I change that then…
- Coupling between elements of a design is this propensity for a change to propagate.
- So, design ≈ cost ≈ change ≈ big change ≈ coupling. Transitively, software design ≈ managing coupling.
(This skips loads of interesting stuff, but I’m just trying to set up the argument for why rapid decomposition of a monolith into micro-services is counter-productive.)
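To make the cascading-change argument concrete, here is a toy sketch. The element names and the dependency graph are invented for illustration: an edge means “if this changes, that must change too,” and the cost of a change is the size of its transitive ripple.

```python
# Hypothetical dependency graph for a small system. An entry
# "schema": ["read", "write"] means: if the schema changes, both
# read() and write() must change too.
DEPS = {
    "schema": ["read", "write"],
    "read": ["client"],
    "write": ["server"],
    "client": [],
    "server": [],
    "logger": [],
}

def ripple(element, deps):
    """Every element that must change, transitively, if `element` changes."""
    seen = set()
    stack = [element]
    while stack:
        e = stack.pop()
        if e not in seen:
            seen.add(e)
            stack.extend(deps[e])
    return seen

# Changing the schema touches five elements; changing the logger
# touches one. A few expensive, cascading changes like the first
# dominate the total cost of change.
schema_cost = len(ripple("schema", DEPS))  # 5
logger_cost = len(ripple("logger", DEPS))  # 1
```

The design question, in these terms, is which edges you remove (eliminate coupling) and which elements you move next to each other so the remaining ripples are cheap to follow.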
Note I don’t say, “Eliminating coupling.” Decoupling comes with its own costs, both the cost of the decoupling itself and the future costs of unanticipated changes. The more perfectly a design is adapted to one set of changes, the more likely it is to be blind-sided by novel changes. And so we have the classic tradeoff curve: decoupling pays off up to a point, beyond which further decoupling costs more than the changes it saves.
You manage coupling one of two ways:
- Eliminate coupling. A client and server with hard-coded read() and write() functions are coupled with respect to protocol changes. Change a write() and you’ll have to change the read(). Introduce an interface definition language, though, and you can add to the protocol in one place and have the change propagate automatically to read() and write().
- Reduce coupling’s scope. If changing one element implies changing ten others, then it’s better if those elements are together than if they are scattered all over the system — less to navigate, less to examine, less to test. The number of elements to change is the same, but the cost per change is smaller. (This is also known as the “manure in one pile” principle, or less-aromatically “cohesion”.)
Two factors prevent decoupling quickly: you have to learn how to manage coupling in general, and you have to learn where your particular system’s boundaries belong.
I see useful folklore around software design — keep your models separate from your views & controllers, for example — but little explicit acknowledgment or management of coupling. Once you put on Coupling Glasses you won’t be able to unsee coupling, but the transition takes a while. Identify coupling, look for Emmenthal changes (changes with lots of holes between the cheese), increase cohesion, decrease cohesion in one direction then increase it in another, play with design.
Once you can design in the abstract, apply those skills to your system. Where should your system’s internal boundaries be? It will take a while & some experiments to discover them. Best to sketch the boundaries lightly, then draw them more firmly, before cutting parts out. Sketching mistakes are reversible. Service extraction is not exactly forever, but it’s expensive to reverse when you discover two services are coupled.
How can you quickly decompose a monolith into micro-services? You can’t, because you need to learn how and you need to learn what. The good news is that you don’t have to quickly decompose a monolith into micro-services to quickly get some of the benefits you seek.
Changes cluster. If you tidy the streets you walk, and you mostly walk the same streets, then a little tidying will have you mostly walking tidy streets. Increase cohesion a little before you make changes. Eliminate coupling a little before you make changes. Pretty soon change will accelerate.