Rebuilding software is inevitable
No matter how good the initial design is, it will start degrading once it’s out there. Software design seems to follow the 2nd law of thermodynamics — in a software system the amount of entropy tends to increase over time.
The moment something is released, new requirements will start bombarding it. And of course it is always easier to just do the Minimal Change — to simply nudge the system in the right direction instead of thinking about how a given change should properly be incorporated.
Also, the people adding stuff will probably not be the ones who initially built the system. Hence they may not be aware of (or care about) all the nuances outside their particular use case. You can only slow this process down by having a Foreman for each piece of code.
Then there is the ever-changing world of technologies, tools, and ways of doing stuff. Sooner or later you will not be able to keep the original toolset and the new stuff on speaking terms without becoming a zookeeper of mismatched technologies.
Lastly, over time fewer and fewer of the people who built the original system will remain. They have either moved on to other teams or to other companies. At that point the best way to regain that knowledge within the (new) team is to rebuild from scratch.
So we should accept the reality that if something we build today is successful, it will eventually be rebuilt. The problem is that we have probably added quite a bit of logic over time, so rewriting the whole thing in one go is typically not easy. What should we do then? Answer: use microservices.
I believe that one of the main advantages of microservices is that they allow us to replace the system piece by piece.
If we think of microservices from this perspective, it gives us valuable input into some of the decisions we have to make. For example, using client libraries or platform-specific tools (e.g. JMS) to connect different microservices may make such rewriting harder, so we should be careful when using them.
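One way to sketch this concern: if services talk to each other through a platform-specific client library, rewriting one service in a different stack means rewriting the coupling too. A minimal illustration of keeping the transport behind a neutral contract instead — all names here (`EventBus`, `InMemoryEventBus`) are hypothetical, and the in-memory implementation merely stands in for whatever a JMS- or HTTP-backed one would do:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

public class Main {

    // Transport-agnostic contract between services. A JMS-specific client
    // would live behind this interface, so swapping the broker (or rewriting
    // the service on the other side) does not leak into the callers.
    interface EventBus {
        void publish(String topic, String payload);
        String poll(String topic); // returns null when the topic is empty
    }

    // In-memory stand-in; in production this could wrap JMS, Kafka, or plain HTTP.
    static class InMemoryEventBus implements EventBus {
        private final Map<String, Queue<String>> topics = new HashMap<>();

        public void publish(String topic, String payload) {
            topics.computeIfAbsent(topic, t -> new ArrayDeque<>()).add(payload);
        }

        public String poll(String topic) {
            Queue<String> q = topics.get(topic);
            return (q == null || q.isEmpty()) ? null : q.poll();
        }
    }

    public static void main(String[] args) {
        EventBus bus = new InMemoryEventBus(); // swap for a broker-backed bus later
        bus.publish("payments", "{\"id\":42}");
        System.out.println(bus.poll("payments")); // prints {"id":42}
    }
}
```

The point is not the interface itself but the boundary: the fewer platform-specific assumptions cross it, the cheaper it is to replace either side.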
Originally posted at: tech.transferwise.com