I was very honored to attend the ProgNET 2019 event, which has just finished, thanks to Redgate and the ProgNET organizers.
One of the key takeaways from the talks and workshops I attended (I wish I could have gone to all of them) is that we no longer live in a simple world where software companies are built on business models implemented by a single stack of technologies. With the wide range of technologies readily available, the big monolithic architecture is gradually being replaced by microservices.
What’s wrong with monoliths?
To be fair, I have a really good argument for the monolith: it solved a lot of business problems, and it still does. So why doesn't the world just carry on this way? Here are a few reasons:
- Difficult to do continuous delivery
- All or nothing, your monolith either works or it doesn’t
- Tight coupling between features
- The only solution to scalability problems is to buy more servers
- Possible downtime during deployment
- Heavily violates the single responsibility principle; code reuse becomes minimal
To get to the point: although monolithic architecture solves problems, it doesn't solve them in the most efficient manner. Humans invented beautiful, fundamental sorting algorithms to make machines fast and efficient, so why let your architecture stand in the way of efficiency?
How do microservices help with CI/CD?
Roy Osherove gave a really good talk on the Pipeline-Driven Organization, with an emphasis on what true continuous delivery is. In short, you've achieved true continuous delivery if you can deploy with confidence the moment your build server goes green. In your build pipeline, you may have:
- Tests, including unit tests, integration tests, code inspections, etc.
Apologies, I don't have the expertise to list items in the other three categories, but basically you can put whatever your business cares about into the pipeline. Roy called those delegates. With microservices, you can enforce different delegates per service and deploy services individually at different times, which makes CD much easier to achieve.
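The delegates idea above can be sketched in a few lines: a pipeline is just an ordered list of checks, and the build goes green only when every one of them passes. This is a minimal illustration, not any real build tool's API; all names here are hypothetical.

```python
# Minimal sketch of the "delegates" idea: the pipeline is a list of
# checks, and deployment happens only when every delegate passes.
# The delegate functions below are stand-ins, not real tooling.

def unit_tests() -> bool:
    return True  # stand-in for running the test suite

def code_inspection() -> bool:
    return True  # stand-in for linting / static analysis

def pipeline_is_green(delegates) -> bool:
    """Run every delegate in order; green only if all pass."""
    return all(check() for check in delegates)

delegates = [unit_tests, code_inspection]

if pipeline_is_green(delegates):
    print("green: deploy with confidence")
```

The point of modeling checks as plain callables is that each service can plug in its own list, which is exactly what lets different services enforce different delegates.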
What is a microservice?
Taking a step back, Ian Cooper's workshop on Event-Carried State Transfer gave a good definition of what a microservice is. It has (as interpreted by me):
- a well-defined interface
- a concealed implementation
- autonomy: it can be deployed however and whenever, independently of the rest of the system
- an explicit boundary
- a single responsibility
Giving a microservice those properties makes its code more testable and reusable. A microservice architecture also makes it easier to identify where the availability demands are within your system, which makes scaling easier and cheaper for the business. Another benefit is that if one service dies in production, it doesn't bring down your entire system.
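The event-carried state transfer pattern from Ian's workshop is what makes that autonomy possible: each event carries the full state, so a consuming service keeps its own local copy and never has to call the publisher at runtime. Here is a toy sketch of that idea; the service and field names are made up for illustration, and a plain list stands in for a real message broker.

```python
# Toy sketch of event-carried state transfer: the publishing service
# puts the full state into each event, and the consuming service keeps
# its own local copy, so it never calls the publisher at runtime.
# All names are illustrative; a list stands in for a message broker.

events = []  # stand-in for a message broker

def publish_customer_changed(customer_id, name, email):
    # The event carries the state itself, not just "something changed".
    events.append({"id": customer_id, "name": name, "email": email})

class BillingService:
    """Consumer: maintains its own copy of customer data from events."""
    def __init__(self):
        self.customers = {}

    def handle(self, event):
        self.customers[event["id"]] = {"name": event["name"],
                                       "email": event["email"]}

publish_customer_changed(42, "Ada", "ada@example.com")

billing = BillingService()
for e in events:
    billing.handle(e)

print(billing.customers[42]["name"])  # prints "Ada", no call to publisher
```

Because the consumer holds its own copy, it keeps working even if the publishing service is down, which is exactly the availability benefit described above.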
Road map to microservices
Michele Bustamante's talk on Surviving Microservices laid out the road map to achieving the MVP for microservices. In essence, the questions to ask about this transformation are: Does it need to be good? Is it fast? Is it cheap? The answers are yes, no and no respectively. Despite the massive number of tools available for this architecture, it takes a little more than just applying them and expecting it all to work out. It won't. It is a skillset and mindset shift for the engineers, and it needs business backing.
There were also a few talks and workshops at the conference that zoomed in on tools that could facilitate transforming your business to microservices. Those tools include:
- Istio for building a control plane
- Kubernetes for container orchestration
- Docker for containerization
- Envoy for implementing the sidecar pattern
- Various messaging frameworks, such as Akka.NET, RabbitMQ
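What all the messaging frameworks in that list have in common is decoupling: the producer and consumer only share a queue, never a direct reference to each other. A toy illustration using Python's standard-library queue as a stand-in for a real broker (the service names are hypothetical):

```python
# Toy illustration of why messaging frameworks (RabbitMQ, Akka.NET, ...)
# matter: producer and consumer share only a queue, not each other.
# Python's stdlib Queue stands in for a real broker here.
from queue import Queue

broker = Queue()

def order_service():
    # Producer: emits a message and moves on; it doesn't know or care
    # which service will consume it.
    broker.put({"order_id": 1, "total": 9.99})

def shipping_service():
    # Consumer: reads from the queue whenever it's ready.
    msg = broker.get()
    return f"shipping order {msg['order_id']}"

order_service()
print(shipping_service())  # prints "shipping order 1"
```

In a real system the broker would also buffer messages while a consumer is down, which is another way messaging supports the independent deployment described earlier.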
Overall, the conference had a lot of content; I mostly went to the sessions themed around microservices. In the end, to microservice or not to microservice, that is the question, to be answered by whether it meets your business needs. Either way, it will be a slow process with a steep learning curve for software engineers.