Taming your Microservice & Container Envy

It is hard not to be affected by the constant chatter about microservices architecture and container technology. Both are leading the discussion nowadays, and they combine to provide new ways to architect distributed systems and deliver business value with agility. While they bring big benefits when implemented successfully, the path to success for most enterprises (other than startups and product/tech firms) will be difficult, and a level of measured caution is wise.

For those needing further reading on microservices, please see Martin Fowler's articles on the topic. Traditional monolithic applications grow over time into behemoths that require significant cost (time and dollars) to implement even the smallest changes. Throw in the risk associated with such large applications and you have a situation that limits the agility and speed with which you can deploy changes to production. A microservices architecture hopes to reduce the pain by breaking the monolith into smaller, independent, and loosely coupled services. A single monolith could break up into more than a dozen services. This provides tremendous potential to improve time to market with reduced risk. A change to one service should not affect the others, since the services depend only on published contracts, which change less often.

What this architecture does introduce is the need for a runtime platform that can manage microservices. Just like the monolith before it, each microservice needs to be scaled independently and monitored for failures. Your exposure has now gone from managing one application to managing a dozen services. This is where a runtime microservices platform comes into play. Without one, don't go to production with anything but the smallest, lowest-risk services. A microservices architecture will introduce a host of service-to-service interactions that never existed in the monolith. Welcome to the world of distributed architecture. With it comes the responsibility to design for failure. A failure in one service can bring down the services that invoke it, and in some cases the entire system. Patterns such as Circuit Breaker may need to be implemented to defend against failures and to attempt to heal automatically.
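To make the Circuit Breaker idea concrete, here is a minimal sketch in Python (the class and parameter names are my own illustration, not any particular library's API): after repeated failures the breaker trips open and rejects calls immediately, instead of letting every caller pile onto a struggling downstream service; after a cooldown it lets a trial call through to see if the service has healed.

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker sketch: trips open after repeated
    failures, then allows a trial call after a cooldown period."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures    # failures before tripping open
        self.reset_timeout = reset_timeout  # seconds before a retry is allowed
        self.failure_count = 0
        self.opened_at = None               # timestamp when the breaker tripped

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Open: fail fast instead of hammering the downstream service.
                raise RuntimeError("circuit open: call rejected")
            # Cooldown elapsed: half-open, let one trial call through.
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failure_count += 1
            if self.failure_count >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        else:
            # A success closes the breaker again.
            self.failure_count = 0
            self.opened_at = None
            return result
```

A real implementation would add thread safety, distinguish error types, and emit metrics, but the state machine (closed, open, half-open) is the essence of the pattern.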

One of the biggest challenges in adopting a microservices architecture will be your existing company culture and way of working. Your culture will be tied to years of tuning for monolith deployments, often managed in production with manual methods. There could be resistance to change from the existing silos. Strong leadership will be needed to drive the change forward, even if it means letting go of people or groups that are no longer a fit.

Microservices also require a strong and mature DevOps culture. DevOps is not about creating teams labeled as "DevOps", which risk becoming new silos. DevOps is the culture change that brings Ops and Dev together so that each function is concerned about the other. In the best cases, the developers are also the Ops team. Do not let DevOps become just about tools rather than this culture shift. Yes, you need a robust CI/CD process and tools. Yes, you need to tune them over time to deliver production value faster and more safely using Continuous Delivery and Deployment. But don't lose focus on building DevOps around people.

Container tech is another much-talked-about area. Docker rules the infosphere in this space, but there are other container technologies too (like rkt from CoreOS). Docker containers (which I will focus on) should not be seen as simply a lighter virtualization method. There is a bit of that, but it is not the core purpose, in my opinion.

Docker brings with it a new way to package an application and all of its dependencies into a container image that can then be shared and instantiated as container instances at runtime.

The developer building the application knows it best. He or she puts everything needed to run the application or service into this package. The same package then gets deployed into every environment, all the way through production. No more rebuilds or guessing which version got deployed to which environment. The lighter weight of containers means they can go from zero to running much faster than a virtual machine. Whereas a VM sits on top of a hypervisor and runs a full copy of an OS (any OS), a container runs directly on the host, sharing its kernel but in its own isolated process space. Pause here a bit. This approach to packaging an application can be very disruptive, and very foreign, for most enterprises. I would recommend letting your developers innovate on smaller workloads and get familiar with basic Docker use before going too far. Strong leadership will be needed to communicate the longer-term strategy and benefits.
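As a sketch of what "everything needed to run the service" looks like, here is a hypothetical Dockerfile for a Java microservice (the service name, jar path, and port are made up for illustration):

```dockerfile
# Start from a base image that already contains the Java runtime.
FROM openjdk:8-jre

# Copy the built service and all of its dependencies into the image.
COPY target/order-service.jar /app/order-service.jar

# Document the port the service listens on.
EXPOSE 8080

# The command run when a container is instantiated from this image.
CMD ["java", "-jar", "/app/order-service.jar"]
```

Build it once (`docker build -t order-service:1.0 .`) and that exact image, not a rebuild, is what moves from dev to test to production.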

So now let's take our microservice and package it into a Docker image. Cool. Now let's deploy it. Wait. I need… (now my rant begins)

I need many instances of my container (and therefore my microservice) to provide optimal response time and transaction throughput. I need these instances deployed together as a logical group. I need some way to monitor the health of these instances and ensure that a given number of instances is always running. Given that I may deploy on public cloud infrastructure, the Docker instances and the underlying VMs could go away at any time, so I need something to watch for dead instances and bring them back to life. I also need something to add instances if I have a sudden spike in transaction volume. I need central configuration management so that I can change config values in one place and all my instances pick up the changes automatically. I need a dynamic way of doing service registration and discovery. I need metrics collection. I need a way to aggregate the logs from all my services into a central location for production debugging. My containers may need to move automatically if the underlying host is undergoing planned upgrades or becoming unresponsive. I need… and the list goes on.
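Taking Kubernetes as one example of such a platform, several items on that list can be expressed declaratively in a single Deployment manifest (service name, image, and health endpoint are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service          # hypothetical service name
spec:
  replicas: 4                  # "ensure a given number of instances is always running"
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service     # logical grouping for discovery and load balancing
    spec:
      containers:
      - name: order-service
        image: order-service:1.0
        ports:
        - containerPort: 8080
        livenessProbe:         # restart instances that stop responding
          httpGet:
            path: /health
            port: 8080
```

The platform watches this desired state and replaces dead instances, reschedules containers off unhealthy hosts, and so on; autoscaling, config distribution, and log aggregation are handled by companion pieces of the same ecosystem.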

What we need now is a robust runtime container management platform. Microservices require a similar runtime focused on the deployed service; containers require one focused on managing the running container instances. So you can now see the need for both. Without an appropriate platform (home-grown, open source, or vendor product), you once again should not deploy critical business apps to production.

So we hear all this, feel envious, and want to use it NOW. Marry your excitement with some caution, but move ahead. From the development team perspective, one needs to understand microservices and container tech very well. Play with them using non-production volumes. Drive towards implementing a production prototype for a non-critical function. This prototype should be implemented knowing that it will impact the way we deploy, manage, and monitor applications. Understand the culture changes, the DevOps impacts, and the technical choices to be made. The latter alone can be challenging. The runtime management platform will most likely be a PaaS. While a home-grown PaaS may seem attractive initially, it will not scale for large, complex enterprises (that are not tech-focused like a Google). Kubernetes, Mesos, Docker Swarm, and Cloud Foundry are just a few of the solutions. With these you will be treating your data center as a single pool of compute resources. This means big changes for infrastructure management teams used to physical machines or virtual machines. The idea of dynamically moving, resilient, self-healing workloads on top of a data center OS can be new to many. Finally, you need to design your applications for failure (using patterns such as Circuit Breaker). I would recommend reading Building Microservices by Sam Newman.

These are exciting areas and we should work towards adopting them where appropriate. But tread with caution and realize it will take more than creating one microservice inside a Docker package to call it SUCCESS. Tame your excitement and ask whether you even need all of this. There may be times when a monolith is just fine: you already have it running and it is fairly small or does not need much change. And there are workloads for which these don't make any sense. Then there is chatter on Serverless architecture (I am not a fan of the name, but fine, whatever) and the recent chatter on unikernels. Don't let these add to your already envious state of mind. They will evolve and get pulled into the PaaS platforms over time.
