Containerization: Build Your Own Business Case

Eric Herness
AI+ Enterprise Engineering
9 min read · Sep 3, 2019

You might be surprised that I am talking about business cases. However, as containers move into the mainstream as a way to deploy and manage software, it is time to weigh in on the business case topic. I’ll cover the key dimensions of a business case, make a number of assumptions, and show some basic approaches that we use in the Cloud Engagement Hub to help clients select the right modernization approaches to apply to each individual application or workload in their estate.

In an earlier blog entry, I laid out some of the modernization approaches related to containers. Specifically, for this discussion, we will use containerization, repackaging and refactoring. As further background, remember also that we are architecturally going in the direction of cloud native. The question is not where we are heading, but rather how far we go in that direction on an application-by-application basis. That is where the business case comes in.

Whenever one builds a business case, there are costs and then there are benefits. The costs of the modernization approaches which might be leveraged are:

  1. Containerization — This is usually the least costly model. It is about getting the current application running in a container as much ‘as is’ as can be achieved. In many cases this means no source code changes. In others, because the source code might be written to earlier specifications or earlier releases of middleware or application servers, technical debt related to currency might be part of the cost as well. If you are involved in projects working on currency and hygiene items, that is also an opportunity to inject the container conversation into the mix, since moving once is always better than moving twice. Specifically, upgrading to a new, non-container-based version of the software and then later containerizing is two moves, and that is seldom if ever cheaper.
  2. Repackaging — This is an approach that not only containerizes, but also breaks monoliths into natural pieces. For example, this means taking a monolithic package of servlets or EJBs and breaking them into multiple packages. Similar repackaging examples can be found in the integration space as well as almost any other applications or middleware that supports applications.
  3. Refactoring — This is an approach that goes much deeper into source-code restructuring. It often involves taking pieces at a time, leveraging various strangler patterns, and ends in a well-formed cloud-native application that is almost the same as if you had started from scratch.

All of these techniques should be leveraged with a landing zone that includes a Kubernetes platform, namely OpenShift, often accompanied by IBM’s Cloud Paks. Running production-level containers anywhere but Kubernetes makes little or no sense in today’s world. There is the cost of having a Kubernetes platform upon which to run the resulting applications. We will come back to that when infrastructure and operations are detailed later, after a deeper dive into benefits.

The benefits are what we are after in this whole matter. Perhaps they should have been covered first. The benefits of each approach also grow as you go down the list from containerization, to repackaging to refactoring. What are the categories of benefits that are worth measuring and thinking through?

Containerization

  1. Provisioning time — Standing up a new environment to test, or to go into production in new regions, will be faster in the presence of containers. If you are using a containerized approach to stand up something like MQ or DataPower, this can be done in minutes, without requisitioning a VM and installing and configuring the software on it before getting back to the business of testing. Additional value accrues because the container model allows each development team to quickly provision its own instance of any middleware it needs as part of a solution. These instances do not necessarily live for long periods of time, so the infrastructure they run on can be reused over and over. Development teams are more likely to run extra tests and thus deliver higher-quality new capabilities.
  2. Deployment — For application and integration runtimes that are on the ‘cattle’ model, DevOps toolchains can now essentially provision the runtimes as part of deploying a solution. There is no ‘installing’ here beyond standing up a container as part of deploying the application. Again, the benefits come from consistency via automation as well as the inherent speed at which this can be done.
  3. Scaling Speed — In most cases, adding capacity by provisioning another container is also a speed advantage. Some care must be taken with regard to state management and exposing ingress, which might require a little more tweaking, but we will take credit for this benefit under containerization. Scaling can be done automatically as part of leveraging the underlying Kubernetes platform.
  4. Resiliency — Using some of the same Kubernetes mechanisms that are leveraged for scaling, there is a level of resiliency inherent in containerizing. Having multiple pods in a deployment is how this starts. Failures of an individual pod cause Kubernetes to start another pod and keep the business capability available at the requested capacity.
  5. Maintenance — You don’t patch containers, you provision new ones that have updated software built into them via Docker.
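
As a minimal sketch of the scaling, resiliency and maintenance points above (the application name, image and numbers are illustrative, not from any real deployment), a Deployment with several replicas plus a HorizontalPodAutoscaler is the standard Kubernetes way to get there:

```yaml
# Illustrative only: a Deployment that keeps three replicas alive
# (resiliency), rolls out rebuilt images without downtime (maintenance),
# and an autoscaler that adds pods under load (scaling speed).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-app                 # hypothetical application name
spec:
  replicas: 3                      # Kubernetes restarts failed pods to hold this count
  selector:
    matchLabels:
      app: orders-app
  strategy:
    type: RollingUpdate            # patched images replace old pods gradually
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: orders-app
    spec:
      containers:
      - name: orders-app
        image: registry.example.com/orders-app:2.1   # rebuilt, not patched in place
        ports:
        - containerPort: 8080
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # add pods when average CPU exceeds 70%
```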

Repackaging

All of the containerization benefits apply. The additional benefits here are:

  1. Scaling Granularity — Now one can scale up specific components of that former monolith, which can bring a further speed improvement and infrastructure cost reduction. Many monoliths have hot spots within them. By breaking apart the monolith, it is almost always the case that some parts of the repackaged application will need to scale more than others. To get this to work robustly, various observability approaches will need to be supported. These are pretty easy to add via Kubernetes metadata (YAML files).
  2. Testing time and cost — Now that there are smaller pieces, business logic changes can likely involve fewer regression tests that can be performed much more quickly. Of course we still enjoy speed and automation for provisioning environments as well.
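
As an illustrative sketch of that observability metadata (the annotation names follow the common Prometheus scrape convention; the component name is hypothetical), each repackaged slice of the monolith can advertise its own metrics in its pod template:

```yaml
# Illustrative: Prometheus-style scrape annotations on a pod template,
# letting one repackaged component report its own metrics independently.
template:
  metadata:
    labels:
      app: orders-pricing          # one slice of the former monolith
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9090"
      prometheus.io/path: "/metrics"
```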

Refactoring

All of the repackaging benefits apply. The additional benefits here are:

  1. Stability and Robustness — The story here is full access to the advantages of cloud native. It is easier to satisfy all twelve factors of a cloud-native application. Circuit breakers, retry logic, and elaborate observability all come with this approach.
  2. Extensibility — It is easier to add new function once refactoring has occurred, both in terms of actually understanding the code and in making changes that enrich it with new value or externalize the value that is already there.
  3. Even More Granular Scalability — The most granular scalability is achievable under this model.

The above benefits do not represent an exhaustive list. In fact, we didn’t even get into the opportunity that comes with containers to run a service mesh, such as Istio, which could be another blog entry unto itself.

Let’s now move on to the investment in labor necessary to arrive at a containerized, repackaged or refactored application. Taking a naive view, let’s say you can get some extra resources for your team, because you cannot slow down the delivery of new function while modernization takes place. Containerization will take the least investment, with repackaging next and refactoring taking the most to accomplish.

A picture like this can be used to explain or show what can happen if you start applying these techniques.

First, the red line shows the cost to deliver an ongoing stream of new business capabilities. The implication is that the monoliths are likely to keep growing, so the net cost per unit of function delivered is growing too.

The blue line, to the left of t1, shows the cost of containerizing plus the ongoing delivery of new business capabilities. To the right of t1 you can see that the cost of delivering the same amount of new business capabilities is less, because we’ve enjoyed the benefits I outlined before. To be clear, after t1, take the red line and subtract the blue line from it to see the value of containerization.

Similarly, repackaging and refactoring have higher initial costs, but result in lower ongoing costs to deliver the same set of new business capabilities. This means red minus purple or red minus brown to be more precise.
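
The comparison in the figure can be written down as a simple break-even sketch (the symbols are illustrative, not a formal financial model):

```latex
% Illustrative sketch: c_red and c_blue are the steady ongoing delivery
% costs before and after containerization, I is the one-time investment
% spent before t1. Cumulative value after t1:
V(t) = \left(c_{\mathrm{red}} - c_{\mathrm{blue}}\right)(t - t_1) - I,
\qquad
\text{break-even at}\quad t - t_1 = \frac{I}{c_{\mathrm{red}} - c_{\mathrm{blue}}}
```

The same form applies to repackaging and refactoring, with a larger I but a larger cost gap, which is why their curves cross later but fall further.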

The figure above is focused on a single application. The incremental investments needed for containerization, repackaging or refactoring will likely shrink for similar applications that are attacked after the first one. Those subsequent efforts can build on the experience and automation approaches built out for the early modernization efforts.

In the real world, you may not always get a temporary set of new team members, although IBM can provide some of those for you, if you’re interested. Further, your team will probably deliver more function in total than before, rather than shrinking to match the curves above. All that said, hopefully the figure is illustrative. You probably want the version that shows your existing team being far more productive; you can get it by adjusting the y-axis, and I’ve done that here just as an example.

Most of the reduced ongoing development cost for an invariant set of delivered business capabilities really comes from improved velocity. This is enabled by the containerization techniques chosen, but must be complemented by the necessary DevOps automation and cultural adjustments attributed to successful cloud native projects.

Characterizing the value of increased velocity is something that your business sponsors will have to help you figure out. They are the ones wanting all these new business capabilities and should be able to run fancy spreadsheets with net present value calculations to show why having these capabilities sooner creates a more competitive business.

Let’s now go back to infrastructure costs for a minute. This can sometimes be a savings opportunity as well, but not always. There is a fixed cost of having Kubernetes clusters such as OpenShift, above and beyond whatever your current IaaS happens to be. There is also a set of management operations related to the OpenShift environments that doesn’t exist in traditional environments. All this velocity is not free.

However, from an infrastructure perspective, it is also the case that existing workloads might be overprovisioned in production, with ongoing utilization rates that are not that high. Test environments that sit unused most of the time in the old model also consume infrastructure. These realities can represent savings when moving to containers. It is often the case that the overprovisioning that can now go away adds up to enough to cover the fixed costs of the Kubernetes environments. If you’ve tuned your IaaS to run at very high utilization, this part of the business case will be more challenging. I’ll put in one more plug here for the more granular scaling model afforded by repackaging, and how much that can reduce resource consumption.
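
As a back-of-the-envelope illustration of that overprovisioning point (the numbers are made up for this sketch): suppose 40 VM-equivalents of capacity run at 20 percent average utilization, and the same workloads packed into containers can safely target 60 percent. Then

```latex
% Hypothetical numbers, for illustration only.
\text{capacity}_{\text{containers}}
\approx 40 \times \frac{0.20}{0.60}
\approx 13\ \text{VM-equivalents}
```

freeing roughly two-thirds of the capacity, which can then be weighed against the fixed cost of the Kubernetes clusters.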

From an operations perspective, what is certain is that there is change. You will have to measure whether operations costs go down or not, given your situation. One point is around skills: Kubernetes becomes a common control plane for managing a variety of different capabilities, whereas in the past each middleware capability had some unique actions, and thus required unique skills, for operational tasks such as alerts, monitoring, logging, user administration and security management, just to name a few.

The bottom line is that the business case for containerization is based on improved velocity with a likely contribution in terms of infrastructure savings. Operational costs should also be explored for potential savings due to the common control plane introduced by Kubernetes. Each application or workload should be evaluated and scheduled for modernization using one or more of the techniques described above.

Future blog entries will go deeper into some of these topics and get more precise, based on our ongoing experiences. Digging into this notion of how velocity is calculated and the financial value it provides is certainly worth some additional discussion in the context of an overall business case building activity.
