Searching For the Perfect Stack

OpenShift Ninja
Mar 2


So far in this series, I have talked a lot about layers that most application developers just take for granted: your processes and routines for writing software, your infrastructure resources, your data storage. Teams often reuse the same patterns for these layers on every project, but I believe you should always evaluate the whole stack to determine whether an improvement can be made or a change is required for your project. In other words, don't ignore those layers, even though they are fairly static and don't change often.

One thing I see far too often in enterprise software is teams building every application the same way, using the same baseline frameworks, servers, etc. It makes sense from the standpoint of the middleware engineers, who aren't interested in the details of your application. They deal with a lot of different teams and are mostly focused on setting things up for you, and having to manage specialized configurations for individual applications can be a huge drain on their productivity. The problem is that this kind of development environment can be incredibly wasteful.

For example, I have worked with teams that used a really heavyweight IBM WebSphere cluster for each application because they were used to dealing with heavy loads and needed additional capacity to handle spikes in traffic. When you look at the utilization of these servers, it is often under 20%. That means 80% of the capacity of these servers is being wasted for most of the life of the application.

That is money coming out of your team's budget. Money that could instead be used to hire more people, send people to conferences, or license better tools. This is why there has been such a drive to move everyone off of bare-metal servers and onto VMs: with VMs, you can pack your servers more densely and get better utilization out of them.

In a lot of ways, using VMs is like defragmenting your hard drive. Right now, on your hard drive there are thousands or even millions of files that have all been allocated some space. As files are created and deleted, gaps start to appear between the files, which means that you have space that could get wasted if files are too big to go in the gaps. You can defragment your hard drive by moving the files around and consolidating the gaps. Ideally, files that are related are placed together so that disk reads easily pick up all the related files in one read.

The difference, though, is that you still need to keep extra capacity available to your VMs for those spikes in traffic. So although you have closed some of the gaps by bringing multiple VMs onto a server, you still have wasted capacity. I talked about this problem earlier in this series when talking about infrastructure, and the reason I am bringing it up again is that middleware is often the biggest consumer of your infrastructure capacity.

Your middleware servers are often large because they run really large applications. Applications that require thousands of threads and many gigabytes of RAM and large terabyte drives mounted into the VMs. Deployments are often hundreds of megabytes if not gigabytes in size, and can take many hours to complete. If one of these servers goes down, the hit in capacity can cripple an application, so you have to have other hardware sitting idle close by or in another datacenter ready to go.

There is a better way, and if you have been reading along this far, you already know what I'm going to say. Microservices. Containers. Serverless. These are the approaches you need to adopt if you really want to scale big while making writing, building, testing, deploying, and using your applications much easier. They will also accelerate your releases so that you can address issues and feature requests from your customers much faster.

Imagine being able to deliver a feature your customer requested later the same day they asked for it. "Can you change this modal to have an additional button that does this?" "Sure thing. We'll have it up in two hours." Git clone, write some quick React Testing Library tests, implement the React components to deliver the functionality, commit. Jenkins kicks off a build, which is tested, scanned, and pushed to Artifactory. Now you just need to update your image tag on production and then boom! Deployed. You can do it while customers are still hitting your application!
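The developer's side of a quick turnaround like that can be sketched as a handful of commands. This is a sketch only, assuming an OpenShift/Kubernetes cluster with a deployment already wired up to Jenkins and Artifactory; the repository URL, `myapp` name, registry host, and tag are all illustrative, not a prescribed setup:

```shell
# Clone, test, and push the change -- Jenkins picks up the commit
git clone https://git.example.com/team/myapp.git && cd myapp
npm test                        # run the React Testing Library suite
git commit -am "Add extra button to modal" && git push

# Once the pipeline has published the new image to Artifactory,
# roll it out by pointing the deployment at the new tag
oc set image deployment/myapp myapp=artifactory.example.com/myapp:1.4.2

# Watch the rolling update swap pods in with zero downtime
oc rollout status deployment/myapp
```

The zero-downtime part comes from the rolling update: old pods keep serving traffic until the new ones pass their readiness checks, which is why customers can keep hitting the application mid-deploy.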

That’s right folks. Zero downtime, fast delivery, and happy customers that see you are able to be agile and deliver functionality quickly. It really is possible to move away from the old middleware nightmares and land in a world where managing your applications can be done so much better. Need more capacity? Just spin up more containers. Use the cloud if you don’t have it locally. Flexibility is your new middle name.


I've talked a lot about the different layers that go into building applications, and one of the most cost-intensive is your middleware. These components are often large servers requiring lots of capacity and redundancy, and they are often underutilized. Even converting from bare metal to VMs doesn't entirely solve the problem. Instead, you need to change how you implement these middleware services and move to small, scalable components in containers or serverless. Once you do this, you can better utilize your resources, address customer issues faster, and eliminate costly downtime. This is the future, and it is already here today.

Only one more article left in this series, talking about that most important layer — the user interface.
