What is a Future Proof Information Technology Architecture? An Application-Centric View

Eric Herness
Cloud Journey Optimization
14 min read · Feb 23, 2024

Introduction

I get asked a lot these days about what it means to create a ‘future proof’ architecture. This occurs for several reasons, not the least of which is that business and information technology (IT) executives are looking for ways to get more innovation and business value from their IT spend, versus the constant payout required for things like maintenance and modernization. We obviously spend a lot of time in IT these days on the topic of modernization, which might be a fancy phrase for paying off technical debt. While I like modernization, especially application modernization, I agree that putting more money into innovation that creates business value is top of mind in the C-suite. It should be. And it doesn’t matter what it is called.

This blog will cover some of my perspective on future proof architectures. Frankly, this is a bit more of a stream of thought than something I’ve researched and thought through completely. For the most part, I’m letting my 35+ years of experience in the IT industry be the bedrock for my treatise on future proof architecture.

The first section asks the somewhat obvious question of why we need future proof architectures. Next, some broad principles are outlined and described. This is followed by introducing some meatier and more technically backed topics. Then, there’s a quick reality check with some cost considerations. Future proof architectures are not free. This blog entry then ends by cross-checking the guidance given against the reasons to consider this topic in general.

Why Future Proof IT Architectures?

Beyond the obvious stated in the introduction, we need future proof architectures so that applications:

1. can be quickly and safely enhanced to respond to new functional business requirements.

2. can be quickly repurposed or reprovisioned to serve new markets that have limited or different available building blocks.

3. can easily integrate and leverage new technologies and services that arise and present opportunity to further accelerate delivery of business value.

4. can scale, up and down, as market demand evolves.

5. can easily embrace and support new compliance and security requirements presented by evolving markets or different markets that are to be served.

If we look at the above list differently, it turns into the list of requirements a future proof architecture must meet. These are only some of the requirements. The playing field, and thus the requirements, are constantly changing. New technology and new business requirements together influence the future proof architecture story.

Applications are the Starting Point

Yes, this is about enabling applications to be future proof. Applications need to respond quickly to the requirements laid out in the prior section. Thus, this isn’t necessarily about making infrastructure platforms generically future proof. That is the job of the cloud providers and others who intend to be IaaS/PaaS providers. If you work for a cloud provider or other platform provider, I will assert the future proofing task is harder. For everyone else, start with the applications. This notion in itself might be a bit controversial in some circles, but that is my position.

Historically, there are applications out there that have run on multiple different bare metal operating systems, then on VMs, and now in Kubernetes-based containers both on-premises and on various public clouds. In my decades working on WebSphere, I saw Java and open, standards-based application frameworks enable this future proof quality. Some of those old hardware platforms are long gone, while the applications chug away. Sure, there was technical debt to pay off as Java and those frameworks evolved, but for the most part, it was a great way to get early access to new capabilities and more scalable architectures while minimizing the technical debt.

There is enough here to contemplate when taking this application-centric view. In some ways, the IaaS and the PaaS become highly reliable commodity components that are necessary but not sufficient to declare that you have a future proof architecture. The discussion becomes more about how the applications leverage those components. Maybe I should have named this discussion something more closely linked to future proof application guidelines? Maybe there is another blog entry on future proofing a PaaS/IaaS platform? But do not despair, the following sections will get quite precise on some PaaS topics, and we can argue later about whether they are application future proofing guidance or apply to PaaS services.

A first note on custom applications versus packaged applications — If you work directly with or for a large organization that has the classical set of applications that support the business, you are probably dealing with some combination of application types. Some are custom-built applications while the rest are either packaged applications, SaaS based business capabilities, or a blend of both packaged applications and SaaS based business capabilities. Most of what is described in this blog entry is targeted at those building custom applications. However, some of the topics do apply equally well to software architectures underpinning packaged applications or SaaS based business services.

To get this core topic properly addressed, the next section lays out some principles to follow when constructing a future proof architecture, while the section after that more prescriptively lays out some next level technical topics that ultimately meet the requirements of future proof architectures laid out above.

Principles

Principles, in this context, are guiding lights and directions, more so than specific technical elements. In fact, as you will see, some of these are even a bit distant from the technical. The foundational principles are:

1. Agile development

2. Culture of Automation and Self-Service

3. Open Source (community, supported distribution)

4. Buy versus Build

5. Conservation of Moving Parts

6. Evolution towards “Shared Nothing”

7. SRE and XLA Philosophy

Agile development

Embracing agile development is part of future proofing; make it a principle. There is an autonomy and a can-do perspective that comes with this approach. This approach helps promote a culture of automation as well.

One might also concede that this approach offers too much autonomy to developers and risks individual teams making decisions that increase technical debt in the long term in the spirit of getting a new feature delivered. That is something to watch for. However, a good agile approach will have platform teams or enabling teams helping provide services and explore new capabilities.

Culture of Automation and Self-Service

A culture of automation encourages developers to consume capabilities which are easily provisioned and de-provisioned. The act of automating will question special configuration settings and just won’t tolerate extra manual configuration. A platform team that is assisting or advising on automation will be the gearbox that adjudicates differences and ensures any nuances are truly warranted. Automation also makes testing easier and thus enables and encourages more testing, which leads to better software quality.

The most valuable automations are those consumed in a self-service, persona-sensitive way. Other automations might be automatically triggered when needed. Don’t think of an automation as something you always need to have someone else run for you.

Another angle on future proofing that comes in here is that of currency and technical debt. With the right automation philosophy and the tools to make it real, you can update the various elements of the architecture more constantly. Fewer changes per update and more frequent updates keep technical debt paid down. Underneath, this means automation that includes blue/green testing, rollback, and a strong suite of automated test cases.
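
To make that concrete, here is a minimal sketch of the blue/green idea on a Kubernetes platform. The service and label names are hypothetical; the point is that the cutover, and the rollback, are a one-line declarative change that automation can drive:

```yaml
# Hypothetical blue/green setup: "orders-blue" and "orders-green"
# Deployments run side by side; this Service routes production traffic.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders
    track: blue   # flip to "green" to cut over; flip back to roll back
  ports:
    - port: 80
      targetPort: 8080
```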

The leading edge of this is immutability, of the application components and the infrastructure services upon which they depend.

Open Source

Leverage open source. Leverage open-source APIs and frameworks that have multiple committers and large, vibrant communities as well as supported distributions. Think ahead to where a given open-source project might be in 5 years. Java enjoyed and still enjoys innovation, investment, and community as well as supported distributions. Kubernetes has similar excitement and activity. We all know of projects that died out as well.

Using the right open source also brings with it a set of available skills. Look for open-source technologies that have multiple active distributions. Look for those that appear on university and technical school curricula.

Buy versus Build

Buy application runtimes, buy middleware, and buy databases. Buy them ‘as a service’ whenever possible. If you do this and you then allow the service provider to curate the evolution, this is normally goodness. If you are building your own Kubernetes stack or your own database stack by starting with the open source, that’s a future-proofing anti-pattern. This roll-your-own (RYO) stuff will become a source of technical debt. I’ve had IT folks tell me over the years why they should build their own CORBA ORBs, application servers, ESBs, and workflow engines. All of them built up debt and later found it very challenging to do innovative things because of the operating costs they were incurring. The application teams often then went shadow IT, and there was no more future proofing going on.

Buy applications and buy them as SaaS properties, especially for those capabilities that are necessary, but not differentiating, for your business. While that sounds right, make those buy decisions in the context of how future-proof ready those SaaS properties are. Are they configured with standards-based mechanisms? Are they able to plug into your logging, monitoring, events, and other platform services in standard ways? If yes, you’re ensuring some level of future proofing.

Think ahead to a world where there is a larger portion of the application portfolio that is packaged versus custom. If you see silos emerging in that future look, then keep reading and update the criteria and context used to do the buying. Don’t resist the buy, do it right.

Conservation of Moving Parts

If you are leveraging a Java-based runtime, select just one or two for good reasons. Do something similar with database capabilities and other integration capabilities. Too many moving parts and too many choices within the same categories increase the surface area of your overall architecture. Again, choice is good, but within reason. How many relational database services do you really need in the cloud? Pick a couple, put usage guidance in place, and hurry on to the NoSQL databases, document databases, in-memory caches, and the other things that are needed in the data services space.

Evolution towards “Shared Nothing”

Build new custom applications with the right amount of architecture for scaling. The cloud affords almost unlimited scaling if a ‘shared nothing’ architecture is pursued. Realize that this will require a bit more programming, but aside from some maintainability challenges due to code complexity, architectures that minimize locking, lean on eventually consistent semantics when practical, and publish changes for others to consume will be the best way to future proof an application for scaling.

Pulling this off does require some prescriptive guidance in the form of programming patterns (event driven, orchestration, choreography, etc.), starter projects, and other aids which keep the complexity to a minimum.
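
As one illustration of the ‘publish changes for others to consume’ pattern, here is a minimal sketch in Java, assuming Apache Kafka as the event backbone (the topic name and class names are mine, purely illustrative). The service publishes an event about its own state change rather than updating shared state under a lock:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEventPublisher {
    private final Producer<String, String> producer;

    public OrderEventPublisher(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    // Publish a state change for other services to consume asynchronously,
    // instead of locking and updating a shared database row.
    public void orderPlaced(String orderId, String payloadJson) {
        producer.send(new ProducerRecord<>("orders.events", orderId, payloadJson));
    }
}
```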

SRE and XLA Philosophy

An SRE model is essential to ensure that applications start and remain available, resilient, and performant throughout their life cycle. Among the many values of an SRE philosophy is that of ensuring encountered problems do not occur again. This means ongoing curation by SREs. Make sure the SRE remit includes remediation that is built right into the application. While runbooks are fine for some things, having applications that themselves are more resilient should be pursued in many cases. I’ve never been a fan of runbooks that reboot or recycle runtimes while never really addressing root causes. I guess that comes from my many years working in operating systems and middleware development. You must get the gremlins out or they will come back later when the stakes are higher.
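
As a small example of what remediation built into the application can look like, here is a minimal Java sketch of a retry with exponential backoff (the class and thresholds are illustrative, not a prescription). It rides out transient failures on its own, and still surfaces the root cause when retries are exhausted rather than hiding it:

```java
import java.util.concurrent.Callable;

public final class Retry {
    // Retry a transient operation with exponential backoff instead of
    // relying on an operator-driven runbook to recycle the runtime.
    public static <T> T withBackoff(Callable<T> op, int maxAttempts,
                                    long initialDelayMs) throws Exception {
        long delay = initialDelayMs;
        for (int attempt = 1; ; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts) {
                    throw e; // exhausted: surface the root cause, don't bury it
                }
                Thread.sleep(delay);
                delay *= 2; // back off exponentially before the next attempt
            }
        }
    }
}
```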

XLAs bring another important dimension to an overall philosophy. This isn’t necessarily an architectural principle for future proofing, but it is a way of measuring success that will drive changes into the applications. Those changes are for the better and will allow an application to contribute to the business for longer time periods.

A Few More Technical Topics

Maybe you are now wondering if this article will get deep enough to make a difference. In some ways, what has preceded this section is the setup. Now, onto the next level of detail for a couple of topics. Again, I’m seeing more blogs in the future here. For now, let’s start with some important themes that influence future proofing.

Microservices, Containers and Serverless

Choose a microservices and serverless based approach for your business logic. Future proofing business logic is all about getting it into consumable chunks. Err on the side of granularity to begin with, and back off from that as you look at performance and places where looser coupling is just a dream. Future proofing means thinking about loose coupling not only from a code perspective, but also from a management and scaling perspective. Monoliths are not cheap to scale.

A microservices approach also lets us talk about polyglot in a realistic way. We had this back in the CORBA days. In the Java stampede, we gave that up, and for a while didn’t miss it much. In today’s world, there are too many good languages that do a certain class of work well. This means you can not only switch languages, but you can also try different distributions of application runtimes in the same language. I think we’ve used 6 or 7 versions of Java application runtimes over the years in our Stock Trader example.

A future proof architecture will also need an underlying Kubernetes environment that is ready to handle the applications. This is done by judiciously leveraging service meshes and other custom resources (CRDs) to keep the business logic in those microservices’ source code as pure as possible. Leave routing, encryption, and many other things to occur as part of the platform. Further, have the microservices specify declarative metadata (YAML) that allows the platform to do things like scaling, scheduling, and other management tasks.
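
For instance, assuming a standard Kubernetes cluster, the declarative scaling metadata can be as small as this HorizontalPodAutoscaler sketch (the names and thresholds are hypothetical). The platform, not the application code, watches load and adjusts replicas:

```yaml
# Hypothetical autoscaling policy: the platform scales the "orders"
# Deployment between 2 and 10 replicas based on CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```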

Application Portability

This is where things start to get a bit trickier. We’ve outlined principles related to ‘Open Source’, ‘Conservation of Moving Parts’ and ‘Buy versus Build’, not to mention ending up with a lot of automation to support it all because of the ‘Culture of Automation’ principle. Now ask yourself if you want your application to be truly portable across public cloud providers.

I get reactions to this question that are all over the map. Some say that they are only on one cloud, so why does portability matter? I can tell you that cloud providers do compete, so I don’t need to say more on that one.

Others tell me that they want portability. That’s usually before we get too deep into the weeds of reality. Some elements of portability are cheap to achieve and come via principles and selections of services. Avoiding container-platform-specific capabilities and sticking to the open-source principle described earlier is obvious. This makes the source code portable, as well as the YAML that goes with it. Other elements take on more interesting notes. Data services and serverless come to mind. With databases, preferring services that are available on all cloud providers is a good plan. Dealing with those you need that are not widely available with the right qualities of service is going to be costly. However, even for those you need to do special things to support, stick to those that are open-source based and have large communities and commercial distributions.
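
A small illustration of keeping the source code portable: reach databases through a standard API such as JDBC and push the provider-specific endpoint out to configuration. The environment variable names below are invented; the idea is that moving clouds means changing configuration, not code:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public final class Database {
    // Standard JDBC keeps the code identical across clouds; only the
    // externally supplied URL and credentials change per environment.
    public static Connection connect() throws SQLException {
        String url = System.getenv("DB_URL");       // e.g. jdbc:postgresql://...
        String user = System.getenv("DB_USER");
        String password = System.getenv("DB_PASSWORD");
        return DriverManager.getConnection(url, user, password);
    }
}
```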

This topic deserves more blog entries, some of which we’ve got on the drawing board.

API Based Business Services and Exposing Data

I would be remiss not to mention building API based services when it comes to future proofing. Equally important is an architecture that allows for the addition of APIs as an application evolves. It is okay to have a lot of APIs if there is a good underlying data architecture, one where the exposed data is not dependent on or directly reflective of the storage approach for that data.

APIs are not just REST, either, in this modern world. Remember our discussion earlier about loose coupling. This means that events are part of the API. This means that the topics of published events become part of the API as well.
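
One way to treat an event as part of the API is to make its topic and payload an explicit, versioned contract. A minimal Java sketch, with names that are purely illustrative:

```java
// A published event is API: its topic and payload shape are versioned
// contracts that consumers depend on, just like a REST resource.
public record OrderPlacedEvent(
        String orderId,
        String customerId,
        long amountCents,
        String occurredAt) {

    // The topic name carries the version so the schema can evolve
    // without breaking existing consumers.
    public static final String TOPIC = "orders.order-placed.v1";
}
```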

Cost Considerations

Most of the principles and some of the more technical topics are not costly to embrace. Others have implications. For example, how far you go towards an event driven or a CQRS type of application architecture comes with a cost. Programming asynchronously is going to mean more code, more complex code, and thus the need for a richer skill mix of developers.

Application portability sounds good. Leveraging Kubernetes and leveraging a set of databases that are available on all public clouds and in private clouds is obviously a good idea. In other cases, having to purchase and manage software that might not be in one or more of the cloud providers’ catalogs places a financial (licenses) and operational burden that likely exceeds that of leveraging a cloud provider managed service. In some cases, a particular cloud provider might have a superior and unique service. Leveraging that can make developers more productive but reduces application portability. Think all of this through before deciding exactly how portable you want your applications. Mandates of 100% portability might be too extreme in a few cases.

Availability is another interesting topic when thinking about future proofing, one which has a direct bearing on cost. Applications, for the most part, should be able to run successfully, supporting practical SLAs and XLAs, in a variety of configurations. If the minimum footprint is too high, that application might not be as future proofed as you think. It is perhaps subtle, but it is important. For example, don’t build applications that depend on out-of-region DR via automated replication. Make sure there’s a less aggressive backup capability that can be configured.
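
As a sketch of what ‘configurable’ might look like here (the property names are invented for illustration), the application reads its resilience posture from configuration rather than hard-coding cross-region replication:

```yaml
# Hypothetical application config: the resilience posture is a
# deployment choice, not a hard-coded assumption in the application.
resilience:
  dr-mode: backup-restore   # or: async-replication, active-active
  backup:
    schedule: "0 2 * * *"   # nightly backup when replication is not configured
    retention-days: 30
```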

Wrapping Up

With the principles now laid out and some more technical elements specified, let’s look at how this can all come together to check our work. If we take some of the typical requirement categories outlined earlier and see how the principles might apply, this will at least validate some of the suggestions and topics we have covered.

Anytime there are new functional requirements, the effectiveness of an automation capability for quickly provisioning environments that facilitate development and test is pressure tested. Getting a CI/CD pipeline executed quickly, without letting any specific checks and balances slip through the cracks, is also important. Then, having the ability to leverage the right technology to best address that new business requirement and deploy it into a tested, scalable, and secure platform puts developers in the right place to meet those new requirements.

New markets to be served via channels or geographic expansion are the next need to be considered. Here, one expects near-seamless provisioning into new instances for development, test, and production. Count on cloud provider portability to enable the applications to easily land on whatever platforms are available.

New technologies should be able to be bolted into a solution quickly and safely. This is due in part to leveraging an API manager. Expect these new technologies to independently scale and be quickly leveraged using the buy versus build principle. The automation platform must enable provisioning of new capabilities.

Market dynamics and consumer behaviors require a strong scale up, scale down capability. This must be granular down to individual consumed services and steps in business processes. Proper automation and leveraging of containers and cloud provider based managed services are the dimensions to count on here.

Finally, a change in security and compliance posture necessitates that the principles and dimensions laid out above pay off in many forms. Automation in the form of CI/CD enables quick changes to application scanning and the rules applied. Conservation of moving parts is a principle that reduces the surface area which must be addressed. A microservices model can isolate some of the required changes and limit impact.

Take some of your business requirements and pressure test the above principles and technical topics that I think are important from a future proofing perspective. I am sure you will come up with some more things that should go into these lists. Leverage my list as a seed.
