Ben Pritchett
Hybrid Cloud How-tos
5 min read · Sep 26, 2022


Photo by Suzanne D. Williams on Unsplash

Change is inevitable. Today’s enterprise applications are more complex and more integrated, with more security and reliability requirements than yesterday’s. And I do mean yesterday’s: requirements for any given IT organization can change by the day!

How can an organization modernize thousands of applications simultaneously, in a landscape of changing security, reliability, and operations requirements? If you’ve been following this blog series, then you probably know our answer relies on the value of a hybrid cloud platform. The way application teams and end users interact with your hybrid cloud platform directly impacts their day-to-day coding and deployment. In this article, I’ll focus on some tactics to improve the interfaces on our hybrid cloud platforms, ultimately to scale application modernization in such a dynamic environment.

Extend interfaces to integrate services

When application teams need to modernize for a new platform deployment, the organization is likely to require new integrations and impose new requirements. These could be security-related, focused on improved reliability, or aimed at optimizing the application’s cost or resourcing. Oftentimes these requirements aren’t focused on the application runtime itself, but on the ecosystem around the runtime. I often summarize this challenge as “No enterprise application is only its runtime”.

Let’s imagine an application team needs to create solutions for today’s modern enterprise application architecture and the integrations and management processes that come with it:

Today’s ecosystem of enterprise application components

Even if the number of application teams that need to solve for this modern landscape is small, the risk is that the organization will be overloaded by multiple competing solutions to the same modernization activity. This is an area where extending the platform’s interfaces can introduce a central point of integration with other key services and deliver all modern application requirements.

One integration that we created within CIO Hybrid Cloud to meet logging requirements is a custom logging operator (based on the Kubernetes Operator pattern). Since our platform is built using OpenShift as the hybrid cloud abstraction, any automation defined using an operator approach can translate to public or private cloud, and to different regions, environments, and configurations. Within the CIO Hybrid Cloud platform, applications can easily forward logs to a central logging service based on a few lines of configuration. Operators within Kubernetes provide a framework for separating the “what” from the “how”, so application teams can focus on what they want to do:

“I need to have application logs sent to an external index”

and not how it’s done. In our platform’s experience, when rearchitecting an application to meet new requirements, the “how it’s done” is where much of the application team’s time is spent. The platform can remove that people cost by providing common solutions for application modernization.
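To make the “what, not how” idea concrete, a custom resource for a logging operator like ours might look something like the sketch below. The API group, kind, and field names here are illustrative, not the actual CIO Hybrid Cloud API; the point is that the application team declares intent in a few lines, and the operator handles the forwarding machinery.

```yaml
# Hypothetical custom resource for a logging operator; kind and field
# names are illustrative, not the real CIO Hybrid Cloud API.
apiVersion: logging.example.com/v1alpha1
kind: LogForwarder
metadata:
  name: app-logs
  namespace: my-application
spec:
  # The "what": collect application logs from this namespace...
  inputs:
    - application: {}
  # ...and send them to an external index. The operator owns the "how":
  # collectors, buffering, retries, TLS, and credential rotation.
  output:
    type: external-index
    endpoint: https://logs.example.com:9200
    secretRef: log-index-credentials
```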

Use common patterns for interfaces

When driving a large application portfolio to a modern, hybrid cloud platform, it’s also important that the independent interfaces for the platform provide common results. This is generally a challenge once the platform is scaled, less so on initial application onboarding. Using our example above, if our logging operator samples logs at a particular rate for one workload, and doesn’t sample logs for another, the application team will likely be confused about what is happening under the hood. This could also negatively impact application health and drive a larger support load onto the platform team. Fortunately the logging operator doesn’t do that, but the same concern applies to differences between CLI and UI interactions, or between parts of the platform’s infrastructure.

We applied this “commonality of interfaces” approach to our CIO Hybrid Cloud virtualization offering. For our platform, both container workloads and virtualization workloads are deployed to OpenShift-based environments (the latter managed by OpenShift Virtualization). As expected with container workloads, ephemeral volumes to boot the container runtime are deleted when the container process ends. When adding virtualization within the same platform, part of the design enforces an ephemeral boot disk with the virtualization runtime so that when the virtual machine stops or restarts, the boot disk is deleted. This design organically drives application teams to define application configuration as code, which is required when deploying containers. Regardless of container or virtualization deployment, the application team is getting a similar experience in workload lifecycle.
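In OpenShift Virtualization (built on KubeVirt), this ephemeral-boot-disk behavior corresponds to a `containerDisk` volume, which discards writes when the VM stops. A minimal sketch, with an illustrative image and cloud-init payload:

```yaml
# Sketch of an OpenShift Virtualization (KubeVirt) VirtualMachine whose
# boot disk is an ephemeral containerDisk; the image and cloud-init
# content are illustrative.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: bootdisk
              disk:
                bus: virtio
            - name: cloudinit
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        # containerDisk volumes are ephemeral: writes are discarded when
        # the VM stops, so configuration must live as code (for example,
        # in cloud-init) rather than on the boot disk.
        - name: bootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
        - name: cloudinit
          cloudInitNoCloud:
            userData: |
              #cloud-config
              packages:
                - httpd
```

Because nothing written to the boot disk survives a stop, the cloud-init volume becomes the natural home for configuration, which is exactly the configuration-as-code habit container deployments already require.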

Design self-service interfaces

These two tactics ultimately support the self-service nature of the platform. When modernizing an application portfolio in the thousands, we cannot rely on human touch-points to complete this activity. The way we interact with the platform, and the activities the application team has to undertake, need to be accessible. Being able to access an interface is different from being authorized to use it: accessibility means knowledge and support are wrapped around interactions with the platform. Take the ephemeral boot disk for virtual machines from the last section: what if we built this lifecycle design but didn’t communicate it to end users? The application team would likely assume that their boot disk persists across a reboot when it does not.

With our CIO Hybrid Cloud platform, a rich set of documentation exists for end users that defines these interactions with the platform. On top of that, our CIO Hybrid Cloud portal has inline help and workflows baked into how end users interact with the hybrid cloud. And on top of that, if there are issues interacting with the platform, we provide links to the support team and consultation requests in the UI. For example, when registering a workload with our central groups management, the portal notes the expected integration time (around five minutes). The portal not only integrates the workload automatically with centralized role-based access control, but also sets expectations so that end users don’t need to contact the platform team (though the platform team is there for any support needs). Interactions that do flow back to the support team become a backlog of portal improvements: anytime a ticket or consultation request is created, we can review that support need and provide additional built-in knowledge or workflows.

Gather feedback from your interfaces

All of these approaches to architecting hybrid cloud require that a platform team continually review the ways end users interact with its interfaces. The way we review interface data answers a few questions:

  • What are the most used interfaces on the platform?
  • Are there any interfaces that can be sunset?
  • Are there any interfaces that contain errors?
  • What can end users not interact with in our platform, and do we need to create a new interface?
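One way to start answering the first question on an OpenShift cluster is to lean on the monitoring stack you likely already have. Assuming cluster monitoring exposes the standard `apiserver_request_total` metric, a Prometheus recording rule can tally API usage by resource and verb; this is a sketch, not our platform’s actual reporting setup.

```yaml
# Sketch: a Prometheus recording rule tallying Kubernetes API usage by
# resource and verb over the past week, to surface the most-used
# interfaces. Assumes cluster monitoring exposes apiserver_request_total.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: interface-usage
  namespace: openshift-monitoring
spec:
  groups:
    - name: interface-usage
      rules:
        - record: platform:apiserver_requests:rate7d
          expr: sum by (resource, verb) (rate(apiserver_request_total[7d]))
```

Interfaces with near-zero traffic become sunset candidates, and spikes in error-class responses point at interfaces that need attention.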

One of the best sources of feedback for platform interfaces is…you! In my years of experience designing and building hybrid cloud platforms, never using your own platform is one of the pitfalls platform teams can fall into. I would be remiss if I didn’t mention I’ve fallen into this trap myself. Take some time to build and deploy modern applications on your own platform, and note both the areas of improvement and the positive interactions you have when deploying on a hybrid cloud platform.

Ben Pritchett is a Chief Platform Architect at IBM based in Raleigh, North Carolina. The above article is personal and does not necessarily represent IBM’s positions, strategies or opinions.
