Journey To a Distributed Cloud — Introduction

Greg Hintermeister
AI+ Enterprise Engineering
6 min read · Apr 8, 2021

This series will show, through video demos and blog entries, how clients can successfully fulfill three top outcomes through a variety of capabilities provided by IBM and Red Hat.

This is part 1. You can jump to part 2, part 3, or view all videos here.

Introduction

As you know, the Cloud Engagement Hub is focused on helping our clients accelerate their cloud journey. Throughout my work, I have focused on what our clients worry about, what they are skeptical of, and what we can do to accelerate their vision.

Recently, I have captured three outcomes that consistently rise to the top of our clients’ needs:

  • Build, deliver, and manage our applications across a hybrid multi-cloud environment
  • Bring the best of cloud wherever we need it to run, yet maintain the simple “as a service” experience
  • Use a common control plane to manage our apps, platform, and security across our multi-cloud platform

As you think about these three outcomes, I’m sure you will agree they are quite technical. And for us that’s really the point: our goal is that if we can deliver these technical outcomes using the best that IBM has to offer, our clients will no longer see technology as a barrier, but rather as an accelerant to their overall business outcomes.

How? Let’s take a closer look at each outcome.

Build, Deliver, Manage Apps

If our clients can actually build, deliver, and manage their apps across a hybrid multi-cloud environment in a consistent and automated way, then they can focus on how to most quickly bring innovation to their customers. For example, if they want to protect workers in a factory, and their real-time analysis app needs to run in proximity to the workers, they can do that without long delays working around technical limitations.

This applies both to new applications that tap into IoT and AI and are built using cloud-native practices, and to legacy applications that are just starting their modernization journey. Think of it like this: How much could you gain if your development teams used the same tools, deployed onto the same platform, and used the same SRE-based automation to manage the apps (and the platform they’re running on)? This is what we are able to deliver in the Cloud Engagement Hub. We have mentioned several times that our application journeys consist of a mix of containerization, replatforming, refactoring, and enriching, as well as a mix of migration paths like VM migration. If we can help clients build, deliver, and manage their legacy applications with the same efficiencies as their cloud-native applications, then clients can start focusing on innovation, not managing their tech.

Bring the Best of Cloud Wherever Needed

If our clients can bring the “best of cloud” wherever they need it to run, they have the freedom to choose the best AI, the best Kubernetes platform, the best data sources, and run them where they are needed. Further, if they can maintain a simple “as a service” experience, our clients can focus on business innovation, user experience, and employee productivity; and not have to spend time and resources managing tech. In other words, they can innovate in their industry, not fill their days managing a platform, or an AI suite.

For us in the Cloud Engagement Hub, this is a force multiplier of the first outcome. How? Take a look at a couple of examples:

Kubernetes platform

We know that most clients run containerized workloads in Kubernetes, and even serverless capabilities are most efficient when run on a Kubernetes platform. We also know, through experience, that in a cloud environment it’s so easy to spin up additional Kube clusters that each dev team can have their own. The benefits are many: more isolation, fast to create, fast to delete, and simpler to manage. However, in an on-premises environment, an IT team must take the time to stand up a cluster, along with the master nodes and infra nodes; as a result, dev teams often share a larger cluster, isolated through namespaces. This is a reasonable pattern, but it’s simply more work and requires constant management.
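To make that shared-cluster pattern concrete, here is a minimal sketch of the kind of per-team isolation an IT team has to set up and maintain by hand on a shared cluster: a dedicated namespace plus a resource quota. (The team name and limits here are hypothetical, purely for illustration.)

```yaml
# Per-team isolation on a shared Kubernetes cluster.
# "team-alpha" and the quota values are illustrative placeholders.
apiVersion: v1
kind: Namespace
metadata:
  name: team-alpha
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-alpha-quota
  namespace: team-alpha
spec:
  hard:
    requests.cpu: "8"        # total CPU the team may request
    requests.memory: 16Gi    # total memory the team may request
    pods: "50"               # cap on concurrent pods
```

Multiply this by every team (plus the RBAC rules and network policies that usually accompany it) and the ongoing management burden becomes clear; it is exactly the work that disappears when each team can simply get its own cluster on demand.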

However, if an IT team could spin up additional clusters as easily as they can in the cloud, they could focus on more important things than maintaining namespace isolation. To achieve that, the IT team needs a Kubernetes service managed by a cloud provider with deep SRE experience, like IBM.

Software/Middleware

Cloud services have spoiled us. From consumer services (Flickr, Box, Gmail, …) to SaaS for businesses, to cloud catalog services used by developers (AI, object storage, data sources), the speed at which they can be consumed is remarkable. And the fact that we no longer need to manage the software as we did “back in the days of yore” (applying updates and patches, continually monitoring for the inevitable glitches) creates a sense of freedom that is hard to ignore.

As a result, the challenge becomes: how do we build, deliver, and manage apps across any location when the required software/middleware services also need to be in proximity to the app? In the earlier example of an app protecting workers in a factory by using AI to analyze video feeds, that AI needs to run in the factory too. The challenge is: How can it be available “as a service” so we don’t need dedicated software maintenance teams? To achieve that, the dev team needs an AI service deployed and managed by a cloud provider with deep SRE (and AI) experience, like IBM.

Utilize a Common Control Plane

Finally, if our clients are to build, deliver, and manage applications, and utilize the best of cloud in any location with a simple “as a service” experience, it is imperative that this hybrid multi-cloud environment, distributed across many locations around the world, can be managed using a common control plane. This includes the applications that are deployed, the software/middleware that is deployed, the platform those applications (and middleware) are running on, and the security policies governing how secure the distributed cloud is.

The challenge here is: How can an environment with locations distributed world-wide be centrally managed? How can a developer use logging and monitoring for a range of DevSecOps activities, or an SRE use the control plane to automate away toil? Is there a way to have common private connectivity so my pipelines can deliver not just the deployment files but the image…scanned and validated? To achieve that, all personas need a way to tap into a centralized control plane with connectivity throughout the distributed cloud, managed by a cloud provider with deep integration experience, like IBM.

The Series — Journey to A Distributed Cloud

My goal for this series is to show, through video demos and blog entries, how clients can fulfill these three desired outcomes through a variety of capabilities provided by IBM and Red Hat.

Here is a summary of each session:

1. How to Establish your Distributed Cloud using IBM Cloud Satellite
To start this journey, you need a distributed cloud. The great thing is that it’s not a “go buy a fleet of clusters on day one” proposition. Rather, it’s a method, a way of working, that enables you to build as you need it. This first session will dive into IBM Cloud Satellite and how it works.

2. How to Deploy and Manage Apps Across your Distributed Cloud
A distributed cloud isn’t worth much if you can’t build, deploy, and manage your applications across that fleet. Part 2 will focus on how applications can be deployed across multiple clusters using IBM Cloud Satellite.

3. Observability — Common Logging, Monitoring, Auditing
With your distributed cloud provisioned, and your applications running across it, observability is essential. Part 3 will focus on how IBM’s control plane can be used regardless of where the apps, platform, and middleware are running.

4. Extending the reach of your distributed cloud into private services
Every cloud has unique services, and some are better than others. Further, most clients have already made extensive use of certain services. Part 4 will show how IBM Cloud Satellite can extend the reach of an application through a secure (and audited) connection via “Link Endpoints.”

5. Red Hat Advanced Cluster Management + IBM Cloud Satellite: A Great Pairing
Managing multiple Kubernetes clusters can be a challenge, or it can be quite simple. Part 5 focuses on how pairing IBM Cloud Satellite, which delivers “as-a-service,” SRE-managed OpenShift anywhere in the world, with Red Hat Advanced Cluster Management, which provides deeper cluster compliance and configuration, can be an ideal combination.

6. … and more to come, based on your suggestions…

I look forward to this! Stay tuned for each session; I will update this post with links to each one as they are published.

A Last Question

Let me leave you with this question: What examples, demos, or behaviors would you like me to show off that are not listed here?


Greg is an inventor, musician, believer, husband, father, parrothead. His expertise can be found helping clients, his heart can be found wherever his wife is.