Write once, deploy anywhere — consistent storage for container workloads

Ben Randall
3 min read · Feb 28, 2024

Whether you’re deploying containers or virtual machines (or likely both) in Red Hat OpenShift, your applications are going to need some amount of storage. Sure, not every workload is a storage-intensive database, but even what we typically think of as stateless apps tend to maintain some state somewhere. So what are your choices?

  • Object storage — a scalable repository for unstructured data. We often equate object (and S3) with public cloud, but there are numerous on-prem options available.
  • File storage — popular for container workloads that need to manage their state locally. Shared file storage is the only way to provide RWX (ReadWriteMany) volumes with the filesystem volume mode.
  • Block storage — popular with VMs. Some container apps use block storage, but typically the requirement is an RWO (ReadWriteOnce) persistent volume rather than the block volume mode itself (meaning a file-backed volume often works just as well). See the PVC sketch after this list.
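
To make the access-mode and volume-mode distinction concrete, here’s a minimal sketch of the two claim shapes described above. The storage class names are placeholders, not classes from any particular cluster:

```yaml
# An RWX, filesystem-mode claim: needs shared file storage behind it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany            # RWX: mounted by many pods across nodes
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: my-file-storage    # placeholder class name
---
# An RWO claim, as a VM disk might use. volumeMode: Block hands the pod
# a raw device; Filesystem (the default) is often all an app really needs.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk
spec:
  accessModes:
    - ReadWriteOnce            # RWO: one node at a time
  volumeMode: Block
  resources:
    requests:
      storage: 50Gi
  storageClassName: my-block-storage   # placeholder class name
```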

There are plenty of ways to meet these storage needs. Hosting your clusters on Amazon? Grab some EBS volumes and a few buckets. Running on Azure? You have your choice of disk storage classes and Blob Storage. Deploying on-premises? You have choices too — CSI drivers, container-native storage, local devices…. It all depends on how OpenShift is deployed. Is it on VMware? Bare metal? IBM Power? Or do you have mainframes running LinuxONE?

Yes, you have choices, but therein lies the problem. The way in which your application consumes storage is tightly coupled to the underlying platform. And most enterprises aren’t just running on a single platform. They have OpenShift deployed on one or more public cloud providers, as well as deployments in their private data centers. In the latter case, we see businesses shifting from deployments on VMware to bare metal, which means now you have clusters running on both sets of infrastructure — and consuming storage differently in both cases.
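
That coupling shows up in something as small as a storageClassName. Here’s a minimal sketch; the class names are the typical platform defaults in recent OpenShift releases, but treat them as assumptions and verify with oc get storageclass on your own cluster:

```yaml
# The storageClassName is the coupling point. Typical platform defaults
# (verify with `oc get storageclass` on your cluster):
#   AWS:     gp3-csi
#   Azure:   managed-csi
#   vSphere: thin-csi
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: gp3-csi    # binds on this AWS cluster; fails anywhere else
```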

IBM Fusion provides a “write once, deploy anywhere” solution for this problem. Whether you are running OpenShift on public cloud, bare metal, VMware, Power, or Z, your applications are going to have a consistent way to access storage. They’ll use the same storage classes, they’ll get the same high availability, they’ll get the same encryption at rest. You won’t need to refactor how your applications consume storage when you push them from your on-prem dev/test environment to production in the cloud.
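
To make that concrete, here’s a sketch of a claim against the storage classes Data Foundation typically creates (ocs-storagecluster-cephfs for file, ocs-storagecluster-ceph-rbd for block); confirm the exact names on your cluster:

```yaml
# With Data Foundation installed, the same class names exist everywhere,
# so this manifest ports unchanged between on-prem and cloud clusters.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany            # RWX, backed by shared file storage
  resources:
    requests:
      storage: 20Gi
  storageClassName: ocs-storagecluster-cephfs   # file; ceph-rbd for block
```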

In that sense, Fusion virtualizes how storage is presented to your applications by all of these different underlying platforms. And that’s important, because you don’t want to be locked into a given infrastructure platform. You want to move your apps to where they need to run, and do it on your timeline. You need the ability to swap out your infrastructure platform without refactoring, so your application developers can focus on their domain expertise rather than on adjusting the application to infrastructure details.

Block, file, and object storage are provided consistently, wherever OpenShift is deployed.

So, how do you accomplish that? First, install Fusion and its Data Foundation service in your OpenShift cluster. Data Foundation is a software-defined storage provider built on open-source Ceph. It works with the underlying platform infrastructure to create a software-defined storage cluster. If you’re deployed on the public cloud, you’ll choose a tier of storage to use. If you’re on bare metal, Data Foundation will discover the underlying storage devices. And if you’re on VMware, it can either look for local devices or dynamically provision storage via vSphere volume abstractions.
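
Under the hood, that cluster is described by a StorageCluster resource. The sketch below shows roughly what a public cloud deployment might look like; the field shapes follow the Data Foundation documentation, but the sizes, counts, and class name are illustrative assumptions, not a drop-in config:

```yaml
# Sketch of a StorageCluster for a cloud deployment. The dataPVCTemplate's
# storageClassName picks the backing tier (gp3-csi here, as on AWS); a
# bare-metal install would point at a local-device class instead.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
    - name: ocs-deviceset
      count: 1                 # one set of `replica` OSD volumes
      replica: 3
      dataPVCTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 2Ti
          storageClassName: gp3-csi   # the platform-specific backing tier
          volumeMode: Block
```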

Regardless of platform, you’ll get block and file storage classes, as well as an object storage gateway, and they’ll provide a consistent experience wherever OpenShift runs. Just build the storage classes into your app deployments, and deploy the app wherever you need it.
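
The object side works the same way: instead of a PVC, an application files an ObjectBucketClaim against the gateway’s storage class. A minimal sketch, assuming the typical default class name for the gateway:

```yaml
# An ObjectBucketClaim asks for a bucket the way a PVC asks for a volume.
# Once bound, a ConfigMap and Secret with the claim's name carry the S3
# endpoint, bucket name, and credentials for the app to consume.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: app-bucket
spec:
  generateBucketName: app-bucket   # prefix; a unique name is generated
  storageClassName: openshift-storage.noobaa.io   # typical gateway class
```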

Check out the video below, where I show how to install Data Foundation in an OpenShift cluster running on bare metal.

Learn more about IBM Fusion and Data Foundation storage.

Ben Randall

I'm a software development architect, and I've focused my career on enterprise storage and container workloads.