IBM Storage Fusion HCI: A peek inside the OpenShift Container Platform bare metal installation

Anshu Garg
4 min read · Mar 4, 2024
Figure 1: IBM Storage Fusion HCI, a bare metal OpenShift container platform

IBM Storage Fusion HCI is OpenShift shipped in a box to your data centre, with compute, storage, and network included. It is a container-native hybrid cloud data platform, ideal for simplified deployment and data management for Kubernetes applications on OpenShift.

The foundation of Fusion HCI is OpenShift installed on bare metal with the installer-provisioned infrastructure (IPI) method from Red Hat, which simplifies creation and expansion of the cluster. In this article I’ll give you a peek inside the Fusion HCI installation process.

The initial three-node OpenShift and Fusion HCI deployment can be broken down into three broad categories:

  • Network configuration
  • Three-node OpenShift cluster deployment
  • Fusion software deployment

In this article our focus is OpenShift; network configuration deserves its own post.

At the base of Fusion HCI is OpenShift Container Platform on bare metal, deployed with IPI. Let’s quickly understand what an IPI installation is. You can view it as a two-phase process.

In the first phase, the installer-provisioned installation deploys and configures the infrastructure that the OpenShift cluster runs on. A temporary OpenShift control plane is created, the etcd cluster is formed on the three control plane nodes, and an API virtual IP address (VIP) provides failover of the API server across those nodes. During this phase, the API VIP resides on the bootstrap VM.

Figure 2: OpenShift bare metal IPI phase 1 network diagram

In the second phase of the installer-provisioned installation, the bootstrap VM is removed automatically and the virtual IP addresses (VIPs) move to the appropriate nodes.

In a standard IPI deployment, the Ingress VIP moves to the worker nodes at this point. However, since the Fusion HCI OpenShift cluster has only three control plane nodes at this stage, the Ingress VIP also moves to the control plane nodes, which act as compute nodes until more compute nodes are added to the cluster later.
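If you are curious where the VIPs actually live at any given moment: on bare metal IPI clusters the VIPs are managed by keepalived static pods in the openshift-kni-infra namespace. A quick way to check, sketched below with a placeholder node name and VIP:

```bash
# List the keepalived pods that manage the API and Ingress VIPs;
# the NODE column shows which nodes can currently hold a VIP.
oc -n openshift-kni-infra get pods -o wide

# On a given control plane node, check whether it holds the API VIP.
# "master-0" and 10.0.0.5 are placeholders; use your node name and VIP.
oc debug node/master-0 -- chroot /host ip -br addr | grep 10.0.0.5
```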

Figure 3: OpenShift bare metal IPI phase 2 network diagram

Some key terms:

  • Provisioner: Physical machine that runs the installation program and hosts the bootstrap VM that deploys the control plane of a new OpenShift Container Platform cluster.
  • Bootstrap VM: A virtual machine used in the process of deploying an OpenShift Container Platform cluster.
  • Network bridges: The bootstrap VM connects to the bare metal network and to the provisioning network via network bridges on the provisioner, attached to physical interfaces such as eno1 and eno2.
  • API VIP: An API virtual IP address (VIP) is used to provide failover of the API server across the control plane nodes.
  • Ingress VIP: An Ingress virtual IP address (VIP) is used to provide failover of the Ingress across the compute nodes.

Now that we have learnt the basic IPI mechanism, it’s time to understand how Fusion HCI leverages it for a simplified, reliable, and consistent on-prem installation of OpenShift. Fusion HCI comes with a minimum of six nodes. It uses both the bare metal and provisioning networks, and deploys CoreOS on the nodes via PXE boot over the provisioning network (a sketch of the corresponding install-config settings follows figure 4). As seen in figure 4, each rack has a compute-storage node at rack unit (RU) 7 that serves as the provisioner node.

Figure 4: Fusion rack with 10 nodes and 4 switches
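In plain IPI terms, this maps to the platform.baremetal section of install-config.yaml, which the Fusion installer generates for you. A minimal, illustrative sketch; the interface name, CIDR, and BMC details are placeholders, not Fusion HCI defaults:

```bash
# Illustrative excerpt of an IPI install-config.yaml; all values are placeholders.
cat >> install-config.yaml <<'EOF'
platform:
  baremetal:
    provisioningNetworkInterface: eno2    # NIC attached to the provisioning network
    provisioningNetworkCIDR: 172.22.0.0/24
    hosts:
    - name: control-0
      role: master
      bootMACAddress: 52:54:00:00:00:01   # MAC that PXE-boots CoreOS
      bmc:
        address: redfish://10.10.10.1/redfish/v1/Systems/1
        username: admin
        password: secret
EOF
```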

The bootstrap VM is a KVM virtual machine created on the provisioner node during phase 1 of the OpenShift installation.
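While phase 1 is in progress, you can see it as a libvirt guest on the provisioner; once phase 2 starts, it disappears again:

```bash
# Run on the provisioner node: the bootstrap VM appears as a libvirt
# guest during phase 1 and is destroyed automatically in phase 2.
sudo virsh list
```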

The network bridges on the provisioner node are created out of the box when it ships from the IBM factory.

Both VIPs must be reserved by your data centre network team and must be in the same CIDR as your OpenShift machine network (the bare metal network).
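In install-config.yaml terms, the reserved addresses must fall inside the machine network CIDR. An illustrative sketch, with placeholder addresses (older releases use the singular apiVIP/ingressVIP fields instead):

```bash
# Illustrative: both VIPs must be inside machineNetwork's CIDR.
cat >> install-config.yaml <<'EOF'
networking:
  machineNetwork:
  - cidr: 10.0.0.0/24    # bare metal network
platform:
  baremetal:
    apiVIPs:
    - 10.0.0.5           # reserved API VIP
    ingressVIPs:
    - 10.0.0.6           # reserved Ingress VIP
EOF
```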

When the installation process is launched on Fusion HCI, the fully automated installer collects data about your data centre from the provisioner’s network configuration and runs numerous validations upfront to check that the critical prerequisites for a successful OpenShift installation are met. This delivers the most important value of the platform: a simplified OpenShift cluster installation with best practices built in.
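To give a flavour of what such validations cover, here is a hand-rolled sketch of two classic bare metal IPI prerequisites, DNS records and unassigned VIPs. The domain and addresses are hypothetical, and the real installer checks far more than this:

```bash
#!/bin/bash
# Hypothetical preflight checks; all values below are placeholders.
CLUSTER_DOMAIN="mycluster.example.com"
API_VIP="10.0.0.5"
INGRESS_VIP="10.0.0.6"

# DNS: api.<domain> should resolve to the API VIP, and any name under
# apps.<domain> should resolve to the Ingress VIP.
dig +short "api.${CLUSTER_DOMAIN}"
dig +short "test.apps.${CLUSTER_DOMAIN}"

# The VIPs must be reserved but not yet assigned: a ping reply before
# installation means another device already owns the address.
ping -c 2 -W 1 "${API_VIP}"     && echo "WARNING: API VIP already in use"
ping -c 2 -W 1 "${INGRESS_VIP}" && echo "WARNING: Ingress VIP already in use"
```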

With the collected data and the user inputs provided in the installation wizard, the installer generates the cluster manifest and starts cluster initialisation. This involves creating a KVM virtual machine (the bootstrap) on the provisioner. On this VM, Ironic APIs are used to inspect and prepare the infrastructure for the control plane by deploying CoreOS using PXE boot. During this process, BareMetalHost resources are created, which in turn are used to create Machine objects and OCP nodes.
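You can watch this BareMetalHost → Machine → Node chain from any terminal with cluster-admin access:

```bash
# BareMetalHost resources progress through registering, inspecting,
# provisioning, and provisioned states as Ironic works on them.
oc get baremetalhosts -n openshift-machine-api

# Each provisioned host backs a Machine object...
oc get machines -n openshift-machine-api

# ...which eventually joins the cluster as a node.
oc get nodes
```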

Once the three-node OCP cluster is created, post-installation OCP configuration is performed based on the choices made in the installation wizard. For example, if an ingress certificate was provided, the custom ingress certificate is configured; if proxy details were specified, a cluster-wide proxy is set up.
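The Fusion installer drives these steps for you; for illustration, the equivalent manual commands look roughly like this (certificate files, secret name, and proxy endpoints are placeholders):

```bash
# Custom ingress certificate: store it as a TLS secret, then point the
# default IngressController at that secret.
oc create secret tls custom-ingress-cert \
  --cert=ingress.crt --key=ingress.key -n openshift-ingress
oc patch ingresscontrollers/default -n openshift-ingress-operator \
  --type=merge \
  -p '{"spec":{"defaultCertificate":{"name":"custom-ingress-cert"}}}'

# Cluster-wide proxy: patch the cluster Proxy object.
oc patch proxy/cluster --type=merge -p '{
  "spec": {
    "httpProxy":  "http://proxy.example.com:3128",
    "httpsProxy": "http://proxy.example.com:3128",
    "noProxy":    ".cluster.local,.example.com"
  }
}'
```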

Once the applicable post-installation tasks are completed, additional OCP network configuration is done to enable the Fusion management software to communicate with the bare metal hosts over the Fusion HCI management network.

At this stage, you have a functional three-node OpenShift cluster with CoreOS running on the nodes, and you are ready to add more compute nodes (also running CoreOS) just as easily from the Fusion GUI.
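Behind the GUI, adding a compute node to a bare metal IPI cluster essentially boils down to registering a new BareMetalHost and scaling the worker MachineSet. A hedged sketch with placeholder names, MAC, and BMC address:

```bash
# Register the new host (credentialsName references a Secret holding
# the BMC username and password; all values below are placeholders).
cat <<'EOF' | oc apply -f -
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-1
  namespace: openshift-machine-api
spec:
  online: true
  bootMACAddress: 52:54:00:00:00:0a
  bmc:
    address: redfish://10.10.10.7/redfish/v1/Systems/1
    credentialsName: worker-1-bmc-secret
EOF

# Once the host is available, scale the worker MachineSet by one; the
# machine API claims the new host and PXE-boots CoreOS onto it.
oc scale machineset -n openshift-machine-api <cluster-id>-worker-0 --replicas=2
```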

To learn more about IBM Fusion HCI, visit https://ibm.github.io/storage-fusion/fusion-hci/overview

Disclaimer: The above article is personal and does not necessarily represent IBM’s positions, strategies, or opinions.
