Installing OpenShift Data Foundation on an OpenShift Satellite Cluster

How we added reliable storage for IBM Cloud Satellite clusters

Chirag Kikkeri
AI+ Enterprise Engineering
5 min read · Nov 2, 2022


The IBM Pre-Sales Engineering Lab often configures Satellite environments for clients, and one topic that comes up is the need for reliable storage for the Satellite clusters. To meet this need, the team turned to Red Hat’s OpenShift Data Foundation (ODF). ODF provides consistent block, file, and object storage with storage classes regardless of where the application is running, making it a great fit for the team.

The main prerequisite for ODF is a configured Satellite environment with a control plane and worker nodes. One important detail is that each worker node must have at least 16 vCPUs and 64 GB of RAM. This article uses the ODF docs as a reference.

Set Up Storage for Worker Nodes

The first step of the setup is to attach storage volumes to each of the worker nodes. On the IBM Cloud website, click the menu button at the top left, then go to VPC Infrastructure -> Virtual Server Instances and select the first worker node. Go to Storage volumes -> Attach. ODF needs two disks per node: one for the Object Storage Daemon (OSD) and one for monitoring (MON). I used 100 GB for OSD and 50 GB for MON, but my configuration was for testing purposes, so a full implementation may need more storage.

Attach the 100 GB OSD disk first, then repeat the attach step with a 50 GB volume for the MON disk. Then add both disks to the rest of the worker nodes.

Record Disk IDs

Now, we need to record the IDs of the disks we just created. First, sign in to IBM Cloud and OpenShift in the terminal. The next step is to debug each worker node and view its disk configuration. To do this, run commands along these lines (<node-name> is the name of your worker node):
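    # open a debug pod on the worker node
    oc debug node/<node-name>
    # inside the debug shell, switch to the host filesystem
    chroot /host
    # list the block devices attached to the node
    lsblk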

In the lsblk output, vde and vdf represent the two disks we created. If the output does not include the two disks, double-check that they were attached correctly. Next, we need the IDs of the disks. Run this command from the same debug shell:
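    # show the stable by-id symlinks for each disk
    ls -l /dev/disk/by-id/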

We need to record the IDs for ‘vde’ and ‘vdf’. For example, we would save ‘virtio-02b7-54f39ca7-262f-4’ for ‘vde’. Repeat this process for the other worker nodes.

Create ODF Configuration

Log in to IBM Cloud in the terminal and target the region of your Satellite location with commands like these (using us-east as an example):
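    ibmcloud login
    # target the region your Satellite location is managed from
    ibmcloud target -r us-east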

Next, view storage templates by running this command:
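    # list the storage templates available for Satellite
    ibmcloud sat storage template ls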

This lists the Satellite storage templates; in this case, we want odf-local. This guide uses version 4.8, so make sure that version is available.

In the next command, we will create the configuration. The bulleted list shows the required parameters and what to enter for each one. For more information on the parameters, see the Satellite storage docs.

  • name: name for storage config
  • template-name: odf-local
  • template-version: 4.8
  • location: name of Satellite location
  • ocs-cluster-name: name for cluster that will be created
  • iam-api-key: your IBM Cloud API key

The other parameters we need are the file paths for the storage disks we created. There is a way to have ODF auto-detect the disks by adding a parameter along these lines to the command (the exact name may vary by template version):
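    # assumed parameter name: lets the template discover eligible disks itself
    --param "auto-discover-devices=true"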

However, I had some problems with this configuration, so I instead entered the file paths manually. That leaves one parameter, ‘osd-device-path’, whose format is:
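    # one /dev/disk/by-id path per OSD disk, comma-separated across the worker nodes
    --param "osd-device-path=/dev/disk/by-id/<device-1>,/dev/disk/by-id/<device-2>,/dev/disk/by-id/<device-3>"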

Here, we replace <device-1> with our ID for the vde disk of worker node 1 (‘virtio-02b7-54f39ca7-262f-4’). Putting it all together, the final command looks roughly like this (flag names per the Satellite storage plugin; the config name and cluster name are placeholders):
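    ibmcloud sat storage config create --name odf-config \
      --template-name odf-local --template-version 4.8 \
      --location <location> \
      --param "ocs-cluster-name=<ocs-cluster-name>" \
      --param "iam-api-key=<api-key>" \
      --param "osd-device-path=/dev/disk/by-id/virtio-02b7-54f39ca7-262f-4,/dev/disk/by-id/<device-2>,/dev/disk/by-id/<device-3>"
    # depending on the template version, the MON disk paths may need to be
    # supplied the same way through a similar parameter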

If the configuration worked with no errors, the output should show that it was created successfully. Verify again by listing the storage configurations and making sure the new configuration appears:
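    ibmcloud sat storage config ls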

Apply ODF Configuration to Cluster

Now that the configuration is created, we can apply it to the cluster. The first step is to get the cluster ID, which you can read from the output of:
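    # the ID column of the output has the cluster ID
    ibmcloud oc cluster ls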

The command to apply the configuration looks like this (per the Satellite storage plugin’s assignment subcommand):
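    ibmcloud sat storage assignment create --cluster <cluster> --config <config> --name <name>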

Here, <cluster> is the cluster ID, <config> is the name of the configuration we created, and <name> is what we want to name the assignment.

Now, we can run some commands to verify that the resources were created correctly. For example (assuming ODF’s components land in the standard openshift-storage namespace):
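    # the ODF pods should all reach Running status
    oc get pods -n openshift-storage
    # the ODF storage classes should be listed
    oc get sc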

If you are able to see all the correct outputs, you have successfully installed ODF!

Summary

This article demonstrated how to install OpenShift Data Foundation on a satellite cluster. With the ability to have reliable block, file, and object storage, you can take your application to the next level with ODF.
