How to install IBM Cloud Object Storage Kubernetes Plugin

Raju Pavuluri
4 min read · Jul 17, 2021


I was playing with an ML program on my Kubernetes cluster. The program was supposed to scale and score a ton of data. So the first question was where to store this data and how to access it from my cluster. There were several solutions out there; I wanted to try IBM Cloud Object Storage (COS) first. The first step for me was to install a plugin that would enable my Kubernetes cluster to access COS data.

This tutorial has all the instructions needed to get the IBM Cloud Object Storage plugin installed onto your Kubernetes cluster. This article assumes you have Docker installed and a Kubernetes cluster running. My Kubernetes cluster is at version 1.20.2. When I tried to install the plugin and the required binaries, I had to make some changes to get the process working on my cluster, as explained below.

As specified in the plugin documentation, the IBM Cloud Object Storage plug-in is a Kubernetes volume plug-in that enables Kubernetes pods to access IBM Cloud Object Storage buckets. The plug-in has two components: a dynamic provisioner and a FlexVolume driver for mounting the buckets using s3fs-fuse on a worker node. So, we will install the s3fs binary and build the provisioner to install the plugin.

COS architecture image from https://github.com/IBM/ibmcloud-object-storage-plugin

Building s3fs binary

Note: These are instructions for RHEL/CentOS 7. For other versions of Linux, please visit https://github.com/s3fs-fuse/s3fs-fuse/wiki/Installation-Notes to get installation instructions.

s3fs allows Linux to mount an S3 bucket via FUSE. While you can technically install s3fs using yum, I had trouble getting it to work that way. So, I chose to build the code and install it myself.

Install the necessary prerequisites.

yum install gcc libstdc++-devel gcc-c++ fuse fuse-devel curl-devel libxml2-devel mailcap git automake make
yum install openssl-devel

Now clone the s3fs repository and build the binaries.

git clone https://github.com/s3fs-fuse/s3fs-fuse
cd s3fs-fuse/
./autogen.sh
make
make install

Building a Docker image with the plugin

We are now going to build the actual plugin. Again, there are Helm charts available to install this plugin, but they appear to be customized for clusters running in the cloud. So, if you run a cluster on your own, as I do in this case, I suggest you build the plugin, customize it, and install it following these instructions. For this to work, Go and Glide are prerequisites.

Install Go by following these instructions: https://golang.org/doc/install

Install Glide by following these instructions: https://glide.readthedocs.io/en/latest/

Set the GOPATH environment variable in your Linux terminal.

export GOPATH=$HOME/go
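It can also help to put $GOPATH/bin on your PATH so that any Go tools you install along the way are found. A minimal sketch, assuming a bash-like shell (add these lines to your shell profile, e.g. ~/.bashrc, if you want them to persist across sessions):

```shell
# Set GOPATH for the current session and expose its bin directory on PATH
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin

# Ensure the bin directory exists so installed tools have somewhere to land
mkdir -p "$GOPATH/bin"
```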

Clone the repository.

mkdir -p $GOPATH/src/github.com/IBM
mkdir -p $GOPATH/bin
cd $GOPATH/src/github.com/IBM/
git clone https://github.com/IBM/ibmcloud-object-storage-plugin.git
cd ibmcloud-object-storage-plugin

Build the provisioner image and the driver binary.

make
make provisioner
make driver

Verify that a Docker image for the COS plugin was built.

docker images | grep ibmcloud-object-storage-plugin

Push the provisioner container image to the image repository currently used in your Kubernetes cluster.

docker tag ibmcloud-object-storage-plugin:latest <your-docker-repo/your-namespace>/ibmcloud-object-storage-plugin:latest
docker push <your-docker-repo/your-namespace>/ibmcloud-object-storage-plugin:latest

Installation of the plugin

These instructions are as documented in the plugin repo, but with some customizations for them to work on our cluster.

Make the driver binaries available on every worker node

Make the driver binary that you just built above, ibmc-s3fs, available on every worker node, for example by copying it under /tmp.

Now, on every worker node, execute the following commands to copy the driver binary ibmc-s3fs to the Kubernetes plugin directory. Then restart kubelet for the changes to take effect.

$ sudo mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ibm~ibmc-s3fs
$ sudo cp /tmp/ibmc-s3fs /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ibm~ibmc-s3fs
$ sudo chmod +x /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ibm~ibmc-s3fs/ibmc-s3fs
$ sudo systemctl restart kubelet
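If you have SSH access to the workers, the staging step (getting the binary into /tmp on every node) can be scripted. This is only a sketch: the hostnames and the DRIVER path are hypothetical, and it echoes the commands instead of running them; remove the echo and set your real node names and binary location to actually copy.

```shell
# Dry-run sketch of staging the driver binary on each worker node.
# NODES and DRIVER are placeholders; adjust them for your cluster,
# then remove 'echo' to execute the copies for real.
NODES="worker1 worker2 worker3"
DRIVER=/path/to/ibmc-s3fs   # wherever 'make driver' left the binary

for node in $NODES; do
  echo scp "$DRIVER" "$node:/tmp/ibmc-s3fs"
done
```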

Some changes to provisioner.yaml

As I mentioned earlier, this didn’t quite work right away. I needed to make some changes to the provisioner yaml file.

Locate the provisioner.yaml file in the plugin directory that you just built. It should be in the deploy subfolder. Open it in an editor.

Change the API version.

apiVersion: apps/v1

Add a selector under the spec section in the yaml file.

selector:
  matchLabels:
    app: ibmcloud-object-storage-plugin

Change the location of the plugin image under the containers section to where you pushed it to.

containers:
- name: ibmcloud-object-storage-plugin-container
  image: <your-docker-repo/your-namespace>/ibmcloud-object-storage-plugin:latest

Add an image pull secret under spec section, if required (depending on how you normally pull images to your cluster).

imagePullSecrets:
- name: <your-docker-image-pull-secret>
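Taken together, the edited portion of provisioner.yaml ends up looking roughly like the sketch below. The metadata names and replica count here are illustrative; keep whatever the repo's file already uses, and substitute your own image location and pull secret.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ibmcloud-object-storage-plugin
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ibmcloud-object-storage-plugin
  template:
    metadata:
      labels:
        app: ibmcloud-object-storage-plugin
    spec:
      containers:
      - name: ibmcloud-object-storage-plugin-container
        image: <your-docker-repo/your-namespace>/ibmcloud-object-storage-plugin:latest
      imagePullSecrets:
      - name: <your-docker-image-pull-secret>
```

Note that moving to apiVersion apps/v1 is what makes the selector mandatory: the older extensions/v1beta1 Deployments defaulted the selector from the template labels, while apps/v1 requires it to be stated explicitly and to match the template labels.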

Create the provisioner

$ kubectl create -f deploy/provisioner-sa.yaml
$ kubectl create -f deploy/provisioner.yaml

Create the storage class

$ kubectl create -f deploy/ibmc-s3fs-standard-StorageClass.yaml

Verify the plugin installation

$ kubectl get pods -n kube-system | grep object-storage
ibmcloud-object-storage-plugin-7c96f8b6f7-g7v98 1/1 Running 0 28s

Verify the storage class

$ kubectl get storageclass |grep s3
ibmc-s3fs-standard ibm.io/ibmc-s3fs

Now you have the IBM Cloud Object Storage plugin installed on your cluster. We will discuss its usage in the next article.
