Installing Ceph Object Storage on an Openshift 4.X cluster via the Rook Operator.

Andrew Stoycos
Published in The Startup
4 min read · May 26, 2020

Although modern cloud orchestration platforms such as Kubernetes and Openshift use stateless containers as their base functional unit, there are times data must be persisted. Ceph Object Storage (COS) provides easy file storage through a RESTful gateway that is compatible with both the S3 and Swift APIs. For the Realtime Edge Streaming Demo I needed to store analyzed video segments with Ceph, and this article explains the process I followed to deploy it on an Openshift cluster.

This setup was tested on Openshift 4.X clusters running on both AWS and custom Openstack-based PSI infrastructure. Installation and provisioning of Ceph object storage is handled by the Rook Operator. To begin, clone this article’s demo repository, or download a specific Rook release from source (I used v1.3.2). Then navigate to the demo or source code directory with:

cd rook-1.3.2/cluster/examples/kubernetes/ceph/
or
cd ceph-demo-resources/yamls

For this instance I decided to deploy the Ceph cluster on Persistent Volume Claims (PVCs), solely so that I would not have to worry about manual device creation or configuration when deploying across various vendors with different underlying hardware. To start, deploy the Ceph common resources with:

oc create -f common.yaml

Next, deploy the Rook operator for Openshift with:

oc create -f openshift-operator.yaml 

One thing to notice in the Openshift Rook operator deployment, compared to the plain Kubernetes one:

  • ROOK_HOSTPATH_REQUIRES_PRIVILEGED: Must be set to true due to Openshift’s added security restrictions.

Once the Rook operator pod is up, it is time to deploy the actual Ceph cluster on persistent volume claims with:

oc create -f cluster-on-pvc.yaml

In the default deployment yaml, the storageClassName is set to gp2. A storageClass lets infrastructure managers describe the types of resources available on the cluster for persistent storage. The class name gp2 refers to the default class on AWS Kubernetes instances; in order to deploy to the cluster’s default storageClass, remove the storageClassName field.
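For reference, the relevant portion of cluster-on-pvc.yaml looks roughly like the following (illustrative; the exact field layout may differ between Rook releases). Deleting the storageClassName line falls back to the cluster’s default storage class:

```yaml
# Excerpt (illustrative) from cluster-on-pvc.yaml: a Rook CephCluster
# whose OSD storage is backed by PVCs via a storageClassDeviceSet.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    storageClassDeviceSets:
    - name: set1
      count: 3
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          storageClassName: gp2   # remove this line to use the default class
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi
```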

Then to deploy the pods associated with the object storage portion of Ceph run:

oc create -f object-openshift.yaml

Finally, create a new Ceph Object Store user named ceph-demo-user by running:

oc create -f object-user.yaml
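For reference, object-user.yaml defines a CephObjectStoreUser resource along these lines (illustrative; the displayName is my own placeholder):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: ceph-demo-user
  namespace: rook-ceph
spec:
  store: my-store
  displayName: "ceph demo user"
```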

Now, after a couple of minutes the osd, mon, mgr, and rgw pods will be done deploying and your Ceph object storage should be up and running!

Before we can interact with the Ceph cluster from outside the Openshift cluster we also need to create a route that directs external traffic to Ceph with:

oc create -f route.yaml
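If you are not using the demo repo’s route.yaml, a minimal Route might look like the following. This is a sketch: the service name rook-ceph-rgw-my-store follows Rook’s rgw naming convention for an object store called my-store, so adjust it to match your deployment:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: ceph-route
  namespace: rook-ceph
spec:
  to:
    kind: Service
    name: rook-ceph-rgw-my-store
  port:
    targetPort: http
```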

To further check the status of the Ceph cluster, deploy the toolbox with:

oc create -f toolbox.yaml

Then run the following to access the toolbox container:

oc -n rook-ceph exec -it $(oc -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash

Followed by:

bash-4.2$ ceph status

Which should return the following, with status HEALTH_OK:

bash-4.2$ ceph status
  cluster:
    id:     7a5a2f97-af48-41d3-87a8-c4688bebab06
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 16m)
    mgr: a(active, since 15m)
    osd: 3 osds: 3 up (since 14m), 3 in (since 14m)
    rgw: 1 daemon active (my.store.a)

  data:
    pools:   7 pools, 80 pgs
    objects: 201 objects, 3.8 KiB
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:     80 active+clean

Now that the Ceph object storage cluster is up and running, we can interact with it via the S3 API, wrapped by a Python package; an example is provided in this article’s demo repo.

To start, we need to provide the example program, ceph.py, with the route to the Ceph cluster and the Ceph access/secret keys.

s3_endpoint_url = ""
s3_access_key_id = ""
s3_secret_access_key = ""
s3_bucket = 'mybucket'

The route can be found with:

oc get route ceph-route -o jsonpath={.spec.host}

And the Ceph keys can be found with:

oc -n rook-ceph get secret rook-ceph-object-user-my-store-ceph-demo-user -o jsonpath={.data}
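Note that the values stored in the secret are base64-encoded, so they need to be decoded before being pasted into ceph.py. A quick stdlib-only sketch (the JSON below mimics the jsonpath output; the key values are made-up placeholders):

```python
import base64
import json

# Example output of the `oc get secret ... -o jsonpath={.data}` command.
# The key values here are placeholders, not real credentials.
secret_data = '{"AccessKey": "QUtJQUlPU0ZPRE5ON0VYQU1QTEU=", "SecretKey": "c2VjcmV0"}'

# Decode every base64 value in the secret's data map.
decoded = {k: base64.b64decode(v).decode() for k, v in json.loads(secret_data).items()}
print(decoded["AccessKey"])  # -> AKIAIOSFODNN7EXAMPLE
```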

Finally, to run the demo, simply execute:

python ceph.py 

This demonstrates how to list, create, and delete Ceph buckets, as well as upload and download files.

[astoycos@localhost ceph-install-resources]$ python ceph.py 
Initial bucket List: []
Trying to make 'mybucket'
Updated bucket List: ['mybucket']
Upload some demo images to Ceph Object Storage
List files currently stored in ceph
demo-pic-1.jpeg was last modified at 2020-05-21 16:52:35.589000+00:00
demo-pic-2.jpeg was last modified at 2020-05-21 16:52:35.754000+00:00
demo-pic-3.jpeg was last modified at 2020-05-21 16:52:35.861000+00:00
demo-pic-4.jpeg was last modified at 2020-05-21 16:52:36.065000+00:00
Download files from ceph
Remove bucket from ceph
Downloaded images from ceph can be seen in 'demo-files-out/' directory
Delete downloaded files in `/demo-files-out`? y/n
y

Thanks for reading, and feel free to post any questions or issues on the article’s repository.
