Running your Database on OpenShift and CodeReady Containers

In this post, we’ll break down the steps to help you run your database on OpenShift, using CodeReady Containers. You’ll learn how to deploy the Cass-operator and use the Cassandra cluster.

Here’s an introductory rundown of setting up your database on OpenShift using your own hardware and Red Hat’s CodeReady Containers.

CodeReady Containers allows you to run OpenShift Kubernetes (K8s) locally, making it ideal for development and testing. Before proceeding, ensure you have a laptop or desktop of decent capability — preferably quad CPUs and 16GB+ RAM.

Let’s get started.

Download and install Red Hat’s CodeReady Containers

First, download and install Red Hat’s CodeReady Containers (version 1.27) as described in Red Hat OpenShift 4 on your laptop: Introducing Red Hat CodeReady Containers.

Now, configure CodeReady Containers from the command line.
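Assuming the `crc` binary is already on your PATH, the one-time setup step looks like this (a sketch; the command prepares the host and may prompt for telemetry consent depending on your version):

```shell
# Prepare the host machine for the CRC virtual machine
# (creates ~/.crc and checks virtualization support)
crc setup
```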

Check if the version is correct.
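For example:

```shell
# Print the installed CodeReady Containers version
crc version
```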

Next, start it, entering the pull secret copied from the download page when prompted. Startup takes around ten minutes.
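A sketch of the start command, assuming the pull secret was saved to a file (the `-p`/`--pull-secret-file` flag avoids the interactive prompt; the file path here is illustrative):

```shell
# Start the OpenShift cluster VM, supplying the pull secret
# downloaded from the Red Hat console
crc start -p ~/Downloads/pull-secret.txt
```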

The startup output includes the kubeadmin password, which is required by the following oc login … command.
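The login might look like the following, assuming the default CRC API endpoint (substitute the kubeadmin password printed by `crc start`):

```shell
# Put the bundled oc binary on the current shell's PATH
eval $(crc oc-env)

# Log in as kubeadmin using the password from the crc start output
oc login -u kubeadmin -p <kubeadmin-password> https://api.crc.testing:6443
```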

Open your browser and go to https://console-openshift-console.apps-crc.testing

Log in using the kubeadmin username and password, as you did with the oc login … command. You may have to try logging in a few times because of the self-signed certificate.

Once OpenShift has started and is running, you’ll see the following webpage:

Figure 1. An OpenShift Interface.

Use the following commands to help check status and the startup process:
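A few commands that are useful here (a sketch; output formats vary by version):

```shell
# Summary of the CRC VM, OpenShift status, and disk usage
crc status

# The single CRC node should report Ready
oc get nodes

# Cluster operators should all become Available as startup completes
oc get clusteroperators
```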

Before proceeding, open the CodeReady Containers Preferences dialog and increase CPUs to at least 12 and Memory to at least 14GB.

Figure 2. CodeReady Containers Preferences Dialog Box.

Create the OpenShift local volumes

Cassandra needs persistent volumes for its data directories. You can provide these in various ways in OpenShift: enabling local host paths as persistent volumes (as Rancher does), installing and using the OpenShift Local Storage Operator, or using persistent volumes on the various cloud provider backends.

For this blog post, we’ll use vanilla OpenShift local volumes backed by folders on the master K8s node.

Go to the “Terminal” tab for the master node and create the required directories. You’ll find the master node on the cluster nodes webpage:

https://console-openshift-console.apps-crc.testing/k8s/cluster/nodes/

Click on the node (it should be named something like crc-m89r2-master-0), and then click on the “Terminal” tab. In the terminal, execute the following commands:
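For example, directories like the following could back the persistent volumes (illustrative paths; they must match whatever the storage YAML in the next step declares):

```shell
# Run in the node's Terminal tab: create one directory per
# persistent volume and make them writable (paths are illustrative)
mkdir -p /mnt/cass-operator/pv000
mkdir -p /mnt/cass-operator/pv001
mkdir -p /mnt/cass-operator/pv002
chmod a+rwx /mnt/cass-operator/pv00*
```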

You’ll create Persistent Volumes with affinity to the master node, declared in the following YAML. The name of the master node varies from installation to installation; if yours isn’t named crc-gm7cm-master-0, use the following command to substitute the actual name.
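A sketch of the substitution, assuming a Linux-style `sed -i` (on macOS use `sed -i ''`):

```shell
# Look up the actual node name from the cluster
NODE=$(oc get nodes -o jsonpath='{.items[0].metadata.name}')

# Replace the placeholder node name in the storage YAML
sed -i "s/crc-gm7cm-master-0/${NODE}/g" cass-operator-1.7.0-openshift-storage.yaml
```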

Note: Before that, download the cass-operator-1.7.0-openshift-storage.yaml file and check the node name in the nodeAffinity sections against your current CodeReady Containers instance, updating it if necessary.

Create the Persistent Volumes (PV) and Storage Class (SC).
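For example:

```shell
# Create the persistent volumes and storage class from the edited YAML
oc apply -f cass-operator-1.7.0-openshift-storage.yaml
```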

To check the existence of the PVs.

To check the existence of the SC.
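Both checks are one-liners (resource names depend on the storage YAML; `server-storage` is the name used by the cass-operator examples):

```shell
# Persistent volumes should be listed with Available status
oc get pv

# The storage class (e.g. server-storage) should be listed
oc get storageclass
```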

You’ll find more information on these commands in the Red Hat documentation for OpenShift volumes.

Deploy the Cass-operator

At this stage, you’ll create the Cass-operator. After applying it, quickly execute the oc adm policy … commands in the following step so the pods have the privileges they need and are created successfully.
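A sketch of the two steps (the manifest URL and the service account names granted the privileged SCC are assumptions based on the cass-operator 1.7.0 release layout; adjust them to your setup):

```shell
# Apply the cass-operator manifests (URL is illustrative;
# use the manifest shipped with the 1.7.0 release)
kubectl apply -f https://raw.githubusercontent.com/k8ssandra/cass-operator/v1.7.0/docs/user/cass-operator-manifests.yaml

# Grant the service accounts in the cass-operator namespace the
# privileges the pods need (run these promptly after the apply)
oc adm policy add-scc-to-user privileged -z default -n cass-operator
oc adm policy add-scc-to-user privileged -z cass-operator -n cass-operator
```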

Next, check the deployment.
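For example:

```shell
# The cass-operator deployment should show 1/1 ready
kubectl -n cass-operator get deployments
```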

You’ll also want to check that the Cass-operator pod was created and is running successfully. Note that while we use the kubectl command here, oc and kubectl are interchangeable for all K8s actions.
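For example:

```shell
# The cass-operator pod should reach Running status
kubectl -n cass-operator get pods
```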

Troubleshooting tip: If the Cass-operator doesn’t end up in Running status, or if any pods in later sections fail to start, consider using the OpenShift UI Events webpage for easy diagnostics.

Set up the Cassandra cluster

The next step is to create the cluster. The following deployment file creates a 3-node cluster. It is largely a copy of the upstream Cass-operator version 1.7.0 example file example-cassdc-minimal.yaml, with a small modification that allows all the pods to be deployed to the same worker node (since CodeReady Containers uses only one K8s node by default).
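The key addition relative to the upstream example is `allowMultipleNodesPerWorker: true`. A trimmed sketch of the datacenter spec, with names and sizes following the upstream example (a setting like this typically requires explicit resource requests, included here for completeness):

```yaml
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1
spec:
  clusterName: cluster1
  serverType: cassandra
  serverVersion: "3.11.7"
  size: 3
  # CRC only has one node, so let all three Cassandra pods share it
  allowMultipleNodesPerWorker: true
  resources:
    requests:
      memory: 1Gi
      cpu: 500m
  storageConfig:
    cassandraDataVolumeClaimSpec:
      storageClassName: server-storage
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
  config:
    cassandra-yaml:
      authenticator: org.apache.cassandra.auth.PasswordAuthenticator
```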

Let’s watch the pods get created, initialize, and eventually run using the kubectl get pods … watch command.
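For example:

```shell
# Stream pod status updates until all Cassandra pods are Running and Ready
kubectl -n cass-operator get pods --watch
```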

Use the Cassandra cluster

With the Cassandra pods each up and running, the cluster is ready to be used. Test it out using the nodetool status command.
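For example, against the first pod (the pod name assumes the cluster1/dc1 naming from the deployment file):

```shell
# Run nodetool inside the first Cassandra pod;
# all three nodes should report UN (Up/Normal)
kubectl -n cass-operator exec -it cluster1-dc1-default-sts-0 -c cassandra \
  -- nodetool status
```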

Next, test out cqlsh. You’ll need to first get the CQL username and password for authentication.
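The superuser credentials live in a secret named after the cluster (the `cluster1-superuser` name below follows the `<clusterName>-superuser` convention and is an assumption):

```shell
# Decode the CQL superuser credentials from the secret
kubectl -n cass-operator get secret cluster1-superuser \
  -o go-template='{{.data.username | base64decode}} {{.data.password | base64decode}}{{"\n"}}'

# Open a CQL shell inside a Cassandra pod using those credentials
kubectl -n cass-operator exec -it cluster1-dc1-default-sts-0 -c cassandra \
  -- cqlsh -u <username> -p <password>
```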

Keep it clean

CodeReady Containers is simple to clean up, especially because it’s a packaging of OpenShift meant for development purposes only. To wipe everything, just delete the whole instance.
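For example:

```shell
# Stop and delete the entire CodeReady Containers VM and its data
crc delete
```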

On the other hand, if you only want to delete individual steps, carry out the following commands in order.
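A sketch of a step-by-step teardown, reversing the order things were created in (resource and file names follow the examples above and are assumptions):

```shell
# 1. Delete the Cassandra datacenter (the operator tears down the pods)
kubectl -n cass-operator delete cassandradatacenter dc1

# 2. Delete the Cass-operator (use the same manifest you applied earlier)
kubectl delete -f cass-operator-manifests.yaml

# 3. Delete the persistent volumes and storage class
oc delete -f cass-operator-1.7.0-openshift-storage.yaml
```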

And you’re done! You’ve just set up your database on OpenShift using RedHat’s CodeReady Containers. If you get stuck at any point or have a question, ping us on DataStax Developers Discord and we’ll lend you a hand.

Follow the DataStax Tech Blog for more developer stories and tutorials. Check out our YouTube channel for free workshops and follow DataStax Developers on Twitter for the latest news in our developer community.

Resources

  1. Red Hat OpenShift 4 on Your Laptop: Introducing Red Hat CodeReady Containers
  2. Using Volumes to Persist Container Data
  3. GitHub: K8ssandra/Cass-Operator
  4. What is Cass Operator? | DataStax Kubernetes Operator for Apache Cassandra


