Deploying and monitoring a Redis cluster on Oracle Container Engine for Kubernetes (OKE)

Ali Mukadam
Oracle Developers
Apr 26, 2019

In the previous post, we added a simple extension to the terraform-oci-oke project so that it uses the Redis helm chart to deploy a Redis cluster on Kubernetes.

In this post, we will attempt something a bit more ambitious:

  • deploy a Redis Cluster as in the previous post
  • monitor the Redis cluster with Prometheus
  • populate the Redis cluster with existing data using Redis Mass Insertion
  • visualize the mass insertion process with Grafana

For the sake of convenience, we will do a manual deployment of Prometheus and Redis. However, if you are using the terraform-oci-oke module (or any Kubernetes cluster for that matter), you can achieve the same result by using the helm provider as described in the previous post.

Architecture

Conceptually, this is what we are trying to do:

Deploy Prometheus Operator

Create a namespace for Prometheus:

kubectl create namespace monitoring

If you are using the terraform-oci-oke module and have provisioned the bastion host, helm is already installed and pre-configured for you. Just login to the bastion and deploy the Prometheus operator:

helm install --namespace monitoring \
stable/prometheus-operator \
--name prom-operator \
--set kubeDns.enabled=true \
--set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false \
--set coreDns.enabled=false \
--set kubeControllerManager.enabled=false \
--set kubeEtcd.enabled=false \
--set kubeScheduler.enabled=false

Setting serviceMonitorSelectorNilUsesHelmValues to false ensures that all ServiceMonitors will be selected.
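You can verify this on the Prometheus custom resource itself; with the setting above, the serviceMonitorSelector should come back empty (i.e. match everything):

kubectl -n monitoring get prometheus -o jsonpath='{.items[0].spec.serviceMonitorSelector}'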

Get a list of pods and identify the Prometheus pods:

kubectl -n monitoring get pods | grep prometheus
alertmanager-prom-operator-prometheus-o-alertmanager-0   2/2   Running   0   18s
prom-operator-prometheus-node-exporter-9xhzr             1/1   Running   0   24s
prom-operator-prometheus-node-exporter-qtbvv             1/1   Running   0   24s
prom-operator-prometheus-node-exporter-wjbfp             1/1   Running   0   24s
prom-operator-prometheus-o-operator-79ff98787f-4t4k7     1/1   Running   0   23s
prometheus-prom-operator-prometheus-o-prometheus-0       3/3   Running   1   11s

In another terminal, set your local KUBECONFIG environment variable and run kubectl port-forward locally to access the Prometheus Expression Browser:

export KUBECONFIG=generated/kubeconfig
kubectl -n monitoring port-forward prometheus-prom-operator-prometheus-o-prometheus-0 9090:9090

Open your browser and access the Prometheus Expression Browser to verify the targets at http://localhost:9090/targets

Prometheus targets
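You can also query the same information through the Prometheus HTTP API, which is handy from a terminal; this assumes you have jq available:

curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[].labels.job'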

Next, we want to verify that Grafana has been configured properly and already has Prometheus as a datasource. Get a list of pods and identify the Grafana pods:

kubectl -n monitoring get pods | grep grafana
prom-operator-grafana-77cdf86d94-m8pv5   2/2   Running   0   57s

Run kubectl port-forward locally to access Grafana:

kubectl -n monitoring port-forward prom-operator-grafana-77cdf86d94-m8pv5 3000:3000

Access Grafana by pointing your browser to http://localhost:3000

Log in with admin/prom-operator (the default username and password if you have not changed them). You should be able to see the default Kubernetes dashboards.
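If the default credentials have been changed (or you just want to double-check them), the chart stores the admin password in a secret; assuming the release name prom-operator, it can be read back with:

kubectl -n monitoring get secret prom-operator-grafana -o jsonpath="{.data.admin-password}" | base64 --decode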

Deploy Redis Cluster

Create a namespace for redis:

kubectl create namespace redis 

Use helm to deploy the Redis cluster:

helm install --namespace redis \
stable/redis \
--name redis \
--set cluster.enabled=true \
--set cluster.slaveCount=3 \
--set master.persistence.size=50Gi \
--set slave.persistence.size=50Gi \
--set metrics.enabled=true \
--set metrics.serviceMonitor.enabled=true \
--set metrics.serviceMonitor.namespace=monitoring
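Since we set metrics.serviceMonitor.namespace to monitoring, the chart should have created a ServiceMonitor in that namespace, which you can confirm with:

kubectl -n monitoring get servicemonitors | grep redis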

Access the Prometheus Expression Browser again and verify that Redis is now listed as one of the targets:

Prometheus now with Redis target

Import Redis Dashboard for Grafana

Log into Grafana again as above, click the ‘+’ icon in the left menu to import a dashboard, and enter the dashboard id 2751 in the Grafana.com dashboard field:

After the dashboard is loaded, select the Prometheus datasource:

and then click on Import. You should now have a functioning Redis dashboard in Grafana:

Mass Insert Data into Redis

Let’s now do a mass insertion of data into Redis. I found this neat gem to load a csv into redis.

Given a csv file of the following format:

id, first name, age, gender, nickname, salary
1, John Smith, 40, Male, John, 10000
2, Marco Polo, 43, Male, Marco, 10000
….
1999999, Tom Cruse, 50, Male, Tom, 10001

the following command can be used to import it into Redis:

awk -F, 'NR > 1{ print "SET", "\"employee_"$1"\"", "\""$0"\"" }' file.csv | redis-cli --pipe
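To preview the commands this awk one-liner produces before piping them to redis-cli, drop the pipe; for the sample rows above it prints SET commands like these:

awk -F, 'NR > 1{ print "SET", "\"employee_"$1"\"", "\""$0"\"" }' file.csv | head -2
SET "employee_1" "1, John Smith, 40, Male, John, 10000"
SET "employee_2" "2, Marco Polo, 43, Male, Marco, 10000"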

First, we have to generate the dataset. We will be using the mimesis package:

pip install mimesis

We will also adapt the schema a little so we can make use of the fields mimesis provides, and create the csv file with the following Python script:

import csv
from mimesis import Person

en = Person('en')

# Generate 100,000 fake employee records and write them to file.csv.
with open('file.csv', mode='w') as csv_file:
    field_names = ['id', 'full name', 'age', 'gender', 'username', 'weight']
    writer = csv.DictWriter(csv_file, fieldnames=field_names)
    writer.writeheader()
    for n in range(100000):
        writer.writerow({'id': str(n), 'full name': en.full_name(), 'age': str(en.age()), 'gender': en.gender(), 'username': en.username(), 'weight': str(en.weight())})

Run the python script to generate the data:

python names.py

This will create a file.csv in the current directory. You could configure a PersistentVolume to store and load the data, but for the purpose of this exercise we will do a quick hack and install redis on the bastion:

sudo yum install redis -y

This will allow us to use the redis-cli from the bastion where we have generated/uploaded the file.csv.

On the bastion, get a list of Redis pods:

kubectl -n redis get pods
NAME                             READY   STATUS    RESTARTS   AGE
redis-master-0                   1/1     Running   0          156m
redis-metrics-794db76ff7-xmd2q   1/1     Running   0          156m
redis-slave-7fd8b55f7-25w8d      1/1     Running   1          156m
redis-slave-7fd8b55f7-hvhmc      1/1     Running   1          156m
redis-slave-7fd8b55f7-mjq8q      1/1     Running   1          156m

and use port-forward so you can access it using the redis-cli:

kubectl -n redis port-forward redis-master-0 6379:6379
Forwarding from 127.0.0.1:6379 -> 6379

Open a new terminal, login into the bastion and obtain the Redis password:

export REDIS_PASSWORD=$(kubectl get secret --namespace redis redis -o jsonpath="{.data.redis-password}" | base64 --decode)

Do a quick test to verify that you can connect to Redis:

redis-cli -a $REDIS_PASSWORD
127.0.0.1:6379> ping
PONG
127.0.0.1:6379>

Before we import the csv, access Grafana (http://localhost:3000) as described above by opening a third terminal and running kubectl port-forward locally. Browse to the Redis Dashboard and set the refresh interval to every 5 seconds:

kubectl -n monitoring port-forward prom-operator-grafana-77cdf86d94-m8pv5 3000:3000

Now import the csv file as follows:

awk -F, 'NR > 1{ print "SET", "\"employee_"$1"\"", "\""$0"\"" }' file.csv | redis-cli -a $REDIS_PASSWORD --pipe
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 1000000

and watch the Redis dashboard in Grafana. You can see the immediate jump in Network IO, the number of items in the DB as well as the amount of memory used.

Redis Dashboard after mass insertion
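You can also verify the result directly with the DBSIZE command, which returns the number of keys in the current database; it should match the number of rows you inserted:

redis-cli -a $REDIS_PASSWORD dbsize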

While we installed the Prometheus Operator and the Redis cluster manually using the CLI, you can also achieve this with the Terraform helm provider. As you are enabling monitoring on Redis, you need to ensure the relevant CRDs are created first. When you do this manually and in the order above, this is done for you.

However, when you use Terraform to do the provisioning, you will need to explicitly set the order as follows:

resource "helm_release" "prometheus-operator" {
...
...
...
}resource "helm_release" "redis" { depends_on = ["helm_release.prometheus-operator"]
...
...
...
}

By doing the above, you ensure that the prometheus-operator release is created first along with the necessary CRDs that the redis release will need (e.g. Alertmanager, Prometheus, PrometheusRule, ServiceMonitor) in order for Prometheus to be able to monitor the Redis cluster.
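As a rough sketch, the redis release could carry the same values we passed to helm on the command line earlier (the chart name and set attributes below simply mirror that invocation; adapt them to your setup):

resource "helm_release" "redis" {
  depends_on = ["helm_release.prometheus-operator"]
  name       = "redis"
  chart      = "stable/redis"
  namespace  = "redis"

  set {
    name  = "metrics.enabled"
    value = "true"
  }

  set {
    name  = "metrics.serviceMonitor.enabled"
    value = "true"
  }

  set {
    name  = "metrics.serviceMonitor.namespace"
    value = "monitoring"
  }
}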
