IBM Spectrum Scale CNSA + Grafana on RedHat OCP

Ole Kristian Myklebust
Published in Possimpible
13 min read · Feb 19, 2021

The first release of the containerized version of Spectrum Scale, also called CNSA, came in December, and with it an early release of the Grafana bridge for CNSA.

Update 11.01.2022: this article is outdated; a new one will come soon.

IBM Spectrum Scale bridge for Grafana

Grafana Bridge is a standalone Python application. It translates the IBM Spectrum Scale metadata and performance data collected by the IBM Spectrum Scale performance monitoring tool (ZiMon) into query requests accepted by Grafana's integrated OpenTSDB plugin.

Picture from IBM GitHub

With this version, we can run the Scale bridge and Grafana pods inside OpenShift.

Example dashboard:

Note: The Grafana examples that are currently available were not created for CNSA and remote-mounted systems, so most of the graphs will not work.
Grafana examples for CNSA will eventually be created, or you can create your own 👷

Also, the Grafana bridge is not yet supported by IBM for production, so don't run this in production 😄

Downloading and Building container images and push to container registry ☁️

On the host running docker/podman perform the following steps:

  1. Clone this repository using git in your favourite directory
[root@ocp-admin grafana-scale]# git clone https://github.com/IBM/ibm-spectrum-scale-bridge-for-grafana.git grafana_bridge
Output:
Cloning into 'grafana_bridge'...
remote: Enumerating objects: 298, done.
remote: Counting objects: 100% (298/298), done.
remote: Compressing objects: 100% (246/246), done.
remote: Total 430 (delta 142), reused 172 (delta 42), pack-reused 132
Receiving objects: 100% (430/430), 1.34 MiB | 0 bytes/s, done.
Resolving deltas: 100% (202/202), done.

2. Create the bridge container image.

[root@ocp-admin grafana-scale]# cd grafana_bridge/source
[root@ocp-admin source]# podman build -t bridge_image:gpfs510 .
Output:
STEP 1: FROM registry.access.redhat.com/ubi8/python-36
Getting image source signatures

Loading Images 💾

Obtain the route to the container registry

The following command is used to obtain the route to the integrated Red Hat OpenShift Container Platform image registry. If you are using a site-managed container registry, obtain and note the route to your container registry.

The HOST variable that is referenced holds the route and is referenced in the later steps to push the container images to the destination container registry.

  1. Switch to a Red Hat OpenShift Container Platform ADMIN account which has the permission to execute oc get routes:
oc login -u kubeadmin
oc login -u kubeadmin --certificate-authority=/etc/pki/ca-trust/source/anchors/a0806582.0

2. Set the HOST variable to the image registry route:

HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')

3. Validate that HOST is not empty:

[root@ocp-admin ocp45-test2]# echo $HOST
Output: default-route-openshift-image-registry.apps.ocp4.oslo.forum.com

Load container images and push to container registry ☁️

1. Log in to the user account created above that has access to the Red Hat OpenShift Container Platform Image Registry:

PS: I don't have a trusted certificate, so I need to point to --certificate-authority.

USERNAME="user1"
oc login -u $USERNAME --certificate-authority=/etc/pki/ca-trust/source/anchors/*

2. I will use Podman to push the bridge container image. If you don't have Podman, just install it with yum install podman.

USERNAME="user1"
oc login -u $USERNAME --certificate-authority=/etc/pki/ca-trust/source/anchors/a0806582.0

3. Tag the image. (Maybe use a better name than gpfs510 😄)

podman tag bridge_image:gpfs510 $HOST/ibm-spectrum-scale-ns/gpfs510

4. Push the image.

podman push $HOST/ibm-spectrum-scale-ns/gpfs510 --tls-verify=false
Output:
Getting image source signatures
Copying blob eb7bf34352ca done
Copying blob aa333e9b2209 done

Validate the container images 💾

The following steps are only applicable when using the Red Hat OpenShift Container Platform integrated container registry.

  1. Log in to an administrator account:
oc login -u kubeadmin --certificate-authority=/etc/pki/ca-trust/source/anchors/a0806582.0

2. Ensure that the ImageStream contains the images:

for image in `oc get is -o custom-columns=NAME:.metadata.name --no-headers`; do
echo "---"
oc get is $image -o yaml | egrep "name:|dockerImageRepository"
done
Output:
---
name: gpfs510
dockerImageRepository: image-registry.openshift-image-registry.svc:5000/ibm-spectrum-scale-ns/gpfs510

IBM Spectrum Scale bridge for Grafana deployment in a k8s/OCP environment

Dependencies

These instructions can be used for:

  • IBM Spectrum Scale cloud-native (CNSA) clusters at minimum release level 5.1.0.1 and above
  • the IBM Spectrum Scale bridge for Grafana image, as we built earlier.
  1. Make sure you have deployed the IBM Spectrum Scale Container Native Storage Access (CNSA) cluster including the ibm-spectrum-scale-pmcollector pods. For more information about how to deploy a CNSA cluster please refer to the IBM Spectrum Scale Knowledge Center
  2. This is how it should look with v5.1.0.1 of CNSA:
oc get po -o wide -n ibm-spectrum-scale-ns
Output:
NAME                                           READY   STATUS    RESTARTS   AGE   IP            NODE                      NOMINATED NODE   READINESS GATES
ibm-spectrum-scale-core-4xlkx 1/1 Running 0 29d 10.33.3.242 ocp4-q2qf4-worker-2nkfk <none> <none>
ibm-spectrum-scale-core-bv5s2 1/1 Running 0 29d 10.33.3.241 ocp4-q2qf4-worker-c7j2q <none> <none>
ibm-spectrum-scale-core-jnbrz 1/1 Running 0 29d 10.33.3.240 ocp4-q2qf4-worker-ljbfk <none> <none>
ibm-spectrum-scale-gui-0 9/9 Running 0 29d 10.128.2.11 ocp4-q2qf4-worker-c7j2q <none> <none>
ibm-spectrum-scale-operator-7c86d75f7d-rntqs 1/1 Running 8 29d 10.131.0.19 ocp4-q2qf4-worker-ljbfk <none> <none>
ibm-spectrum-scale-pmcollector-0 2/2 Running 0 29d 10.128.2.10 ocp4-q2qf4-worker-c7j2q <none> <none>
ibm-spectrum-scale-pmcollector-1 2/2 Running 6 29d 10.129.2.11 ocp4-q2qf4-worker-2nkfk <none> <none>

Also, verify that the service 'ibm-spectrum-scale-perf-query' is deployed and has a ClusterIP assigned:

oc get svc -n ibm-spectrum-scale-ns
Output:
NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)               AGE
ibm-spectrum-scale ClusterIP 172.30.205.112 <none> 22/TCP 29d
ibm-spectrum-scale-gui ClusterIP 172.30.211.232 <none> 443/TCP,80/TCP 29d
ibm-spectrum-scale-gui-callback ClusterIP None <none> 47443/TCP,47080/TCP 29d
ibm-spectrum-scale-perf-query ClusterIP 172.30.228.116 <none> 9084/TCP,9094/TCP 29d
ibm-spectrum-scale-pmcollector ClusterIP None <none> 9085/TCP,4739/TCP 29d

3. Copy or edit the content of the example_deployment_scripts directory to your favourite directory on the master node. Additionally, perform the following modifications in the files before you start with the deployment:

  • Open the example_deployment_scripts/bridge_deployment/bridge-service.yaml file with an editor and set the namespace name of your CNSA cluster project.
    The default namespace is ibm-spectrum-scale-ns; however, check your system:
vi example_deployment_scripts/bridge_deployment/bridge-service.yaml

bridge-deployment.yaml

  • Edit the example_deployment_scripts/bridge_deployment/bridge-deployment.yaml and modify the image: field to point to the bridge image location you created before.

For example, if your images are tagged latest and the container registry route is image-registry.openshift-image-registry.svc:5000/ibm-spectrum-scale-ns the image values would look like:

image: image-registry.openshift-image-registry.svc:5000/ibm-spectrum-scale-ns/gpfs510:latest

If you are not sure, check the value with the following command:

for image in `oc get is -o custom-columns=NAME:.metadata.name --no-headers`; do
echo "---"
oc get is $image -o yaml | egrep "name:|dockerImageRepository"
done

To see the Image tag:

[root@ocp-admin deploy]# oc get is
Output (repository string shortened):
NAME      IMAGE REPOSITORY              TAGS     UPDATED
gpfs510   default/ibmscale-ns/gpfs510   latest   About an hour ago

Mine is: image-registry.openshift-image-registry.svc:5000/ibm-spectrum-scale-ns/gpfs510:latest

vi example_deployment_scripts/bridge_deployment/bridge-deployment.yaml

In case you are pulling the image from a private Docker registry or repository you need to create a Secret based on existing credentials and put the secret name under imagePullSecrets settings.

So I also needed to change imagePullSecrets to default for my namespace.
To find your secret, check with the following command:

# oc describe serviceaccount default -n ibm-spectrum-scale-ns
Name:                default
Namespace: ibm-spectrum-scale-ns
Labels: <none>
Annotations: <none>
Image pull secrets: default-dockercfg-vcjdm

4. Create the TLS certificate and the private key

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/privkey.pem -out /tmp/cert.pem -subj "/CN=grafana-bridge/O=grafana-bridge"
Generating a 2048 bit RSA private key
.....+++
.+++
writing new private key to '/tmp/privkey.pem'
-----

Check that both .pem files were created in the /tmp directory:

[root@ocp-admin grafana_bridge]# cat /tmp/privkey.pem
-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCw6j2vnT7N+Dp/
Z7hby2rsr7DVZbRF5DOG5h+IcqJ2+L12GM1UwpQZV58AZf3F1Etmlt9833CV/c3K
PoUs++a0Xhw2RMdCaqoEF02XoI7c+1fXxWMj7T9pcY6k7237McgbZ8g91jbTj24E
SzQyqsJUs/JTRHvqAp+n9ZGqm5enyhKfxM9njV4bES1H3NHXYlCow+xM71TL1a3W
sr47KGfvAgh+ogDtWIQfyZMJhuiIEsM98lVeL5NJ9sJ744G0NCjmIoh3MxFm5OC+
Q203dFf/uPfe5ywyGXFQOJPgL3UTpnjBfK4C3bFF32Bw9Z4ZINSmdb985OrrKGRe
vXOX9s+FAgMBAAECggEAA9M8vDRRLFDmquSKNuniCPYPg72sNSqm9o65NdEMtDfS
mOAWaHPAkf+z/2U1JHbfnns6n8+Q9p1DOtE5PezAYzv5ri60hDocLPR4CAQ/soV4
s8Cf0SILEfOTmvtWTG0aH4WR7cxY6SAbx9n8afAJPZ2aarR7XWbrHs2PdbnhgI4z
uy+eO8tesvmG3I9D0qdZFSS5y+hMemP/xSbxArTsO3McljNDEOANzlijbZM6o0JX
cNkhXuM3IspLprt41z2OhUUzH0mBRm2ueUTn7fUhZ6TpFR9tSObhCOkg0NgdeBMo
HygKQWUSlm0tvB7rUwUI72yOrx29E70M1AUgEpskwQKBgQDhM0G2x85g+626qsWH
QfeuBQSVL0r9DMgYqQgnJDMoTpj5e8SU+QpXJKE/mOodFjq1PHzjurktwxu+Mfcs
QvwmBpLth5yMtYhpARR4R58jvVoN+C+aPmn/QtTdOBb9F5zA42fur7mh7P5KqVK8
evxZ/C+AzDJQDhhTw0bd4kAh1QKBgQDJHGpE/h+8M8M6l6XyQQQQ76khKejs2YY8
oEJRasw/vW9gEpwndUcBuvEVbwxn/dygIMp/L17n9vg0Vxbd5SIDck9mGB+4jiPB
0eQgUhpVLkKZUcIn19VlcaDTv5gkH1W99Be+hgdWcDofe0c0X2MtcF+EqnqQesJG
+Wqk7yAe8QKBgQC0ELnwnl7EaTkGUtnSRsr2GAkMCF6ba4brQOzF70oAZqgmg/Ix
c9fyydUs9uXrEAUtOQpbRMggcStTrrwGZiEbfpIo3xAr6lMCMtzdN9dlSlghZ1sY
p+M1OYjewaSQBjtOeAZ4cYWqlcbWiAEht+zjPqP1BlEMddi50SBu9iN1aQKBgQC/
ETgFhEoyTBtXN2x51DtAu/E7iM26+I8IWlmncIfMpvWBmSyycEGd6zXQ30gyJIXP
vFemriLEz2bQk00uU9sU2y2EGbdJaAGgywCplFdgRisP7xU/NVeQoXvisUyiRQL5
DUbhxASEousVrdHgeB+JtBGLwUvgqECbnassN+OUgQKBgACveEUJ7mt+LycqGG30
9DW6mjypXeP+5NlbrNYx9kDJl6xreNcc/N5GIHbGSy8Kt6r6BxbCNqxNX5/YAE47
nXf8MFg9M/T9E07Oos0bowUfqq9DSmXNL5Tdi2neIZJqlbtdV9ocNOYr+p365//9
apASGKqJwb5Q81x5Y9nSVzzF
-----END PRIVATE KEY-----

[root@ocp-admin grafana_bridge]# cat /tmp/cert.pem
-----BEGIN CERTIFICATE-----
MIIDNzCCAh+gAwIBAgIJAOa3NGJztCeiMA0GCSqGSIb3DQEBCwUAMDIxFzAVBgNV
BAMMDmdyYWZhbmEtYnJpZGdlMRcwFQYDVQQKDA5ncmFmYW5hLWJyaWRnZTAeFw0y
MTAxMTUxNTQzMzBaFw0yMjAxMTUxNTQzMzBaMDIxFzAVBgNVBAMMDmdyYWZhbmEt
YnJpZGdlMRcwFQYDVQQKDA5ncmFmYW5hLWJyaWRnZTCCASIwDQYJKoZIhvcNAQEB
BQADggEPADCCAQoCggEBALDqPa+dPs34On9nuFvLauyvsNVltEXkM4bmH4hyonb4
vXYYzVTClBlXnwBl/cXUS2aW33zfcJX9zco+hSz75rReHDZEx0JqqgQXTZegjtz7
V9fFYyPtP2lxjqTvbfsxyBtnyD3WNtOPbgRLNDKqwlSz8lNEe+oCn6f1kaqbl6fK
Ep/Ez2eNXhsRLUfc0ddiUKjD7EzvVMvVrdayvjsoZ+8CCH6iAO1YhB/JkwmG6IgS
wz3yVV4vk0n2wnvjgbQ0KOYiiHczEWbk4L5DbTd0V/+4997nLDIZcVA4k+AvdROm
eMF8rgLdsUXfYHD1nhkg1KZ1v3zk6usoZF69c5f2z4UCAwEAAaNQME4wHQYDVR0O
BBYEFIXzVnPcGwnM1geK+j+Iw/N1xVDmMB8GA1UdIwQYMBaAFIXzVnPcGwnM1geK
+j+Iw/N1xVDmMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAHcsyq8U
/IiVprUp5qjksins8AKXjN2QgMPmr55bpMKps8Kgw/6BxqV8Z3rnQwxIub/UTaYb
g2hswx6EiYTfnF9kAX/BLpeFIIuOtzNH5vnRY2KSLbvzlozNdAQzMo9JHZhbdtrG
Z1749NBkWrpPLDDv61Lbp+bGswatS8cZ65DUVMnGGUSD5I1tEZJCQhGN+jgsnYyA
Ln+3Jkk002xWgEZaejWYdCfA54QnFLDyIEvnceI2+IMgLu/o8J4fiFxOGNLVMkJR
PUPiqMZqvUpzkiOD0K6wXYcr+TR+F+lFc/ppc8sP5zZhlUTKHQBUMQDM9Onn9Ywo
C9btAKCEDExkNQY=
-----END CERTIFICATE-----
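Beyond eyeballing the files, you can confirm that the key and certificate actually belong together by comparing their public-key fingerprints. A small sketch; it runs against a throwaway pair in a temp directory, but you can point the -in paths at /tmp/privkey.pem and /tmp/cert.pem to check the pair from step 4:

```shell
# Generate a throwaway key/cert pair (same openssl req invocation as step 4):
dir=$(mktemp -d)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout "$dir/privkey.pem" -out "$dir/cert.pem" \
  -subj "/CN=grafana-bridge/O=grafana-bridge" 2>/dev/null

# Fingerprint of the public key derived from the private key:
key_fp=$(openssl pkey -in "$dir/privkey.pem" -pubout -outform DER | openssl dgst -sha256 | awk '{print $NF}')
# Fingerprint of the public key embedded in the certificate:
crt_fp=$(openssl x509 -in "$dir/cert.pem" -pubkey -noout | openssl pkey -pubin -pubout -outform DER | openssl dgst -sha256 | awk '{print $NF}')

# The two digests must be identical if the pair matches:
[ "$key_fp" = "$crt_fp" ] && echo "key and certificate match"
```

If the digests differ, you mixed up files from different openssl runs and the bridge TLS setup will fail later.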

5. Create the ‘grafana-bridge-secret’ secret for the TLS keys

Note: remember to change the namespace here!
oc project ibm-spectrum-scale-ns

oc create secret tls grafana-bridge-secret --key="/tmp/privkey.pem" --cert="/tmp/cert.pem"
secret/grafana-bridge-secret created

Describe the secret:

oc describe secret grafana-bridge-secret
Name: grafana-bridge-secret
Namespace: testapp
Labels: <none>
Annotations: <none>
Type: kubernetes.io/tls

Data
====
tls.crt: 1176 bytes
tls.key: 1704 bytes

6. Change to the directory example_deployment_scripts/bridge_deployment/, which contains the following .yaml files:

bridge-deployment.yaml  
bridge-service.yaml
role_binding.yaml
role.yaml

Note: remember to change the namespace here!
oc project ibm-spectrum-scale-ns

# ls 
bridge-deployment.yaml
bridge-service.yaml
role_binding.yaml
role.yaml

7. Apply the following .yaml files to create the bridge

oc create -f role.yaml
oc create -f role_binding.yaml
oc create -f bridge-service.yaml
oc create -f bridge-deployment.yaml

8. Verify the grafana-bridge pods are up and running

oc get pod
Output:
NAME                                           READY   STATUS    RESTARTS   AGE
grafana-bridge-deployment-774c84c88c-85wk9 1/1 Running 0 13m
grafana-bridge-deployment-774c84c88c-9z2cs 1/1 Running 0 13m

Check if the collector manages to connect with the following command:

oc logs grafana-bridge-deployment-*
[root@ocp-admin bridge_deployment]# oc logs grafana-bridge-deployment-774c84c88c-9z2cs
Connection to the collector server established successfully
Successfully retrieved MetaData

Received sensors:
CPU DiskFree GPFSFilesystem GPFSFilesystemAPI GPFSNode GPFSNodeAPI GPFSRPCS GPFSVFSX GPFSWaiters Load Memory Netstat Network TopProc
Initial cherryPy server engine start have been invoked. Python version: 3.6.8 (default, Aug 18 2020, 08:33:21)
[GCC 8.3.1 20191121 (Red Hat 8.3.1-5)], cherryPy version: 18.6.0.
server started

To get more information about pods:

oc describe pod grafana-bridge-deployment-774c84c88c-85wk

Note: If you need to delete the pods, just run oc delete instead of oc create on the YAML files.

oc delete -f role.yaml
oc delete -f role_binding.yaml
oc delete -f bridge-service.yaml
oc delete -f bridge-deployment.yaml

Check and wait until the pods are removed:

# oc get pod

Grafana instance deployment in a k8s/OCP environment

Dependencies

  • RedHat community-powered Grafana operator v3.6.0
  • Grafana instances provided with the OpenShift monitoring stack (and its dashboards) are read-only. To solve this problem, you can use the RedHat community-powered Grafana operator provided by OperatorHub.
  • The operator can deploy and manage a Grafana instance on Kubernetes and OpenShift. The following features are supported:
    - Install Grafana to a namespace
    - Configure Grafana through the custom resource
    - Import Grafana dashboards from the same or other namespaces
    - Import Grafana data sources from the same namespace
    - Install plugins (panels)
  • The Grafana operator image can be found at quay.io/integreatly/grafana-operator:v3.6.0

Deploying Grafana instance for the IBM Spectrum Scale container-native access (CNSA) project in a k8s/OCP environment:

  1. create a new project, for example: ibm-scale-grafana
oc new-project ibm-scale-grafana

2. Navigate to OperatorHub and select the community-powered Grafana Operator. Press Continue to accept the disclaimer, press Install, and press Subscribe to accept the default configuration values and deploy to your Grafana namespace (here, ibm-scale-grafana).

NOTES: I tested this on Grafana operator version 3.7.
I also tried v2.0, but then there were some problems with TLS security.

After some time, the Grafana operator will be made available in the ibm-scale-grafana namespace.

You can also verify that the Grafana operator installed successfully in the 'ibm-scale-grafana' namespace using the command line:

[root@ocp-admin bridge_deployment]#  oc get po -n ibm-scale-grafana
NAME READY STATUS RESTARTS AGE
grafana-operator-5789dd9447-pnb5j 1/1 Running 0 2m20s
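If you prefer the command line over the web console, the same operator can also be installed through an OLM Subscription resource. A rough sketch; the channel and catalog-source names here are assumptions, so verify them against what OperatorHub shows for your cluster:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana-operator
  namespace: ibm-scale-grafana
spec:
  channel: alpha                          # assumed channel; verify in OperatorHub
  name: grafana-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
```

Note that a namespace-scoped install also needs an OperatorGroup in the target namespace; installing through the web console normally creates one for you.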

3. Edit the Grafana instance for CNSS (CNSA?) 😅
Change to the directory example_deployment_scripts/grafana_deployment/

Edit file grafana-instance-for-cnss.yaml

  • Change to your Grafana namespace.
  • Also change the user and password if you want.
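For reference, the relevant part of the Grafana custom resource looks roughly like this. This is a sketch based on the Grafana operator v3 CRD; the values, including root/secret, are just the defaults this walkthrough uses, so change them:

```yaml
apiVersion: integreatly.org/v1alpha1
kind: Grafana
metadata:
  name: grafana-for-cnss
  namespace: ibm-scale-grafana    # your Grafana namespace
spec:
  config:
    security:
      admin_user: root            # change if you want
      admin_password: secret      # change if you want
  ingress:
    enabled: true                 # lets the operator create the OpenShift route
```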

4. Apply the grafana-instance-for-cnss.yaml file

oc create -f grafana-instance-for-cnss.yaml
Output: grafana.integreatly.org/grafana-for-cnss created

Check that your instance has been created with the following command:

oc get Grafana
Output:
NAME AGE
grafana-for-cnss 23s

Connecting the grafana-bridge data source to the Grafana for CNSS instance


So now we are ready to collect some data.

The grafana-serviceaccount serviceAccount was created alongside the Grafana instance during deployment. You need to grant grafana-serviceaccount access rights to the ibm-spectrum-scale-operator clusterRole.

[root@ocp-admin grafana_deployment]# oc get sa
NAME SECRETS AGE
builder 2 28m
default 2 28m
deployer 2 28m
grafana-operator 2 23m
grafana-serviceaccount 2 8m40s
[root@ocp-admin grafana_deployment]# oc describe serviceAccount grafana-serviceaccount
Name: grafana-serviceaccount
Namespace: ibm-scale-grafana
Labels: <none>
Annotations: serviceaccounts.openshift.io/oauth-redirectreference.primary:
{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"grafana-route"}}
Image pull secrets: grafana-serviceaccount-dockercfg-hp24b
Mountable secrets: grafana-serviceaccount-token-rxkvf
grafana-serviceaccount-dockercfg-hp24b
Tokens: grafana-serviceaccount-token-hjdcs
grafana-serviceaccount-token-rxkvf
Events: <none>
oc adm policy add-cluster-role-to-user ibm-spectrum-scale-operator -z grafana-serviceaccount
Output:
clusterrole.rbac.authorization.k8s.io/ibm-spectrum-scale-operator added: "grafana-serviceaccount"

The bearer token for this serviceAccount is used to authenticate access to the grafana-bridge dataSource in the ibm-scale-grafana namespace. The following command displays this token:

Note: Change to the correct namespace with -n.

oc serviceaccounts get-token grafana-serviceaccount -n ibm-scale-grafana
Output: a very long bearer token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjNaLUVFei1PZnptcmVXRUNRWlNwTHF3Mm5OQ0xSNnBNTWIza1dWVS03eVUifQ.eyJpc3MiOiJrdWJlc

We need to update the file grafana-bridge-datasource.yaml:

  • Substitute ${BEARER_TOKEN} in grafana-bridge-datasource.yaml with the output of the command above.
  • The 'TLS cert ${TLS_CERT}' and 'TLS key ${TLS_KEY}' placeholders also need to be replaced with the TLS key and certificate we generated for the grafana-bridge.
    - For me, these were the contents shown by cat /tmp/privkey.pem and cat /tmp/cert.pem.
  • Change the namespace to the one you created earlier: ibm-scale-grafana.
  • Check and change the url so that it connects to the correct bridge URL (with the correct namespace, for example).

For me, the GrafanaDataSource deployment for the grafana-bridge datasource looks like this:

Note: Check your YAML indentation and remember the pipes (|) in the tls* fields.
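As a sketch of the shape of that file: the field names follow Grafana's OpenTSDB provisioning options, but the service URL, port, and spec.name here are placeholders, so take the exact values from the repo's grafana-bridge-datasource.yaml:

```yaml
apiVersion: integreatly.org/v1alpha1
kind: GrafanaDataSource
metadata:
  name: bridge-grafanadatasource
  namespace: ibm-scale-grafana             # your Grafana namespace
spec:
  name: grafana-bridge-datasource.yaml     # placeholder
  datasources:
    - name: Grafana-Bridge
      type: opentsdb
      access: proxy
      url: https://<bridge-service>.<cnsa-namespace>.svc:<port>   # placeholder
      isDefault: true
      jsonData:
        tsdbVersion: 3                     # corresponds to OpenTSDB ==2.3
        tlsAuth: true
        tlsSkipVerify: true
        httpHeaderName1: Authorization
      secureJsonData:
        tlsClientCert: |
          ${TLS_CERT}                      # paste /tmp/cert.pem content here
        tlsClientKey: |
          ${TLS_KEY}                       # paste /tmp/privkey.pem content here
        httpHeaderValue1: Bearer ${BEARER_TOKEN}
```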

Create the grafana-bridge GrafanaDataSource from the YAML file

oc create -f grafana-bridge-datasource.yaml
Output: grafanadatasource.integreatly.org/bridge-grafanadatasource created

Check the GrafanaDataSource with the following command:

oc get GrafanaDataSource -n ibm-scale-grafana
Output:
NAME AGE
bridge-grafanadatasource 62s

Check the Grafana ConfigMaps.
There should be 1 data entry in grafana-datasources:

oc get configmap -n ibm-scale-grafana
Output:
NAME DATA
grafana-config 1
grafana-datasources 1
grafana-operator-lock 0

If the DataSource didn't get created, delete the resource and check your YAML file:

oc delete -f grafana-bridge-datasource.yaml

Explore Grafana WEB interface for CNSS project in a k8s/OCP environment

To get into the Grafana web interface, we can check that a route has been created:

oc get route
Output:
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
grafana-route grafana-route-ibm-scale-grafana.apps.ocp4.oslo.forum.com grafana-service grafana edge None
oc describe route grafana-route
Output:
Name: grafana-route
Namespace: ibm-scale-grafana
Created: About an hour ago
Labels: <none>
Annotations: openshift.io/host.generated=true
Requested Host: grafana-route-ibm-scale-grafana.apps.ocp4.oslo.forum.com
exposed on router default (host apps.ocp4.oslo.forum.com)
Path: <none>
TLS Termination: edge
Insecure Policy: <none>
Endpoint Port: grafana
Service: grafana-service
Weight: 100 (100%)
Endpoints: 10.128.2.111:3000

Open the 'Requested Host' URL in a browser to reach the Grafana user interface. Click 'Sign In' in the bottom-left menu of Grafana, and log in using the default username and password configured earlier (root/secret).

Note: Remember https:// in the URL:
https://grafana-route-ibm-scale-grafana.apps.ocp4.oslo.forum.com

Remember to sign in with the user and password you created.

Check the data source Grafana-Bridge; we then need to change the OpenTSDB settings Version to ==2.3.

(Why we need to set this version is not clear; maybe a bug.)

NOTE: I had a problem where the certificates didn't get added; this was caused by the TLS certificate and TLS key not being placed correctly inside the YAML file.
You can see this if the TLS auth details are missing inside the data source.

Create a Test Dashboard to see if we get some data.

  • Press + Create and Dashboard.
  • And then New Panel.

Choose your data source: Grafana-Bridge.
Type in CPU_System under Metric:

And then the green graph should fill in with the CPU load.

Now we can start Importing examples.

Note: The examples that are available were not created for CNSA and remote-mounted systems, so some of the graphs will not work.
Examples for CNSA will eventually be created, or you can create your own 👷

The examples are available in the IBM Scale bridge repository.

You need to import each of them separately, yup…
I have asked if we could include them in the container.

Testing the Graphs: 🏃

I will log into the worker nodes or the gpfs container and create some load for demonstration.

Note: this test will fill your filesystem; check how much space you have.

oc rsh <core-pod> /usr/lpp/mmfs/samples/perf/gpfsperf create rand /mnt/fs1/perftestfile1 -n 10g -r 1m -th 16 -fsync | grep Data

Here we can see that worker node 2nkfk is writing to the filesystem.

Let's also read the data:

/usr/lpp/mmfs/samples/perf/gpfsperf read rand /mnt/fs1/perftestfile1 -n 10g -r 1m -th 16 -fsync|grep Data

If you want to run the test over all nodes:

sh-4.4# mmdsh -N all /usr/lpp/mmfs/samples/perf/gpfsperf create rand /mnt/fs1/perftest.\$\(hostname\) -n 5g -r 1m -th 16 -fsync|grep Dat

Dashboard: NETWORK

The NETWORK dashboard shows network performance details. For example, the 'NETWORK throughput' graph shows you the number of bytes of data sent and received by the network interface.

Final Thoughts: 🤔

Hopefully this was informative. This is still an early version, but I would say that if we get it more integrated, we will have something beneficial. I have created a request to have the Grafana examples included in the container images, along with some examples for remote-cluster and OCP installations.


Ole Kristian Myklebust
Possimpible

Nerd, Loves the mountains and all that come with it. IBMer that works for IBM Lab Services. My own Words and opinion.