Multi-Cluster Observability with Red Hat ACM using Red Hat OpenShift Data Foundation (ODF)

Dinesh Lakshmanan
15 min read · Oct 5, 2023


This post is similar to the earlier blog post Multi-Cluster Observability with Red Hat ACM and AWS S3. Users who prefer AWS S3 as object storage can follow that post to understand the multi-cluster observability configuration using AWS S3. In this blog post, we will focus on multi-cluster observability using Red Hat OpenShift Data Foundation (ODF) with NooBaa object storage instead of AWS S3.

There's a good chance that, throughout your Kubernetes journey, you'll have to manage multiple Kubernetes clusters. Whether they're all production clusters, dev environments, or single-tenant clusters set up per engineer, there will be more than one. As you manage large, multi-environment estates, admin teams grow to accommodate them, and new challenges arise in gaining visibility into the health of these environments. As the scale grows, so does the complexity of administering everything and viewing it holistically, from the data center to the edge.

Because of that, you’ll need a way to monitor them all in one place.

In this blog post, you'll learn about the purpose of multi-cluster observability using Red Hat Advanced Cluster Management for Kubernetes and how it can help you implement observability in production.

The Purpose

First, let's look further at the use case for multi-cluster observability. For this purpose, we'll use the example of Prometheus and Grafana managed in a single OpenShift/Kubernetes cluster, because it's relatable for many engineers and it's one of the most popular stacks for monitoring and observability in Kubernetes.

In the case of OpenShift, monitoring is provided out of the box and it's easy to get it up and running in a matter of minutes, providing features such as metrics, alerts, monitoring dashboards, and metric targets.

But here’s the problem — that’s for one OpenShift cluster.

What if you have multiple OpenShift clusters? Well, by default there is not much you can do about that from a management perspective. You're sort of stuck managing each cluster individually, which results in multiple instances of the monitoring stack to access whenever you want to set up alerts or check your stack. This, of course, is not a good option because it doesn't scale. You can't have fifty (50) clusters running 50 instances of Prometheus and 50 instances of Grafana. It's not realistic for any highly functioning engineering department.

You need a method to monitor and observe workloads, but do so in one place that has all of your Prometheus and Grafana configurations.

Now, of course, the above relates to any monitoring and observability platform. Remove Prometheus or Grafana and insert whatever other tool you like to use.

To overcome that issue in complex, large-scale environments, multi-cluster observability using Red Hat Advanced Cluster Management for Kubernetes provides an easy solution.

Multi-cluster Observability

Multicluster observability is an RHACM feature that is intended to be a central hub for metrics, alerting, and monitoring systems for all clusters, whether hub clusters or managed clusters. With Red Hat Advanced Cluster Management for Kubernetes (RHACM) version 2.4 and later, Red Hat provides centralized observability of the fleet, which is primarily focused on displaying cluster health metrics that can readily describe control plane health, cluster optimization, and cluster utilization. For example, admins can see API latency across the fleet and compare clusters for CPU/memory underutilization.

In addition, alerts are configured for centralized management, ensuring that responders are engaged directly in the tools they are expecting, such as Slack and PagerDuty. Specific alert rules can be put in place to ensure only critical alerts fire into appropriate channels.

With that, let's start with the multi-cluster observability configuration.

Environment Specifications

Let's make use of an OpenShift cluster with RHACM installed, with the source and target clusters as managed clusters. The environment used in this example runs on AWS infrastructure with the following specifications:

  • Red Hat OpenShift Container Platform 4.13 installed
  • 3 Control Plane Nodes (vCPU 4, RAM 16GB)
  • 3 Compute Nodes (vCPU 4, RAM 16GB)
  • Red Hat Advanced Cluster Management 2.8 installed
  • S3-API compatible Object Storage (OpenShift Data Foundation)

By default, observability is included with the product installation but not enabled. Due to the requirement for persistent storage, the observability service is not enabled by default. The observability components require 2701 mCPU and 11972 Mi of memory to install the observability service. For more details on pod capacity requests, check the Observability pod capacity requests documentation.

For persistent storage, Red Hat Advanced Cluster Management is tested with and fully supported by Red Hat OpenShift Data Foundation (formerly Red Hat OpenShift Container Storage). Apart from this, there are several other supported object storage types, as shown below:

  • Red Hat OpenShift Data Foundation
  • AWS S3
  • Red Hat Ceph (S3-compatible API)
  • Google Cloud Storage
  • Azure Storage
  • Red Hat OpenShift on IBM Cloud

However, since my environment is a mix of bare metal and AWS infrastructure, in this blog I will be using OpenShift Data Foundation to show how this works with a NooBaa object bucket.

Important: When you configure your object store, ensure that you meet the encryption requirements that are necessary when sensitive data is persisted. The observability service uses the stable object stores supported by Thanos.

Please note that this blog only covers the multicluster observability features and how to configure them. Installing the OpenShift cluster and ACM 2.8 is outside the scope of this blog.

Installing the multicluster observability add-on

When you install the RHACM operator as part of the prerequisites, you get access to some observability features right away. For example, you can use the Search capability from the Overview page, which shows some gauges and details about the managed clusters. This includes an add-on (search-collector) that runs on the managed cluster, which collects Kubernetes resources (CRDs, secrets, kinds, pods, storage, etc., essentially all the 'raw' resources in a cluster) and allows the user to search through them across all the clusters. This is enabled by default, out of the box.

Multicluster observability, which in this case specifically means the collection and visualization of OpenShift and Kubernetes platform metrics and alerts, is disabled by default on the RHACM hub cluster. Users must configure an object storage as mentioned in the environment specifications section and have it ready for Thanos on the hub cluster to store all cluster platform metrics.

Here is a diagram representing the multicluster observability configuration when it is enabled.

Figure 1, Multi-Cluster Observability Architecture

Observability is included in the RHACM installation, but must be enabled to use it. Follow along to enable observability:

  1. Log in to your Red Hat Advanced Cluster Management hub cluster.
  2. Create a namespace in the hub cluster for observability. We will create the namespace from the terminal; alternatively, you can create it in the OpenShift web console UI:
$ oc create namespace open-cluster-management-observability

3. Copy the pull-secret from the openshift-config namespace into the open-cluster-management-observability namespace by running the following command:

DOCKER_CONFIG_JSON=`oc extract secret/pull-secret -n openshift-config --to=-`

Then create the pull-secret in the open-cluster-management-observability namespace by running the following command:

oc create secret generic multiclusterhub-operator-pull-secret \
-n open-cluster-management-observability \
--from-literal=.dockerconfigjson="$DOCKER_CONFIG_JSON" \
--type=kubernetes.io/dockerconfigjson
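
As an optional sanity check (not part of the official procedure), you can confirm the pull-secret now exists in the observability namespace:

$ oc get secret multiclusterhub-operator-pull-secret -n open-cluster-management-observability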

4. After adding your pull-secret, the observability components in this namespace have credentials that are used to access images in the registry. Now it's time to configure a connection for our S3-compatible object store. In this example, as stated before, let's use OpenShift Data Foundation (ODF) with a NooBaa object bucket. The first thing we need to do is create a resource YAML file that will create our object bucket. Below is an example:

$ cat << EOF > ~/noobaa-object-storage.yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: obc-multiclusterobserv
spec:
  generateBucketName: obc-multiclusterobserv-bucket
  storageClassName: openshift-storage.noobaa.io
EOF
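
Before creating the claim, it can be worth verifying that the NooBaa object bucket storage class referenced above exists. This is an optional check that assumes ODF was installed with its default storage class name:

$ oc get storageclass openshift-storage.noobaa.io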

Once we have created the object bucket resource YAML, we need to go ahead and create it in our OpenShift cluster with the following command:

$ oc create -f ~/noobaa-object-storage.yaml
objectbucketclaim.objectbucket.io/obc-multiclusterobserv created

Once the object bucket resource is created, we can see it by listing the current object buckets:

$ oc get objectbucket
NAME                                 STORAGE-CLASS                 CLAIM-NAMESPACE   CLAIM-NAME               RECLAIM-POLICY   PHASE   AGE
obc-default-obc-multiclusterobserv   openshift-storage.noobaa.io   default           obc-multiclusterobserv   Delete           Bound   30s
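
Optionally, you can also look at the ObjectBucketClaim itself to confirm it has reached the Bound phase. This is just a verification step, run in the project where the claim was created (in this example, default):

$ oc get objectbucketclaim obc-multiclusterobserv -n default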

There are some bits of information we need to gather from the object bucket that we created, which we will need to build the thanos-object-storage resource YAML required for our observability configuration. Those bits are found by describing the object bucket we created and the object bucket's secret. First, let's look at the object bucket itself:

$ oc describe objectbucket obc-default-obc-multiclusterobserv
Name:         obc-default-obc-multiclusterobserv
Namespace:    
Labels:       app=noobaa
              bucket-provisioner=openshift-storage.noobaa.io-obc
              noobaa-domain=openshift-storage.noobaa.io
Annotations:  <none>
API Version:  objectbucket.io/v1alpha1
Kind:         ObjectBucket
Metadata:
  Creation Timestamp:  2021-05-01T00:12:54Z
  Finalizers:
    objectbucket.io/finalizer
  Generation:  1
  Managed Fields:
    API Version:  objectbucket.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"objectbucket.io/finalizer":
        f:labels:
          .:
          f:app:
          f:bucket-provisioner:
          f:noobaa-domain:
      f:spec:
        .:
        f:additionalState:
          .:
          f:account:
          f:bucketclass:
          f:bucketclassgeneration:
        f:claimRef:
          .:
          f:apiVersion:
          f:kind:
          f:name:
          f:namespace:
          f:uid:
        f:endpoint:
          .:
          f:additionalConfig:
          f:bucketHost:
          f:bucketName:
          f:bucketPort:
          f:region:
          f:subRegion:
        f:reclaimPolicy:
        f:storageClassName:
      f:status:
        .:
        f:phase:
    Manager:    noobaa-operator
    Operation:  Update
    Time:       2021-05-01T00:12:54Z
  Resource Version:  4864265
  Self Link:         /apis/objectbucket.io/v1alpha1/objectbuckets/obc-default-obc-multiclusterobserv
  UID:               9c7eddae-4453-439b-826f-f226513d78f4
Spec:
  Additional State:
    Account:                obc-account.obc-multiclusterobserv-bucket-f6508472-4ba6-405d-9e39-881b45a7344e.608c9d05@noobaa.io
    Bucketclass:            noobaa-default-bucket-class
    Bucketclassgeneration:  1
  Claim Ref:
    API Version:  objectbucket.io/v1alpha1
    Kind:         ObjectBucketClaim
    Name:         obc-multiclusterobserv
    Namespace:    default
    UID:          e123d2c8-2f9d-4f39-9a83-ede316b8a5fe
  Endpoint:
    Additional Config:
    Bucket Host:  s3.openshift-storage.svc
    Bucket Name:  obc-multiclusterobserv-bucket-f6508472-4ba6-405d-9e39-881b45a7344e
    Bucket Port:  443
    Region:       
    Sub Region:   
  Reclaim Policy:      Delete
  Storage Class Name:  openshift-storage.noobaa.io
Status:
  Phase:  Bound
Events:  <none>

In the object bucket describe output we are specifically interested in the bucket name and the bucket host. Below, let's capture the bucket name, assign it to a variable, and then echo it out to confirm the variable was set correctly:

$ BUCKET_NAME=`oc describe objectbucket obc-default-obc-multiclusterobserv|grep 'Bucket Name'|cut -d: -f2|tr -d " "`
$ echo $BUCKET_NAME
obc-multiclusterobserv-bucket-f6508472-4ba6-405d-9e39-881b45a7344e

Let's do the same thing for the bucket host information. Again, we will assign it to a variable and then echo the variable to confirm it was set correctly:

$ BUCKET_HOST=`oc describe objectbucket obc-default-obc-multiclusterobserv|grep 'Bucket Host'|cut -d: -f2|tr -d " "`
$ echo $BUCKET_HOST
s3.openshift-storage.svc
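
As an alternative to parsing the describe output, the OBC provisioner typically also creates a ConfigMap with the same name as the claim, containing BUCKET_NAME and BUCKET_HOST keys. Assuming that ConfigMap exists in your environment, the same variables can be set with jsonpath:

$ BUCKET_NAME=$(oc get configmap obc-multiclusterobserv -o jsonpath='{.data.BUCKET_NAME}')
$ BUCKET_HOST=$(oc get configmap obc-multiclusterobserv -o jsonpath='{.data.BUCKET_HOST}')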

After we have gathered the bucket name and bucket host, we also need to get the access and secret keys for our bucket. These are stored in a secret that has the same name as the metadata name defined in the original object bucket claim resource file we created above. In our example the metadata name was obc-multiclusterobserv. Let's show that secret below:

$ oc get secret obc-multiclusterobserv
NAME                     TYPE     DATA   AGE
obc-multiclusterobserv   Opaque   2      117s

The access and secret keys are stored in the contents of the secret resource, and we can see them if we get the secret and ask for the YAML version of the output, as we have done below:

$ oc get secret obc-multiclusterobserv -o yaml
apiVersion: v1
data:
  AWS_ACCESS_KEY_ID: V3M2TmpGdWVLd3Vjb2VoTHZVTUo=
  AWS_SECRET_ACCESS_KEY: ck4vOTBaM2NkZWJvOVJLQStaYlBsK3VveWZOYmFpN0s0OU5KRFVKag==
kind: Secret
metadata:
  creationTimestamp: "2021-05-01T00:12:54Z"
  finalizers:
  - objectbucket.io/finalizer
  labels:
    app: noobaa
    bucket-provisioner: openshift-storage.noobaa.io-obc
    noobaa-domain: openshift-storage.noobaa.io
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:AWS_ACCESS_KEY_ID: {}
        f:AWS_SECRET_ACCESS_KEY: {}
      f:metadata:
        f:finalizers:
          .: {}
          v:"objectbucket.io/finalizer": {}
        f:labels:
          .: {}
          f:app: {}
          f:bucket-provisioner: {}
          f:noobaa-domain: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"e123d2c8-2f9d-4f39-9a83-ede316b8a5fe"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:type: {}
    manager: noobaa-operator
    operation: Update
    time: "2021-05-01T00:12:54Z"
  name: obc-multiclusterobserv
  namespace: default
  ownerReferences:
  - apiVersion: objectbucket.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ObjectBucketClaim
    name: obc-multiclusterobserv
    uid: e123d2c8-2f9d-4f39-9a83-ede316b8a5fe
  resourceVersion: "4864261"
  selfLink: /api/v1/namespaces/default/secrets/obc-multiclusterobserv
  uid: eda5cd99-dc57-4c7b-acf3-377343d6fef8
type: Opaque

The access and secret keys are base64 encoded, so we need to decode them before we use them. As we did with the bucket name and bucket host, we will assign them to variables. First, let's pull the access key out of the YAML, decode it, assign it to a variable, and confirm the variable has the access key content:

$ AWS_ACCESS_KEY_ID=`oc get secret obc-multiclusterobserv -o yaml|grep -m1 AWS_ACCESS_KEY_ID|cut -d: -f2|tr -d " "| base64 -d`
$ echo $AWS_ACCESS_KEY_ID
Ws6NjFueKwucoehLvUMJ

We will do the same for the secret key and verify again:

$ AWS_SECRET_ACCESS_KEY=`oc get secret obc-multiclusterobserv -o yaml|grep -m1 AWS_SECRET_ACCESS_KEY|cut -d: -f2|tr -d " "| base64 -d`
$ echo $AWS_SECRET_ACCESS_KEY
rN/90Z3cdebo9RKA+ZbPl+uoyfNbai7K49NJDUJj
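
If you prefer to avoid grepping the YAML output, the same values can be pulled directly from the secret with jsonpath and decoded; this is simply an equivalent alternative to the commands above:

$ AWS_ACCESS_KEY_ID=$(oc get secret obc-multiclusterobserv -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
$ AWS_SECRET_ACCESS_KEY=$(oc get secret obc-multiclusterobserv -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)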

Now that we have our four variables containing the bucket name, bucket host, access key, and secret key, we are ready to create the thanos-object-storage resource YAML file, which we need in order to start the configuration and deployment of the Red Hat Advanced Cluster Management observability component. This file provides the observability service with the information about the S3 object storage. Below is how we can create the file; note that the shell variables will be substituted into the resource definition:

$ cat << EOF > ~/thanos-object-storage.yaml
apiVersion: v1
kind: Secret
metadata:
  name: thanos-object-storage
type: Opaque
stringData:
  thanos.yaml: |
    type: s3
    config:
      bucket: $BUCKET_NAME
      endpoint: $BUCKET_HOST
      insecure: false
      access_key: $AWS_ACCESS_KEY_ID
      secret_key: $AWS_SECRET_ACCESS_KEY
      trace:
        enable: true
      http_config:
        insecure_skip_verify: true
EOF

At this point, we can go ahead and create the thanos-object-storage secret from the YAML file we created:

$ oc create -f ~/thanos-object-storage.yaml -n open-cluster-management-observability
secret/thanos-object-storage created
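
Optionally, you can confirm that the heredoc substituted the variables correctly by decoding the thanos.yaml key back out of the secret; this is just a verification sketch:

$ oc get secret thanos-object-storage -n open-cluster-management-observability \
    -o jsonpath='{.data.thanos\.yaml}' | base64 -d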

Once the thanos-object-storage secret is created, we can create a MultiClusterObservability resource that references it, either from the OpenShift console or from the CLI.

To enable the multicluster observability add-on from the console, open the OpenShift Container Platform console navigation menu, select Installed Operators, and select the RHACM operator (open-cluster-management project). Navigate to MultiClusterObservability and click Create instance:

Figure 2, Observability Operator Installation

Alternatively, you can enable multicluster observability from the CLI by creating the CR yourself. Create the MultiClusterObservability custom resource YAML file named multiclusterobservability_cr.yaml.

Please note that you must define a storage class in the MultiClusterObservability custom resource if there is no default storage class specified. Alternatively, we can mark one of the existing storage classes as the default by patching it, as shown in the example just below.
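
A minimal sketch, assuming the cluster has a storage class named gp3-csi (substitute your own storage class name):

$ oc patch storageclass gp3-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'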

apiVersion: observability.open-cluster-management.io/v1beta2
kind: MultiClusterObservability
metadata:
  name: observability
spec:
  observabilityAddonSpec: {}
  storageConfig:
    metricObjectStorage:
      name: thanos-object-storage
      key: thanos.yaml

Apply the observability YAML to your cluster by running the following command:

oc apply -f multiclusterobservability_cr.yaml

Wait until the observability instance status is Ready. If everything goes smoothly, a Grafana link is displayed on the RHACM Overview page.

Figure 3, Observability Instance Creation
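
You can also watch the observability instance from the CLI. A minimal check, assuming the CR is named observability as in the example above, is to print its status conditions and wait for the Ready condition:

$ oc get multiclusterobservability observability -o jsonpath='{.status.conditions}'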

All the pods in the open-cluster-management-observability namespace for Thanos, Grafana, and Alertmanager are created. All the managed clusters connected to the Red Hat Advanced Cluster Management hub cluster are enabled to send metrics back to the Red Hat Advanced Cluster Management observability service.

$ oc get po -n open-cluster-management-observability
NAME READY STATUS RESTARTS AGE
observability-alertmanager-0 3/3 Running 0 30h
observability-alertmanager-1 3/3 Running 0 30h
observability-alertmanager-2 3/3 Running 0 30h
observability-grafana-685b47bb47-knwpm 3/3 Running 0 30h
observability-grafana-685b47bb47-p98v4 3/3 Running 0 30h
observability-observatorium-api-689c5d58db-5fpjq 1/1 Running 0 30h
observability-observatorium-api-689c5d58db-fq4ff 1/1 Running 0 30h
observability-observatorium-operator-5d49b47665-fqsrh 1/1 Running 0 30h
observability-rbac-query-proxy-5dfbbf9b58-k4gzn 2/2 Running 0 30h
observability-rbac-query-proxy-5dfbbf9b58-sbxvs 2/2 Running 0 30h
observability-thanos-compact-0 1/1 Running 0 30h
observability-thanos-query-c954b6f9c-mjj9g 1/1 Running 0 30h
observability-thanos-query-c954b6f9c-xwwvk 1/1 Running 0 30h
observability-thanos-query-frontend-6d447f44fd-9pfgl 1/1 Running 0 30h
observability-thanos-query-frontend-6d447f44fd-l8rnp 1/1 Running 0 30h
observability-thanos-query-frontend-memcached-0 2/2 Running 0 30h
observability-thanos-query-frontend-memcached-1 2/2 Running 0 30h
observability-thanos-query-frontend-memcached-2 2/2 Running 0 30h
observability-thanos-receive-controller-765d645cd6-kbcjb 1/1 Running 0 30h
observability-thanos-receive-default-0 1/1 Running 0 30h
observability-thanos-receive-default-1 1/1 Running 0 30h
observability-thanos-receive-default-2 1/1 Running 0 30h
observability-thanos-rule-0 2/2 Running 0 30h
observability-thanos-rule-1 2/2 Running 0 30h
observability-thanos-rule-2 2/2 Running 0 30h
observability-thanos-store-memcached-0 2/2 Running 0 30h
observability-thanos-store-memcached-1 2/2 Running 0 30h
observability-thanos-store-memcached-2 2/2 Running 0 30h
observability-thanos-store-shard-0-0 1/1 Running 0 30h
observability-thanos-store-shard-1-0 1/1 Running 0 30h
observability-thanos-store-shard-2-0 1/1 Running 0 30h

Validate that the observability service is enabled and the data is populated by launching the Grafana dashboards. Click the Grafana link that is near the console header, from either the console Overview page or the Clusters page.

Note: If you want to exclude specific managed clusters from collecting the observability data, add the following cluster label to your clusters: observability: disabled.
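
For example, you can apply the label to the ManagedCluster resource on the hub cluster (replace <managed-cluster-name> with the name of your cluster):

$ oc label managedcluster <managed-cluster-name> observability=disabled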

The observability service is now enabled. After you enable the observability service, the following functions are initiated:

  • Alerts from the Alertmanager instances on the managed clusters are forwarded to the Red Hat Advanced Cluster Management hub cluster.
  • All the managed clusters that are connected to the Red Hat Advanced Cluster Management hub cluster are enabled to send alerts back to the Red Hat Advanced Cluster Management observability service. You can configure the Red Hat Advanced Cluster Management Alertmanager to take care of deduplicating, grouping, and routing the alerts to the correct receiver integration such as email, PagerDuty, or OpsGenie. You can also handle silencing and inhibition of the alerts.

Now, upon navigating to Main Menu | Infrastructure, you can see a route for Grafana's observability dashboard.

Figure 4, ACM Dashboard with Grafana link

Click on Grafana to see some great dashboards that aggregate metrics coming from multiple clusters. In the figure below, you can see an overview of the clusters that are being managed by ACM. In this case, we see two clusters in the Grafana dashboard, because I added two clusters to be managed by ACM: local-cluster (the hub cluster) and managed-cluster.

Figure 5, MultiClusterObservability dashboard

Now you can count on this ACM feature to help you and your organization monitor all your managed Kubernetes clusters from a single pane of glass, independent of the infrastructure or cloud provider they run on. In the next section, we will show you an option that gives you even more control over your clusters.

Configuring AlertManager to send alerts

As we have seen so far, observability can be a great ally for monitoring all your clusters from a central view. Now we will go even further and show you the icing on the cake: one more thing to help you manage your clusters.

As shown in Figure 1, AlertManager is a resource that is part of the observability architecture. We will now show a sample configuration that you can use to enable this feature and get alerts from all managed clusters.

AlertManager is a tool that can send alerts to a set of other systems, such as email, PagerDuty, Opsgenie, WeChat, Telegram, Slack, and also your custom webhooks. For this example, we will use Slack, a short-messaging tool, as a receiver for all of our alerts.

Prerequisites: First, you will need a Slack app to set up alerts. Go to https://api.slack.com/messaging/webhooks and follow the instructions to create the app and configure a channel. When you finish configuring the Slack app, you will get a webhook endpoint similar to the following: https://hooks.slack.com/services/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. Save the webhook address in a safe place, as it will be used in the next steps.
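
As an optional sanity check, you can test the webhook with a simple curl request, as described in the Slack documentation, before wiring it into AlertManager:

$ curl -X POST -H 'Content-type: application/json' \
    --data '{"text": "Test message from the ACM AlertManager setup"}' \
    'https://hooks.slack.com/services/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'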

Configuring AlertManager

After you enable observability, alerts from your OpenShift Container Platform managed clusters are automatically sent to the hub cluster. You can use the alertmanager-config YAML file to configure alerts with an external notification system.

To configure AlertManager, you will need to create a new file named alertmanager.yaml. This file will contain the webhook that you saved previously. View the following example of the alertmanager-config YAML file:

global:
  slack_api_url: 'https://hooks.slack.com/services/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' #[1]
route:
  receiver: 'slack-notifications' #[2]
  group_by: [alertname, datacenter, app]
  routes:
    - receiver: 'slack-notifications'
      match_re:
        severity: critical|warning #[3]
receivers:
- name: 'slack-notifications'
  slack_configs:
  - channel: '#alertmanager-service' #[4]
    send_resolved: true
    icon_url: 'https://avatars3.githubusercontent.com/u/3380462'
    title: |-
      [{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ .CommonLabels.alertname }} for {{ .CommonLabels.job }}
      {{- if gt (len .CommonLabels) (len .GroupLabels) -}}
        {{" "}}(
        {{- with .CommonLabels.Remove .GroupLabels.Names }}
          {{- range $index, $label := .SortedPairs -}}
            {{ if $index }}, {{ end }}
            {{- $label.Name }}="{{ $label.Value -}}"
          {{- end }}
        {{- end -}}
        )
      {{- end }}
    text: >-
      {{ range .Alerts -}}
      *Alert:* {{ .Annotations.title }}{{ if .Labels.severity }} - `{{ .Labels.severity }}`{{ end }}
      *Description:* {{ .Annotations.description }} *Details:*
      {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
      {{ end }}
      {{ end }}

In the preceding code, we have highlighted some parts with numbers. Let’s take a look:

#[1]: Webhook Slack API URL
#[2]: Name of the receiver for alerts
#[3]: Filter critical or warning alerts
#[4]: Slack channel inside the workspace

The next step is to apply the new alertmanager.yaml file to the ACM observability namespace:

$ oc -n open-cluster-management-observability create secret generic alertmanager-config --from-file=alertmanager.yaml --dry-run=client -o=yaml |  oc -n open-cluster-management-observability replace secret --filename=-

The alertmanager.yaml file must be in the directory from which you run the command. Wait until the new AlertManager pods are created, and you will receive new [Firing] or [Resolved] alerts on the configured channel. See an example in the following screenshot:

Figure 6, AlertManager multicluster alerts
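
If the alerts do not appear right away, a quick optional check is to confirm that the Alertmanager pods in the observability namespace are running again after the configuration secret is replaced:

$ oc -n open-cluster-management-observability get pods | grep alertmanager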

Here we go: we have AlertManager set up and sending alerts to a Slack channel. In this blog, you have seen the observability feature, from installation to configuration and use, with the help of Red Hat Advanced Cluster Management using Red Hat OpenShift Data Foundation (ODF) for object storage. This should help you in your multi-cluster journey to monitor all your clusters, including edge use cases, no matter which provider they are running on.
