Central Prometheus cluster with an example using remote_write
Most of us have an idea of how to install Prometheus in a Kubernetes cluster using Helm charts. At the enterprise level, however, the picture usually looks quite different: there are typically multiple Kubernetes clusters being monitored, plus a dedicated Kubernetes cluster for observability concerns such as monitoring and logging.
In this blog, we will walk through an example of how this can be achieved on a small scale. We will create two kind clusters, clustera and clusterb, and configure them so that metrics from clustera are sent to clusterb using remote_write. A later blog will look at how to add high availability, long-term metric storage, indexing, and scalability on the receiving side where all the metrics are gathered.
Hands-On:
Without any delay, let's start with the example. Below are the kind configs used to create the clusters clustera and clusterb. If you want to know more about kind, take a look at the blog here.
# clustera-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 31000
    hostPort: 31000
    protocol: TCP
  - containerPort: 9090
    hostPort: 9090
    protocol: TCP
# - role: worker
===================================
# clusterb-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 32000
    hostPort: 32000
    protocol: TCP
  - containerPort: 9091
    hostPort: 9091
    protocol: TCP
# - role: worker
Create the clusters:
$ kind create cluster --name clustera --config clustera-config.yaml
Creating cluster "clustera" ...
✓ Ensuring node image (kindest/node:v1.27.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-clustera"
You can now use your cluster with:
kubectl cluster-info --context kind-clustera
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
$ kind create cluster --name clusterb --config clusterb-config.yaml
Creating cluster "clusterb" ...
✓ Ensuring node image (kindest/node:v1.27.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-clusterb"
You can now use your cluster with:
kubectl cluster-info --context kind-clusterb
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
Now that both clusters are created, we can interact with them by switching contexts with the kubectl config use-context kind-<clustername> command as follows:
$ kubectl config use-context kind-clustera
Switched to context "kind-clustera".
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
clustera-control-plane Ready control-plane 8m58s v1.27.1
$ kubectl config use-context kind-clusterb
Switched to context "kind-clusterb".
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
clusterb-control-plane Ready control-plane 25m v1.27.1
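Since the kind configs publish extra host ports, you can optionally confirm the mappings with docker port before moving on (kind names the node containers after the cluster):
$ docker port clustera-control-plane
$ docker port clusterb-control-plane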
Now, let's pull the Prometheus chart first:
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" already exists with the same configuration, skipping
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "enapter" chart repository
...Successfully got an update from the "open-telemetry" chart repository
Update Complete. ⎈Happy Helming!⎈
$ helm pull prometheus-community/prometheus
Now that we have the Prometheus Helm chart, we can extract its contents and install it. After extracting, create two copies of the values file and name them values-clustera.yaml and values-clusterb.yaml, so that the chart directory looks as follows:
$ cd prometheus-25.21.0/prometheus/
$ ls
Chart.lock charts Chart.yaml ci README.md templates values-clustera.yaml values-clusterb.yaml values.schema.json values.yaml
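For reference, the extract-and-copy steps that produce this layout could look like the following sketch (the archive name depends on the chart version that helm pull fetched):
$ mkdir prometheus-25.21.0
$ tar -xzf prometheus-25.21.0.tgz -C prometheus-25.21.0   # the archive unpacks into a prometheus/ directory
$ cd prometheus-25.21.0/prometheus/
$ cp values.yaml values-clustera.yaml
$ cp values.yaml values-clusterb.yaml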
Let us install the chart in clustera:
$ kubectl config use-context kind-clustera
Switched to context "kind-clustera".
$ helm install prometheus . -f values-clustera.yaml
NAME: prometheus
LAST DEPLOYED: Sat Jun 15 14:46:18 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.default.svc.cluster.local
Get the Prometheus server URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=prometheus,app.kubernetes.io/instance=prometheus" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090
The Prometheus alertmanager can be accessed via port 9093 on the following DNS name from within your cluster:
prometheus-alertmanager.default.svc.cluster.local
Get the Alertmanager URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=alertmanager,app.kubernetes.io/instance=prometheus" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9093
#################################################################################
###### WARNING: Pod Security Policy has been disabled by default since #####
###### it deprecated after k8s 1.25+. use #####
###### (index .Values "prometheus-node-exporter" "rbac" #####
###### . "pspEnabled") with (index .Values #####
###### "prometheus-node-exporter" "rbac" "pspAnnotations") #####
###### in case you still need it. #####
#################################################################################
The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
prometheus-prometheus-pushgateway.default.svc.cluster.local
Get the PushGateway URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus-pushgateway,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9091
For more information on running Prometheus, visit:
https://prometheus.io/
$ kubectl get po
NAME READY STATUS RESTARTS AGE
prometheus-alertmanager-0 1/1 Running 0 4m34s
prometheus-kube-state-metrics-5bcd69f6ff-t8xs2 1/1 Running 0 4m34s
prometheus-prometheus-node-exporter-4mb7v 1/1 Running 0 4m34s
prometheus-prometheus-pushgateway-ccb848649-ll556 1/1 Running 0 4m34s
prometheus-server-648bbf8684-wn627 2/2 Running 0 4m34s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16m
prometheus-alertmanager ClusterIP 10.96.98.217 <none> 9093/TCP 4m52s
prometheus-alertmanager-headless ClusterIP None <none> 9093/TCP 4m52s
prometheus-kube-state-metrics ClusterIP 10.96.191.43 <none> 8080/TCP 4m52s
prometheus-prometheus-node-exporter ClusterIP 10.96.91.108 <none> 9100/TCP 4m52s
prometheus-prometheus-pushgateway ClusterIP 10.96.240.82 <none> 9091/TCP 4m52s
prometheus-server ClusterIP 10.96.139.186 <none> 80/TCP 4m52s
Now for clusterb. Before installing the chart, let us make a change in the values-clusterb.yaml file to tell this Prometheus instance that it is a remote-write receiver:
server:
  extraArgs:
    web.enable-remote-write-receiver: ""
Now, we can proceed with the installation:
$ kubectl config use-context kind-clusterb
Switched to context "kind-clusterb".
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
clusterb-control-plane Ready control-plane 25m v1.27.1
$ helm install prometheus . -f values-clusterb.yaml
NAME: prometheus
LAST DEPLOYED: Sat Jun 15 14:53:56 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.default.svc.cluster.local
Get the Prometheus server URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=prometheus,app.kubernetes.io/instance=prometheus" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090
The Prometheus alertmanager can be accessed via port 9093 on the following DNS name from within your cluster:
prometheus-alertmanager.default.svc.cluster.local
Get the Alertmanager URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=alertmanager,app.kubernetes.io/instance=prometheus" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9093
#################################################################################
###### WARNING: Pod Security Policy has been disabled by default since #####
###### it deprecated after k8s 1.25+. use #####
###### (index .Values "prometheus-node-exporter" "rbac" #####
###### . "pspEnabled") with (index .Values #####
###### "prometheus-node-exporter" "rbac" "pspAnnotations") #####
###### in case you still need it. #####
#################################################################################
The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
prometheus-prometheus-pushgateway.default.svc.cluster.local
Get the PushGateway URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus-pushgateway,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9091
For more information on running Prometheus, visit:
https://prometheus.io/
We can access the Prometheus UI with a simple port-forward:
$ kubectl port-forward svc/prometheus-server 8081:80
Forwarding from 127.0.0.1:8081 -> 9090
Forwarding from [::1]:8081 -> 9090
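To double-check that the receiver flag was picked up, you can query the Prometheus flags endpoint through this port-forward (an optional sanity check; /api/v1/status/flags returns the runtime flag values, and web.enable-remote-write-receiver should read true once the extra argument has taken effect):
$ curl -s http://localhost:8081/api/v1/status/flags | grep remote-write-receiver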
Now we will make the changes needed to push metrics from clustera to the Prometheus server in clusterb. First, we need to create a Service of type LoadBalancer or NodePort in clusterb; we will go with NodePort for our kind cluster scenario. The YAML for the Service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-central-lb
spec:
  type: NodePort
  ports:
    - port: 9090
      targetPort: 9090
      nodePort: 32000
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/component: server
    app.kubernetes.io/instance: prometheus
    app.kubernetes.io/name: prometheus
Let’s create the Service in clusterb:
$ kubectl apply -f prometheus-central-svc.yaml
service/prometheus-central-lb configured
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 43m
prometheus-alertmanager ClusterIP 10.96.119.174 <none> 9093/TCP 17m
prometheus-alertmanager-headless ClusterIP None <none> 9093/TCP 17m
prometheus-central-lb NodePort 10.96.173.53 <none> 9090:32000/TCP 2m38s
prometheus-kube-state-metrics ClusterIP 10.96.252.43 <none> 8080/TCP 17m
prometheus-prometheus-node-exporter ClusterIP 10.96.202.216 <none> 9100/TCP 17m
prometheus-prometheus-pushgateway ClusterIP 10.96.174.98 <none> 9091/TCP 17m
prometheus-server ClusterIP 10.96.46.211 <none> 80/TCP 17m
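Because the kind config for clusterb maps hostPort 32000 to the node, the NodePort path can also be verified from your machine before wiring up remote_write; Prometheus’ built-in /-/healthy endpoint is handy for this and should report the server as healthy:
$ curl http://localhost:32000/-/healthy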
We can now see the Service's ClusterIP 10.96.173.53 and its NodePort 32000. The ClusterIP is only reachable inside clusterb, so from clustera we will instead reach the NodePort through the clusterb node. Search for remoteWrite in the values-clustera.yaml file (in this chart it sits under the server: section) and add the config so that it looks like this:
remoteWrite:
  - url: "http://clusterb-control-plane:32000/api/v1/write"
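A quick note on this URL: both kind nodes are plain Docker containers on the shared kind network, so in a typical kind setup the clusterb node's container name (clusterb-control-plane) resolves from clustera, with NodePort 32000 reachable on it. If name resolution does not work in your environment, you can look up the node's IP with docker inspect and use http://<node-ip>:32000/api/v1/write instead:
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' clusterb-control-plane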
Now do a helm upgrade in clustera:
$ kubectl config use-context kind-clustera
Switched to context "kind-clustera".
$ helm upgrade prometheus . -f values-clustera.yaml
Release "prometheus" has been upgraded. Happy Helming!
NAME: prometheus
LAST DEPLOYED: Sat Jun 15 15:15:35 2024
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.default.svc.cluster.local
Get the Prometheus server URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=prometheus,app.kubernetes.io/instance=prometheus" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090
The Prometheus alertmanager can be accessed via port 9093 on the following DNS name from within your cluster:
prometheus-alertmanager.default.svc.cluster.local
Get the Alertmanager URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=alertmanager,app.kubernetes.io/instance=prometheus" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9093
#################################################################################
###### WARNING: Pod Security Policy has been disabled by default since #####
###### it deprecated after k8s 1.25+. use #####
###### (index .Values "prometheus-node-exporter" "rbac" #####
###### . "pspEnabled") with (index .Values #####
###### "prometheus-node-exporter" "rbac" "pspAnnotations") #####
###### in case you still need it. #####
#################################################################################
The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
prometheus-prometheus-pushgateway.default.svc.cluster.local
Get the PushGateway URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus-pushgateway,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9091
For more information on running Prometheus, visit:
https://prometheus.io/
That’s it: metrics from clustera will now be sent to the Prometheus server in clusterb through the NodePort Service. As shown below, the clustera Prometheus server logs show that remote_write to the configured URL has started.
$ kubectl logs -f prometheus-server-648bbf8684-2dp5f -c prometheus-server
....
ts=2024-06-18T09:56:41.056Z caller=dedupe.go:112 component=remote level=info remote_name=fe2c45 url=http://clusterb-control-plane:32000/api/v1/write msg="Done replaying WAL" duration=5.212641868s
....
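To confirm on the receiving side, you can query clusterb's Prometheus through the same host port mapping (32000 from the kind config); after a minute or two the count of up series should include the targets scraped in clustera in addition to clusterb's own. Note that remote_write does not add any cluster label by itself, so series from the two clusters look alike unless you configure external labels on the sender.
$ curl -s 'http://localhost:32000/api/v1/query?query=count(up)'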
In case of any issues, such as connectivity problems, we can debug from clustera by spinning up a pod and running wget or curl against the receiver:
$ kubectl run test-pod --rm -it --image=busybox -- /bin/sh
# wget -qO- http://clusterb-control-plane:32000/metrics
We can add several other options to remote_write, for example:
Authentication and Authorization:
remoteWrite:
  - url: "http://clusterb-control-plane:32000/api/v1/write"
    basic_auth:
      username: <your-username>
      password: <your-password>
TLS Configuration:
remoteWrite:
  - url: "https://clusterb-control-plane:32000/api/v1/write"
    tls_config:
      ca_file: /etc/prometheus/secrets/ca.crt
      cert_file: /etc/prometheus/secrets/client.crt
      key_file: /etc/prometheus/secrets/client.key
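For the certificate paths above to actually exist inside the Prometheus pod, the files have to be mounted into the container. With this chart that is usually done through something like extraSecretMounts; the snippet below is only a hedged sketch (the secret name remote-write-tls is an example, and exact key names can vary between chart versions):
server:
  extraSecretMounts:
    - name: remote-write-tls      # example name, not from the original setup
      mountPath: /etc/prometheus/secrets
      secretName: remote-write-tls
      readOnly: true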
Remote Write Queue Configuration:
remoteWrite:
  - url: "http://clusterb-control-plane:32000/api/v1/write"
    queue_config:
      capacity: 10000
      max_shards: 200
      min_shards: 1
      max_samples_per_send: 100
      batch_send_deadline: 5s
      min_backoff: 30ms
      max_backoff: 100ms
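Another option worth knowing is write_relabel_configs, which filters or rewrites series before they are sent; the illustrative config below would ship only node-exporter metrics (the node_.* filter is just an example, not part of the setup above):
remoteWrite:
  - url: "http://clusterb-control-plane:32000/api/v1/write"
    write_relabel_configs:
      - source_labels: [__name__]   # match on the metric name
        regex: "node_.*"            # keep only node-exporter metrics
        action: keep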
and many more. Hopefully this simple example has given you a better understanding of the concept.