CRD Support and Workload Monitoring with Azure Managed Prometheus

Rashmi Chandrashekar
Microsoft Azure
Apr 15, 2024

Azure Monitor managed service for Prometheus is a component of Azure Monitor that brings together the best of the open-source ecosystem and Microsoft’s expertise in monitoring Kubernetes clusters at scale.

Azure Managed Prometheus is available as a native AKS add-on and as an Azure Arc extension, and is a managed implementation of the open-source platform.

Prometheus is a cloud-native metrics solution from the Cloud Native Computing Foundation and the most common tool used for collecting and analyzing metric data from Kubernetes clusters. Azure Monitor managed service for Prometheus is a fully managed Prometheus-compatible monitoring solution in Azure.

While managing a small cluster with Prometheus is relatively simple, as applications and clusters grow, managing Prometheus can become a task in itself. More often than not, it is a daunting task that many infrastructure owners struggle with and spend much of their time on. Microsoft offers a managed Prometheus service that works seamlessly with Azure Kubernetes Service and Azure Arc. You can also visualize the data with Azure Managed Grafana, and enabling Azure Managed Prometheus on a Kubernetes cluster provides out-of-the-box dashboards that are on par with the OSS community dashboards for monitoring your Kubernetes infrastructure. All of this can be done with a few simple onboarding steps from a wide variety of platforms such as ARM templates, Bicep, and the Azure portal.
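As a sketch of what onboarding looks like in code, the following hedged Bicep fragment shows the AKS cluster property that turns on managed Prometheus metrics collection. The cluster name, location, and API version are illustrative placeholders, not values from this article; only the azureMonitorProfile block is the relevant part.

```bicep
// Sketch only: enabling the managed Prometheus metrics add-on on an AKS cluster.
// Names and API version below are hypothetical placeholders.
resource aks 'Microsoft.ContainerService/managedClusters@2023-09-01' = {
  name: 'my-aks-cluster'
  location: 'eastus'
  properties: {
    // ...other cluster properties elided...
    azureMonitorProfile: {
      metrics: {
        enabled: true // turns on Prometheus metrics collection for the cluster
      }
    }
  }
}
```

The equivalent can be done from the Azure portal or with an ARM template, as noted above.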

You can read more about Azure Managed Prometheus and how to onboard in the documentation links at the end of this article.

Microsoft announced the GA of Azure Managed Prometheus back in May 2023. Since then, it has been increasingly used by customers to effectively monitor their Kubernetes clusters, both infrastructure and workloads.

Azure Managed Prometheus support for Pod and Service Monitors

To make working with Managed Prometheus easier and the transition from the OSS Prometheus Operator seamless, Microsoft recently announced support for Custom Resource Definitions (CRDs) that have the same schema as the OSS Prometheus Operator CRDs for pod and service monitors.

With this, you can easily transition from OSS self-managed Prometheus to Azure Managed Prometheus without the hassle of converting the scrape configs from pod and service monitors to any other format. Since the Custom Resources for pod and service monitors are widely used and well documented today, onboarding to Azure Managed Prometheus is that much smoother.

Here’s an example of how you would configure Azure Managed Prometheus pod and/or service monitors to scrape metrics.

Create a sample application

Deploy a sample application exposing prometheus metrics to be configured by pod/service monitor.

kubectl apply -f https://raw.githubusercontent.com/Azure/prometheus-collector/main/internal/referenceapp/prometheus-reference-app.yaml

Create a pod monitor or service monitor

Create one of the custom resources (pod monitor or service monitor) below to enable scraping of metrics from the sample application deployed in the previous step.

Pod Monitor

kubectl apply -f https://raw.githubusercontent.com/Azure/prometheus-collector/main/otelcollector/deploy/example-custom-resources/pod-monitor/pod-monitor-reference-app.yaml

Service Monitor

kubectl apply -f https://raw.githubusercontent.com/Azure/prometheus-collector/main/otelcollector/deploy/example-custom-resources/service-monitor/service-monitor-reference-app.yaml
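For reference, the pod monitor applied above looks roughly like the following sketch. The selector label and port name here are illustrative assumptions; check the linked YAML for the exact values used by the reference app.

```yaml
apiVersion: azmonitoring.coreos.com/v1   # CRD group used by Azure Managed Prometheus
kind: PodMonitor
metadata:
  name: prometheus-reference-app-pod-monitor
  namespace: default
spec:
  selector:
    matchLabels:
      app: prometheus-reference-app   # assumed label on the sample app's pods
  podMetricsEndpoints:
    - port: metrics                   # assumed name of the container port exposing /metrics
      interval: 30s
```

Apart from the apiVersion, the schema is the same as the OSS Prometheus Operator's PodMonitor.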

Once this is done, you will be able to see the metrics in your Grafana instance.

Customize scrape configuration using relabeling

Targets discovered using pod and service monitors have different __meta_* labels depending on what monitor is used. You can use the labels in the relabelings section to filter targets or replace labels for the targets.

Relabelings

The relabelings section is applied at the time of target discovery and applies to each target for the job. The following examples show ways to use relabelings.

Add a label

Add a new label called example_label with the value example_value to every metric of the job. Use __address__ as the source label only because that label always exists and adds the label for every target of the job.

relabelings:
  - sourceLabels: [__address__]
    targetLabel: example_label
    replacement: 'example_value'

Use Pod or Service Monitor labels

The __* labels are dropped after the targets are discovered. To filter using them at the metrics level, first keep them by using relabelings to assign them a label name, then use metricRelabelings to filter.

# Use the kubernetes namespace as a label called 'kubernetes_namespace'
relabelings:
  - sourceLabels: [__meta_kubernetes_namespace]
    action: replace
    targetLabel: kubernetes_namespace

# Keep only metrics with the kubernetes namespace 'default'
metricRelabelings:
  - sourceLabels: [kubernetes_namespace]
    action: keep
    regex: 'default'

Job and instance relabeling

You can change the job and instance label values based on the source label, just like any other label.

# Replace the job name with the pod label 'k8s-app'
relabelings:
  - sourceLabels: [__meta_kubernetes_pod_label_k8s_app]
    targetLabel: job

# Replace the instance name with the node name. This is helpful to replace a node IP
# and port with a value that is more readable
relabelings:
  - sourceLabels: [__meta_kubernetes_node_name]
    targetLabel: instance

Metric Relabelings

Metric relabelings are applied after scraping and before ingestion. Use the metricRelabelings section to filter metrics after scraping. The following examples show how to do so.

Drop metrics by name

# Drop the metric named 'example_metric_name'
metricRelabelings:
  - sourceLabels: [__name__]
    action: drop
    regex: 'example_metric_name'

Filter metrics by labels

# Keep metrics only where example_label = 'example'
metricRelabelings:
  - sourceLabels: [example_label]
    action: keep
    regex: 'example'

# Keep metrics only if 'example_label_1' exists as a label
metricRelabelings:
  - sourceLabels: [example_label_1]
    action: keep
    regex: '.+'

To author your own pod/service monitor you can use the templates here.

Please make sure to use the labelLimit, labelNameLengthLimit, and labelValueLengthLimit values specified in the templates so that your metrics are not dropped during processing.
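For illustration, these limits live directly in the monitor's spec. The values below are the ones I have seen in the templates, but verify them against the template you copy rather than treating this sketch as authoritative:

```yaml
spec:
  labelLimit: 63                # max number of labels per scraped target
  labelNameLengthLimit: 511     # max length of a label name
  labelValueLengthLimit: 1023   # max length of a label value
  # ...selector and endpoints as usual...
```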

Full documentation links below

Azure Managed Prometheus CRD support
Customize collection configuration
Create Pod and Service Monitors

Workload Monitoring with Azure Managed Prometheus

With the newly released support for CRDs, workload monitoring with Azure Managed Prometheus is relatively simple. Microsoft offers a list of ‘recipes’ for monitoring common workloads, with which you can monitor and get insights on your workloads in a matter of minutes.

These steps use the OSS Prometheus exporter helm charts and OSS dashboards to light up workload monitoring scenarios.

By following these simple steps, you can get Azure Managed Prometheus scraping Prometheus metrics from a wide variety of workloads for messaging, CI/CD, and distributed search in no time.
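As a sketch, once one of these exporter Helm charts is deployed, a service monitor such as the following can point Azure Managed Prometheus at it. The names, labels, and port name below are hypothetical; match them to the Service your chart actually renders.

```yaml
apiVersion: azmonitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: elasticsearch-exporter          # hypothetical name
  namespace: monitoring                 # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: prometheus-elasticsearch-exporter   # hypothetical label from the chart
  endpoints:
    - port: http                               # hypothetical port name on the Service
      interval: 30s
```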

Here are a few integrations that are documented today:

- Apache Kafka: https://learn.microsoft.com/en-us/azure/azure-monitor/containers/prometheus-kafka-integration
- Elasticsearch: https://learn.microsoft.com/en-us/azure/azure-monitor/containers/prometheus-elasticsearch-integration
- Argo CD: https://learn.microsoft.com/en-us/azure/azure-monitor/containers/prometheus-argo-cd-integration

This list is continually evolving, and more workloads are being added to make it easy for customers to monitor their workloads.

Additionally, if you want to monitor any other workload, you can easily reuse any OSS pod or service monitor and configure Azure Managed Prometheus to scrape its Prometheus metrics. Since the new support allows for seamless integration with OSS resources, the only change you need is to update the apiVersion.
Read more about this here: Create Pod and Service Monitors
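Concretely, migrating an existing OSS monitor is typically a one-line change to the apiVersion; the rest of the manifest stays as-is:

```yaml
# OSS Prometheus Operator:
#   apiVersion: monitoring.coreos.com/v1
# Azure Managed Prometheus:
apiVersion: azmonitoring.coreos.com/v1
kind: ServiceMonitor
# ...metadata and spec unchanged from the OSS resource...
```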

You can also see all the exciting new features Microsoft is shipping for Azure Managed Prometheus and subscribe to stay informed.
