I replaced several Kubernetes components with Grafana Agent Flow. Here’s how.

Štěpán Vraný
Sep 15, 2023


Lately, I’ve been thinking that maybe I don’t need to fuss with separate tools for metrics and logs. To give you some context, I had Grafana Agent handling my metrics and Promtail taking care of the logs.

I was poking around in the documentation, and guess what I stumbled upon? There’s this new mode for Grafana Agent called “flow.” It’s basically just like the regular Grafana Agent, but it has a way more user-friendly configuration language. Honestly, let’s not beat around the bush here — dealing with Prometheus YAML configuration can be a real headache, and setting it up has to be one of the least fun things I’ve ever done.

They’ve got this new config format they’re calling “river,” and it’s got a vibe that’s kinda like the HCL config language, which you might remember from Terraform — you know, the one that’s famous for being… well, you know.

Flow configuration is all about components and reusability. Take pod discovery, for instance: you tackle it once, and then you're good to roll with it across different parts of your setup. I was pretty stoked about it, so I went ahead and decided to give the Grafana Agent config a complete makeover in that River format. You know, just felt like the right move.
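To make the reusability point a bit more concrete before we dive in, here's a minimal sketch of one pod discovery component being referenced from two different pipelines. Treat it as an illustration only: the remote write endpoint is the one from my setup, and the rest of the post fleshes this pattern out properly.

prometheus.remote_write "default" {
  endpoint {
    url = "http://prometheus.local:9090/api/v1/write"
  }
}

// discover pods once...
discovery.kubernetes "pods" {
  role = "pod"
}

// ...then reuse the same output from a metrics pipeline...
prometheus.scrape "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

// ...and from a relabeling step that a logging pipeline can build on
discovery.relabel "logs" {
  targets = discovery.kubernetes.pods.targets

  rule {
    source_labels = ["__meta_kubernetes_namespace"]
    target_label  = "namespace"
  }
}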

Metrics

Let's start with the collection of Prometheus metrics. Later on, we'll also explore some useful components Grafana Agent comes with. Below is the config for some system metrics: the usual suspects for keeping tabs on CPU, memory, and all that.

We're doing some relabeling here because these metrics aren't scraped directly; the requests are routed through the Kubernetes API server proxy instead, which is what the __address__ and __metrics_path__ rewrites take care of. It's pretty much in line with what you'd do in a regular Prometheus config, no big surprises.

But here's where it gets juicy: the forward_to attribute is the real deal. In our setup we're just sending the transformed data to a single Prometheus instance, but you've got the freedom to ship metrics to a different Prometheus setup or workspace if you fancy it. I'll show a quick sketch of that right after the config.

prometheus.remote_write "default" {
  endpoint {
    url = "http://prometheus.local:9090/api/v1/write"
  }
}

logging {
  level  = "info"
  format = "logfmt"
}

discovery.kubernetes "nodes" {
  role = "node"
}

// cadvisor
prometheus.scrape "cadvisor" {
  scheme = "https"
  tls_config {
    server_name          = "kubernetes"
    ca_file              = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
    insecure_skip_verify = false
  }
  bearer_token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token"
  targets           = discovery.relabel.metrics_cadvisor.output
  scrape_interval   = "60s"
  forward_to        = [prometheus.remote_write.default.receiver]
}

discovery.relabel "metrics_cadvisor" {
  targets = discovery.kubernetes.nodes.targets

  rule {
    action       = "replace"
    target_label = "__address__"
    replacement  = "kubernetes.default.svc.cluster.local:443"
  }

  rule {
    source_labels = ["__meta_kubernetes_node_name"]
    regex         = "(.+)"
    action        = "replace"
    replacement   = "/api/v1/nodes/$${1}/proxy/metrics/cadvisor"
    target_label  = "__metrics_path__"
  }
}

// kubelet
discovery.relabel "metrics_kubelet" {
  targets = discovery.kubernetes.nodes.targets

  rule {
    action       = "replace"
    target_label = "__address__"
    replacement  = "kubernetes.default.svc.cluster.local:443"
  }

  rule {
    source_labels = ["__meta_kubernetes_node_name"]
    regex         = "(.+)"
    action        = "replace"
    replacement   = "/api/v1/nodes/$${1}/proxy/metrics"
    target_label  = "__metrics_path__"
  }
}

prometheus.scrape "kubelet" {
  scheme = "https"
  tls_config {
    server_name          = "kubernetes"
    ca_file              = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
    insecure_skip_verify = false
  }
  bearer_token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token"
  targets           = discovery.relabel.metrics_kubelet.output
  scrape_interval   = "60s"
  forward_to        = [prometheus.remote_write.default.receiver]
}
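As promised, a quick word on shipping to more than one destination. Since forward_to takes a list of receivers, all it takes is a second prometheus.remote_write component and one more entry in the list. A minimal sketch, with a made-up hostname for the secondary instance:

prometheus.remote_write "secondary" {
  endpoint {
    url = "http://prometheus-secondary.local:9090/api/v1/write"
  }
}

// then, in the scrape components above:
// forward_to = [
//   prometheus.remote_write.default.receiver,
//   prometheus.remote_write.secondary.receiver,
// ]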

How about Pods' metrics?

So, you've got a couple of ways to play this. You can roll with the whole Kubernetes pod discovery thing, or you can take the shortcut and set up ServiceMonitor discovery right here in the Grafana Agent. Guess which path I took? Yep, you got it! (I'll sketch the other route at the end of this section, just for comparison.)

// servicemonitor
prometheus.operator.servicemonitors "services" {
  forward_to = [prometheus.remote_write.default.receiver]
}

With this component in place I can just create a ServiceMonitor resource and that's it.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rabbitmq
spec:
  endpoints:
    - port: prometheus
      scheme: http
      path: /metrics
      interval: 15s
      scrapeTimeout: 14s
    - port: prometheus
      scheme: http
      path: /metrics/detailed
      params:
        family:
          - queue_coarse_metrics
          - queue_metrics
      interval: 15s
      scrapeTimeout: 14s
  selector:
    matchLabels:
      app.kubernetes.io/component: rabbitmq

I gotta confess, in the past, servicemonitors didn’t really float my boat. But hey, check this out — they’re actually kinda cool, and in this specific scenario, they don’t make things any more complicated. Plus, they make the scraping super clear-cut and on point. 🚀😉
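For comparison, the pod discovery route I skipped would look roughly like the sketch below. It assumes the usual prometheus.io/scrape annotation convention on pods; in practice you'd also add relabel rules for the port and path annotations.

discovery.kubernetes "pods" {
  role = "pod"
}

discovery.relabel "metrics_pods" {
  targets = discovery.kubernetes.pods.targets

  // keep only pods that opt in via the prometheus.io/scrape annotation
  rule {
    source_labels = ["__meta_kubernetes_pod_annotation_prometheus_io_scrape"]
    action        = "keep"
    regex         = "true"
  }
}

prometheus.scrape "pods" {
  targets         = discovery.relabel.metrics_pods.output
  scrape_interval = "60s"
  forward_to      = [prometheus.remote_write.default.receiver]
}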

Logging

Grafana Agent can also do some log scraping, just like Promtail. To pull this off, you'll need to mount the host's /var/log into the Grafana Agent pods, but don't sweat it, that's covered in the Helm values when we put everything together below.

Like I mentioned earlier, discovery follows the same pattern as it did for metrics: discover the pods once, then relabel them for the log pipeline. No need to reinvent the wheel here! So let's just do it.


loki.write "local" {
endpoint {
url = "http://loki.loki:3100/loki/api/v1/push"
}
}

discovery.kubernetes "pods" {
role = "pod"
}

discovery.relabel "logs" {
targets = discovery.kubernetes.pods.targets

rule {
source_labels = ["__meta_kubernetes_namespace"]
target_label = "namespace"
}

rule {
source_labels = ["__meta_kubernetes_pod_container_name"]
target_label = "container"
}

rule {
source_labels = ["__meta_kubernetes_pod_name"]
target_label = "pod"
}

rule {
source_labels = ["__meta_kubernetes_pod_node_name"]
action = "keep"
regex = env("HOSTNAME")
}

rule {
source_labels = ["__meta_kubernetes_namespace"]
action = "drop"
regex = "grafana-agent"
}

rule {
source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
target_label = "__path__"
separator = "/"
replacement = "/var/log/pods/*$1/*.log"
}
}

local.file_match "logs" {
path_targets = discovery.relabel.logs.output
}

loki.source.file "pods" {
targets = local.file_match.logs.targets
forward_to = [loki.process.logs.receiver]
}

loki.process "logs" {
stage.cri {}
forward_to = [loki.write.local.receiver]
}

What did we do in this snippet? discovery.kubernetes.pods gets the list of pods; discovery.relabel.logs keeps only the pods running on the local node (dropping the grafana-agent namespace), attaches the namespace, container, and pod labels, and builds a glob pattern pointing at each pod's log directory under /var/log/pods. Then local.file_match.logs resolves that pattern into concrete file names, which are scraped by loki.source.file.pods. Finally, we parse the CRI log format within the loki.process.logs component.
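And loki.process isn't limited to the CRI stage; you can chain additional stages if you want to massage the lines before they reach Loki. As an illustration of the pattern (the stage.drop block and its expression are just my reading of the docs, so verify them against your agent version), the loki.process.logs block above could become:

loki.process "logs" {
  stage.cri {}

  // drop kube-probe health check noise before shipping to Loki
  stage.drop {
    expression = ".*kube-probe/.*"
  }

  forward_to = [loki.write.local.receiver]
}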

Kubernetes events

Last but not least, I've also swapped out the event exporter. Kubernetes events come in handy when you're trying to piece together what happened in a namespace or the cluster a while back, and Grafana Agent can collect them too, so there's no need for a separate component.

loki.source.kubernetes_events "events" {
  log_format = "json"
  forward_to = [loki.write.local.receiver]
}

This specific configuration fetches events from all namespaces, but you can also filter for the namespace you’re interested in. Be sure to check out the documentation for more details.
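For instance, limiting the component to a single namespace should look something like the snippet below. The namespaces argument is how I read the docs, and "rabbitmq" is just an example, so double-check against the version you're running.

loki.source.kubernetes_events "events" {
  namespaces = ["rabbitmq"]
  log_format = "json"
  forward_to = [loki.write.local.receiver]
}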


Putting all the stuff together

I've gone through all the steps using the official Helm chart, so now I'm going to share the values with you. With these, you should be able to set up basic metrics and logs collection pretty quickly. Just note that I've left the remote_write and Loki URLs empty here, so fill in your own endpoints.

agent:
  mounts:
    varlog: true

  mode: 'flow'
  configMap:
    create: true
    content: |
      prometheus.remote_write "default" {
        endpoint {
          url = ""
        }
      }

      logging {
        level  = "info"
        format = "logfmt"
      }

      // discovery rules
      discovery.kubernetes "pods" {
        role = "pod"
      }

      discovery.kubernetes "services" {
        role = "service"
      }

      discovery.kubernetes "endpoints" {
        role = "endpoints"
      }

      discovery.kubernetes "endpointslices" {
        role = "endpointslice"
      }

      discovery.kubernetes "ingresses" {
        role = "ingress"
      }

      discovery.kubernetes "nodes" {
        role = "node"
      }

      // servicemonitor
      prometheus.operator.servicemonitors "services" {
        forward_to = [prometheus.remote_write.default.receiver]
      }

      // cadvisor
      prometheus.scrape "cadvisor" {
        scheme = "https"
        tls_config {
          server_name          = "kubernetes"
          ca_file              = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
          insecure_skip_verify = false
        }
        bearer_token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token"
        targets           = discovery.relabel.metrics_cadvisor.output
        scrape_interval   = "60s"
        forward_to        = [prometheus.remote_write.default.receiver]
      }

      discovery.relabel "metrics_cadvisor" {
        targets = discovery.kubernetes.nodes.targets

        rule {
          action       = "replace"
          target_label = "__address__"
          replacement  = "kubernetes.default.svc.cluster.local:443"
        }

        rule {
          source_labels = ["__meta_kubernetes_node_name"]
          regex         = "(.+)"
          action        = "replace"
          replacement   = "/api/v1/nodes/${1}/proxy/metrics/cadvisor"
          target_label  = "__metrics_path__"
        }
      }

      // kubelet
      discovery.relabel "metrics_kubelet" {
        targets = discovery.kubernetes.nodes.targets

        rule {
          action       = "replace"
          target_label = "__address__"
          replacement  = "kubernetes.default.svc.cluster.local:443"
        }

        rule {
          source_labels = ["__meta_kubernetes_node_name"]
          regex         = "(.+)"
          action        = "replace"
          replacement   = "/api/v1/nodes/${1}/proxy/metrics"
          target_label  = "__metrics_path__"
        }
      }

      prometheus.scrape "kubelet" {
        scheme = "https"
        tls_config {
          server_name          = "kubernetes"
          ca_file              = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
          insecure_skip_verify = false
        }
        bearer_token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token"
        targets           = discovery.relabel.metrics_kubelet.output
        scrape_interval   = "60s"
        forward_to        = [prometheus.remote_write.default.receiver]
      }

      // logging
      discovery.relabel "logs" {
        targets = discovery.kubernetes.pods.targets

        rule {
          source_labels = ["__meta_kubernetes_namespace"]
          target_label  = "namespace"
        }

        rule {
          source_labels = ["__meta_kubernetes_pod_container_name"]
          target_label  = "container"
        }

        rule {
          source_labels = ["__meta_kubernetes_pod_name"]
          target_label  = "pod"
        }

        rule {
          source_labels = ["__meta_kubernetes_pod_node_name"]
          action        = "keep"
          regex         = env("HOSTNAME")
        }

        rule {
          source_labels = ["__meta_kubernetes_namespace"]
          action        = "drop"
          regex         = "grafana-agent"
        }

        rule {
          source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
          target_label  = "__path__"
          separator     = "/"
          replacement   = "/var/log/pods/*$1/*.log"
        }
      }

      local.file_match "logs" {
        path_targets = discovery.relabel.logs.output
      }

      loki.source.file "pods" {
        targets    = local.file_match.logs.targets
        forward_to = [loki.process.logs.receiver]
      }

      loki.process "logs" {
        stage.cri {}
        forward_to = [loki.write.local.receiver]
      }

      loki.source.kubernetes_events "events" {
        log_format = "json"
        forward_to = [loki.write.local.receiver]
      }

      loki.write "local" {
        endpoint {
          url = ""
        }
      }

What's next? You know the drill I guess.

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install grafana-agent grafana/grafana-agent -n grafana-agent --create-namespace --version 0.24.0 --values values.yaml

And that's it.

kubectl get pods -n grafana-agent
NAME                  READY   STATUS    RESTARTS   AGE
grafana-agent-2bksd   2/2     Running   0          10h
grafana-agent-2mzts   2/2     Running   0          10h
grafana-agent-gbkfn   2/2     Running   0          10h

Wrap

At the end of the day, it turned out to be pretty straightforward. The only drawback was the lack of examples in the Grafana documentation. I actually found my solution through GitHub search. I searched for the “.river” extension, and that’s where I stumbled upon some valuable inspiration.

As a result, I was able to eliminate Promtail and Event-exporter. Another bonus is having a much more readable configuration.

So, if you’re exploring tools for metrics and logs collection, I highly recommend taking a closer look at Grafana Agent Flow. It’s incredibly powerful.
