Autoscaling with Keda and Prometheus Using Custom Metrics in Go

Emirhan Doğandemir
VakıfBank Teknoloji
5 min read · Jul 22, 2024

Goals

  • Demonstrate how to create custom Prometheus metrics in a Go application.
  • Provide steps to deploy the containerized application on Kubernetes.
  • Illustrate how to configure Prometheus to scrape the custom metrics.
  • Describe the process of integrating Keda with Prometheus for autoscaling.
  • Create a scenario to scale pods based on the number of requests using Keda.

Requirements

  1. Docker
  2. Go
  3. Prometheus
  4. Keda
  5. Kubernetes
  6. Helm

What is Prometheus?

Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. It is now a standalone open-source project maintained independently of any company. Prometheus scrapes and stores metrics as time series data: each series is identified by a metric name and optional key-value pairs called labels, and each sample carries a value recorded with a timestamp.
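For example, a single sample scraped from the application we will build below is just a metric name, optional labels, and a value recorded at scrape time (the value 6 here is illustrative):

http_requests_total_with_path{url="/product"} 6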

Custom Prometheus Metrics in Golang


package main

import (
    "log"
    "net/http"
    "time"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
    // Counts HTTP requests, labeled by request path.
    HttpRequestCountWithPath = prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "http_requests_total_with_path",
            Help: "Number of HTTP requests by path.",
        },
        []string{"url"},
    )

    // Tracks the response time of HTTP requests, labeled by request path.
    HttpRequestDuration = prometheus.NewHistogramVec(
        prometheus.HistogramOpts{
            Name: "http_request_duration_seconds",
            Help: "Response time of HTTP request.",
        },
        []string{"path"},
    )

    // Counts product orders; this is the metric Keda will scale on later.
    orderBooksCounter = prometheus.NewCounter(
        prometheus.CounterOpts{
            Name: "product_order_total",
            Help: "Total number of product",
        },
    )
)

func init() {
    // Register the custom metrics with the default Prometheus registry.
    prometheus.MustRegister(orderBooksCounter)
    prometheus.MustRegister(HttpRequestCountWithPath)
    prometheus.MustRegister(HttpRequestDuration)
}

func main() {
    http.HandleFunc("/product", orderHandler)
    // Expose all registered metrics for Prometheus to scrape.
    http.Handle("/metrics", promhttp.Handler())

    log.Println("Starting server on :8181")
    log.Fatal(http.ListenAndServe(":8181", nil))
}

func orderHandler(w http.ResponseWriter, r *http.Request) {
    start := time.Now()
    orderBooksCounter.Inc()
    HttpRequestCountWithPath.WithLabelValues(r.URL.Path).Inc()

    w.Write([]byte("Order placed!"))

    duration := time.Since(start).Seconds()
    HttpRequestDuration.WithLabelValues(r.URL.Path).Observe(duration)
}
Dockerfile:

FROM golang:1.22-alpine

WORKDIR /basics

COPY . .

RUN go build -o main .

CMD ["./main"]

The code snippet above, together with the Dockerfile, is sufficient to containerize the application.
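For the go build step in the Dockerfile to succeed, the project also needs Go module files next to main.go. A minimal sketch (the module name is an assumption, and the client_golang version is just an example; run go mod tidy to generate go.sum before building the image):

module goprometheuskeda

go 1.22

require github.com/prometheus/client_golang v1.19.1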

Build, run, and push the Docker container using the following commands:

docker build -t dogandemir51/goprometheuskeda:v2 .

docker run -d -p 8181:8181 --name goprometheuskeda dogandemir51/goprometheuskeda:v2

docker push dogandemir51/goprometheuskeda:v2

After running the application as a container, we can verify that it responds and exposes the custom metrics.
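For example, with the container published on local port 8181 as above, a quick check might look like this:

curl http://localhost:8181/product
curl -s http://localhost:8181/metrics | grep product_order_total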

Now, let’s make this setup usable on Kubernetes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: goprometheus-deployment
  labels:
    app: goprometheus
spec:
  replicas: 3
  selector:
    matchLabels:
      app: goprometheus
  template:
    metadata:
      labels:
        app: goprometheus
    spec:
      containers:
        - name: goprometheus
          image: dogandemir51/goprometheuskeda:v2
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
            requests:
              memory: "64Mi"
              cpu: "250m"

The above manifest defines a deployment with 3 replicas. The selector and labels ensure the correct association of pods, while resource limits and requests help manage resource usage.

apiVersion: v1
kind: Service
metadata:
  name: goprometheus-service
spec:
  selector:
    app: goprometheus
  ports:
    - protocol: TCP
      port: 8181
      targetPort: 8181
  type: ClusterIP

Using the Service created above, we can access our application via port-forwarding and check that the metrics are being produced.
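A quick way to do that, assuming the manifests above are saved as deployment.yaml and service.yaml:

kubectl apply -f deployment.yaml -f service.yaml
kubectl port-forward svc/goprometheus-service 8181:8181
# in a second terminal:
curl -s http://localhost:8181/metrics | grep http_requests_total_with_path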

To enable Prometheus to scrape the metrics produced by our Go application in the Kubernetes cluster, we need to add a scrape job to the Prometheus configuration. Since Prometheus was deployed with Helm, we make this change through the chart's values.yaml: first retrieve the current values from the deployed release, then add an extraScrapeConfigs section, and finally upgrade the chart.

1) Retrieve the current values and open them in an editor:

helm get values prometheus -n prometheus --all > values.yaml

vim values.yaml

/extraScrapeConfigs => to search for the section in the editor

2) Add the scrape job:

extraScrapeConfigs: |
  - job_name: 'goprometheus'
    scrape_interval: 15s
    static_configs:
      - targets: ['goprometheus-service.default.svc.cluster.local:8181']

3) Upgrade the release:

helm upgrade --install prometheus prometheus-community/prometheus -f values.yaml -n prometheus

Verify the changes from inside the Prometheus server pod:

kubectl exec -it <pod-name> -n prometheus -- /bin/sh

vi /etc/config/prometheus.yaml (or /etc/prometheus/prometheus.yaml, depending on the chart)

From the Prometheus dashboard, navigate to Status -> Targets to see our service.

targets page

After sending 6 requests to the /product endpoint, we can see the resulting metric values in Prometheus. Now, let's create a scenario where we scale our pods based on the number of requests during a high-traffic period. This is where the power of Keda and Prometheus comes together.

Using Keda
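If Keda is not installed in the cluster yet, it can be installed with Helm; the commands below follow the official Keda chart (the keda namespace is simply a common choice):

helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace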

The ScaledObject below tells Keda to query Prometheus and scale our deployment when the product_order_total metric exceeds 20:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: goprometheus-scaledobject
  namespace: default
  labels:
    deploymentName: goprometheus-deployment
spec:
  scaleTargetRef:
    name: goprometheus-deployment
  minReplicaCount: 1
  maxReplicaCount: 3
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus-server.prometheus.svc.cluster.local:80
        metricName: product_order_total
        threshold: '20'
        query: sum(product_order_total)

Here, I specified that scaling should occur when the product_order_total metric exceeds 20. Let's apply the above YAML and then check the number of replicas.
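Assuming the manifest above is saved as scaledobject.yaml:

kubectl apply -f scaledobject.yaml
kubectl get scaledobject goprometheus-scaledobject
kubectl get pods -l app=goprometheus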

Next, I generate additional requests.
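A simple way to generate load is to hit the /product endpoint in a loop through the port-forward opened earlier; the request count of 30 is arbitrary. You can also watch the same query Keda uses in the Prometheus UI:

for i in $(seq 1 30); do curl -s http://localhost:8181/product > /dev/null; done

# In the Prometheus UI, run the trigger query from the ScaledObject:
# sum(product_order_total)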

My application, which initially runs with one replica, scales up to three replicas. Let's describe the HPA that Keda created to see the details. The last event shows that the metric exceeded the target, which triggered the scale-up, and confirms that the operation was successful.

k describe hpa keda-hpa-goprometheus-scaledobject
Name:              keda-hpa-goprometheus-scaledobject
Namespace:         default
Labels:            app.kubernetes.io/managed-by=keda-operator
                   app.kubernetes.io/name=keda-hpa-goprometheus-scaledobject
                   app.kubernetes.io/part-of=goprometheus-scaledobject
                   app.kubernetes.io/version=2.14.0
                   deploymentName=goprometheus-deployment
                   scaledobject.keda.sh/name=goprometheus-scaledobject
Annotations:       <none>
CreationTimestamp: Fri, 19 Jul 2024 23:08:00 +0300
Reference:         Deployment/goprometheus-deployment
Metrics:           ( current / target )
  "s0-prometheus" (target average value):  14667m / 20
Min replicas:      1
Max replicas:      3
Deployment pods:   3 current / 3 desired
Conditions:
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     True    ReadyForNewScale    recommended size matches current size
  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from external metric s0-prometheus(&LabelSelector{MatchLabels:map[string]string{scaledobject.keda.sh/name: goprometheus-scaledobject,},MatchExpressions:[]LabelSelectorRequirement{},})
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
Events:
  Type    Reason             Age    From                       Message
  ----    ------             ----   ----                       -------
  Normal  SuccessfulRescale  2m59s  horizontal-pod-autoscaler  New size: 3; reason: external metric s0-prometheus(&LabelSelector{MatchLabels:map[string]string{scaledobject.keda.sh/name: goprometheus-scaledobject,},MatchExpressions:[]LabelSelectorRequirement{},}) above target

Sources I found helpful:

Scaling Pods based on Prometheus Metrics using Keda | by Giulio Soares | Building Inventa | Medium

Unable to get additional scrape configs working with helm chart: prometheus-25.1.0 (app version v2.47.0) : r/PrometheusMonitoring (reddit.com)

Prometheus | KEDA

Thank you for reading until the end. Before you go:

Please remember to clap for this article and follow me! 👏 You can also find me on LinkedIn.
