A Simple Prometheus client for httprouter

Aravindhan · Published in Lynk Miles · Sep 4, 2019

Prometheus is used for monitoring and alerting. It provides the system and application monitoring that is essential in a microservices architecture, and it is part of the Cloud Native Computing Foundation (CNCF). Using Prometheus you can monitor your microservices and optimise them. You can use the official Prometheus client to generate metrics for a service, but it takes some tinkering and a lot of effort to track the API-level metrics you actually want. By default it comes with a lot of metrics like the number of goroutines, threads, process and memory stats etc., but it does not come with metrics for a basic web application.
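
For comparison, here is a minimal sketch of the official client used on its own (the ":8080" port and the plain net/http mux are my own choices for illustration): promhttp.Handler() exposes the default Go runtime and process metrics on /metrics, but nothing about individual API endpoints.

package main

import (
    "log"
    "net/http"

    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
    // Exposes the default registry: go_* and process_* metrics only,
    // with no per-endpoint request metrics.
    http.Handle("/metrics", promhttp.Handler())
    log.Fatal(http.ListenAndServe(":8080", nil))
}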

So I have written a package that collects the basic web server metrics. You can find the project here (github.com/FenixAra/go-prom). It is very simple to use. Since I use httprouter a lot because of its high performance, I started with support for httprouter first and later added support for basic http handlers.

Httprouter

package main

import (
    "log"
    "net/http"

    "github.com/FenixAra/go-prom/prom"
    "github.com/julienschmidt/httprouter"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
    router := httprouter.New()

    // Expose the Prometheus metrics endpoint
    router.GET("/metrics", Metrics(promhttp.Handler()))
    // Tracking httprouter handles
    router.GET("/ping", prom.Track(Ping, "Ping"))

    log.Fatal(http.ListenAndServe(":8080", router))
}

// Metrics adapts a standard http.Handler to an httprouter.Handle
func Metrics(h http.Handler) httprouter.Handle {
    return func(w http.ResponseWriter, r *http.Request, ps httprouter.Params) {
        h.ServeHTTP(w, r)
    }
}

// Ping responds with "pong" and returns the HTTP status code for go-prom to record
func Ping(w http.ResponseWriter, r *http.Request, _ httprouter.Params) int {
    w.Write([]byte("pong"))
    return http.StatusOK
}
  1. Enable metrics tracking by exposing the /metrics endpoint using the promhttp.Handler() handler provided by the Prometheus Go client package.
  2. Track the Ping API using router.GET("/ping", prom.Track(Ping, "Ping")). The prom.Track() function records a histogram of all API responses. It takes two parameters: a prom.Handler and the API name (see the sketch after this list).
  3. The following labels are added to the histogram:
  • status_class — HTTP status class of the API request. Possible values are 2xx, 3xx, 4xx and 5xx.
  • request — Name of the API request that was passed to prom.Track().
  • method — HTTP method of the API request, like GET, POST, PUT, DELETE etc.
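
To see how these labels come together, here is a minimal sketch that tracks one more endpoint. The GetUser handler, the /users/:id route and the ":8080" port are hypothetical and only for illustration; the sketch assumes the prom.Track() signature shown above, where the wrapped handler returns the HTTP status code that go-prom maps to the status_class label.

package main

import (
    "log"
    "net/http"

    "github.com/FenixAra/go-prom/prom"
    "github.com/julienschmidt/httprouter"
)

// GetUser is a hypothetical handler. It returns the HTTP status code so the
// tracker can record it: a 404 is counted under status_class="4xx", a 200
// under status_class="2xx".
func GetUser(w http.ResponseWriter, r *http.Request, ps httprouter.Params) int {
    if ps.ByName("id") != "42" { // pretend only user 42 exists
        http.Error(w, "user not found", http.StatusNotFound)
        return http.StatusNotFound
    }
    w.Write([]byte("user 42"))
    return http.StatusOK
}

func main() {
    router := httprouter.New()
    // request="GetUser" and method="GET" become labels on the histogram.
    router.GET("/users/:id", prom.Track(GetUser, "GetUser"))
    log.Fatal(http.ListenAndServe(":8080", router))
}

With this in place, the Grafana queries below can be filtered or grouped by request, method or status_class.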

Grafana

Grafana is a very good visualization tool that integrates seamlessly with Prometheus. Just add a Prometheus data source and start creating the graphs you need. I generally use the RED observability method for monitoring web servers. We monitor the following metrics:

  • R — Request throughput, Rate of requests
  • E — Rate of errors
  • D — Latency or response time of APIs

R — Request throughput

To track the request throughput across all endpoints, I use the following query in Grafana:

sum(rate(http_response_time_count{job="api-server"}[5m]))

To track the request rate for each API (grouped by the request label):

sum(rate(http_response_time_count{job="api-server"}[5m])) by (request)

E — Error rate

To track the error rate for all endpoints, use the following query:

sum(rate(http_response_time_count{status_class!~"2xx",job="api-server"}[5m]))

To track the error rate for each API:

sum(rate(http_response_time_count{status_class!~"2xx", job="api-server"}[5m])) by (request)

D — API Duration/Latency

To track the latency of all endpoints, use the following query:

sum(rate(http_response_time_sum{job="api-server"}[5m])) / sum(rate(http_response_time_count{job="api-server"}[5m]))

To track the latency for each API:

sum(rate(http_response_time_sum{job="api-server"}[5m])) by (request) / sum(rate(http_response_time_count{job="api-server"}[5m])) by (request)

Originally published at https://www.fenixara.com on September 4, 2019.
