Better Slack alerts from Prometheus

Roman Vynar · The Quiq Blog · Feb 19, 2018

If you are using Prometheus and Alertmanager for monitoring and have the latter send notifications to Slack, you probably do not like the default notification template, which in most cases hides the alert details.

For example, let’s take a look:

As you can see, the default template groups by common labels, and the description is only shown if it is common to all notifications, which is rarely the case. Usually, when you have multiple notifications of different kinds, you just see this:

More alerts, fewer details, unfortunately. How can we improve this?

There is official documentation on customizing Alertmanager's notification templates, with references to Go templating. However, if you are not familiar with it, you may find it hard and boring.

Here is one quick way of improving your alerts.

  1. Set the title and text fields of the slack_configs section of your Alertmanager config to use the custom_title and custom_slack_message template definitions, as follows:
receivers:
- name: slack-channel
  slack_configs:
  - channel: '#monitoring'
    icon_url: https://avatars3.githubusercontent.com/u/3380462
    send_resolved: true
    title: '{{ template "custom_title" . }}'
    text: '{{ template "custom_slack_message" . }}'

Also add the following snippet at the end, assuming Alertmanager can access this file:

templates:
- /alertmanager/notifications.tmpl
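
Before moving on, it is worth validating the resulting config. Assuming you have amtool (it ships with Alertmanager) and your config lives at /alertmanager/alertmanager.yml (the path here is just an assumption, adjust to your setup), something like this should catch syntax errors:

amtool check-config /alertmanager/alertmanager.yml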

2. Create notifications.tmpl:

{{ define "__single_message_title" }}{{ range .Alerts.Firing }}{{ .Labels.alertname }} @ {{ .Annotations.identifier }}{{ end }}{{ range .Alerts.Resolved }}{{ .Labels.alertname }} @ {{ .Annotations.identifier }}{{ end }}{{ end }}

{{ define "custom_title" }}[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ if or (and (eq (len .Alerts.Firing) 1) (eq (len .Alerts.Resolved) 0)) (and (eq (len .Alerts.Firing) 0) (eq (len .Alerts.Resolved) 1)) }}{{ template "__single_message_title" . }}{{ end }}{{ end }}

{{ define "custom_slack_message" }}
{{ if or (and (eq (len .Alerts.Firing) 1) (eq (len .Alerts.Resolved) 0)) (and (eq (len .Alerts.Firing) 0) (eq (len .Alerts.Resolved) 1)) }}
{{ range .Alerts.Firing }}{{ .Annotations.description }}{{ end }}{{ range .Alerts.Resolved }}{{ .Annotations.description }}{{ end }}
{{ else }}
{{ if gt (len .Alerts.Firing) 0 }}
*Alerts Firing:*
{{ range .Alerts.Firing }}- {{ .Annotations.identifier }}: {{ .Annotations.description }}
{{ end }}{{ end }}
{{ if gt (len .Alerts.Resolved) 0 }}
*Alerts Resolved:*
{{ range .Alerts.Resolved }}- {{ .Annotations.identifier }}: {{ .Annotations.description }}
{{ end }}{{ end }}
{{ end }}
{{ end }}
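
For reference, with the instance_down alert from step 3 below, a grouped notification with two firing alerts would render roughly like this (the instance names are made up for illustration):

*Alerts Firing:*
- node01:9100: node exporter job has been down for more than 5 minutes.
- node02:9100: node exporter job has been down for more than 5 minutes.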

3. Ensure each of your Prometheus alerts contains identifier and description annotations, e.g.

- alert: instance_down
  expr: up == 0
  for: 5m
  annotations:
    identifier: '{{ $labels.instance }}'
    description: '{{ $labels.job }} exporter job has been down for more than 5 minutes.'
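
With these annotations, the title of a notification for a single firing alert would look roughly like this (the instance name is made up for illustration):

[FIRING:1] instance_down @ node01:9100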

Done. Restart Alertmanager and your Slack notifications will get better:

As you can see, a single alert shows “alert name @ instance” in the notification title. In the case of multiple alerts, the message body expands with details in the form of a “- identifier: description” list, grouped by Firing/Resolved.
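
By the way, a full restart is not strictly necessary: Alertmanager can reload its configuration on the fly. Assuming the default port, the following should work:

curl -X POST http://localhost:9093/-/reload

or you can send a SIGHUP to the Alertmanager process.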

Also, feel free to modify notifications.tmpl to suit your own format, use different annotations or labels (e.g. the instance label instead of the identifier annotation), and so on.
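
For example, here is an untested sketch of the single-message title that falls back to the instance label whenever the identifier annotation is not set:

{{ define "__single_message_title" }}{{ range .Alerts }}{{ .Labels.alertname }} @ {{ or .Annotations.identifier .Labels.instance }}{{ end }}{{ end }}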

Enjoy!
