Michael Habib
Nov 28, 2018 · 8 min read

InfluxDB is a database specifically designed to capture time series data. Pairing it with Telegraf gives developers the ability to get data into InfluxDB from a fairly large list of sources. Telegraf is a plugin-based agent used for collecting metrics and event data: you simply define your input and output plugins, and Telegraf acts as a bridge between the two.

Getting Started

We will be going through setting up Telegraf, InfluxDB, and Grafana as well as our own server. We will use that server to send data to InfluxDB through Telegraf and display it in a Grafana dashboard. Even though the server will be written in Go, installing the language on your system isn’t required. To follow along, only Docker and Docker Compose need to be installed; instructions for installing each can be found in the official Docker documentation.

All the code can be found and cloned from GitHub.

Creating the Server

Out of the box, Telegraf supports many input and output plugins. InfluxDB will be our output plugin because of its integration with Grafana. We will also use Telegraf’s HTTP input plugin, which sends HTTP requests to endpoints you define. To keep things simple, our server will only have one endpoint that returns the metrics we need. To start off, create a directory and add a file main.go with the code below.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

type SystemStatus uint

const (
	RUNNING SystemStatus = 1
	FAILING SystemStatus = 0
)

// MetricsResponse is the response that will be sent back to Telegraf
type MetricsResponse struct {
	SystemStatus SystemStatus `json:"systemStatus"`
	TaskStatus   SystemStatus `json:"taskStatus"`
	TaskID       int          `json:"taskID"`
	SystemName   string       `json:"systemName"`
}

// MockSystem to query metrics from
type MockSystem struct {
	Name  string `json:"name"`
	Tasks []Task `json:"tasks"`
}

// Task is a unit of work belonging to a MockSystem
type Task struct {
	ID int
}

var mockSystems []MockSystem

func fetchMetrics(w http.ResponseWriter, r *http.Request) {
	metrics := []MetricsResponse{}
	// goes through each mock system and collects whether it
	// and its tasks are running
	for _, mockSystem := range mockSystems {
		systemStatus := getStatus()
		for _, task := range mockSystem.Tasks {
			taskStatus := getStatus()
			metrics = append(metrics, MetricsResponse{
				SystemName:   mockSystem.Name,
				SystemStatus: systemStatus,
				TaskID:       task.ID,
				TaskStatus:   taskStatus,
			})
		}
	}
	// create and send the JSON response
	resp, err := json.Marshal(metrics)
	if err != nil {
		w.WriteHeader(http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	log.Printf("200 GET /internal/metrics")
	w.Write(resp)
}

// getStatus is a simple way to get different statuses
func getStatus() SystemStatus {
	if time.Now().UnixNano()%2 == 0 {
		return RUNNING
	}
	return FAILING
}

func main() {
	http.HandleFunc("/internal/metrics", fetchMetrics)
	log.Fatal("Err:", http.ListenAndServe(":8010", nil))
}

func init() {
	// create 3 mock systems, each with two tasks
	for i := 0; i < 3; i++ {
		mockSystems = append(mockSystems, MockSystem{
			Name: fmt.Sprintf("system-%d", i+1),
			Tasks: []Task{
				{ID: 0},
				{ID: 1},
			},
		})
	}
}

In main we create a handler that listens on the endpoint /internal/metrics on port 8010. When a GET request is received, the handler retrieves the data from the mock systems and sends a JSON response back.

The next step will be to create the Dockerfile for this program. All it will do is build the binary and set the entry point for our server. Alongside main.go, create a Dockerfile with the code below.

# Build binary
FROM golang:alpine AS builder
RUN apk update && apk add git
COPY . $GOPATH/src/app
WORKDIR $GOPATH/src/app
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o server main.go

# Execute binary
FROM alpine:3.7
WORKDIR /root
COPY --from=builder /go/src/app/server .
EXPOSE 8010
CMD ["./server"]

To make sure the server is returning what we expect, let’s run it and send a request. Follow these steps to ensure everything is working as expected.

  1. Build the image: docker build -t server .
  2. Run the container: docker run -d --name server -p 8010:8010 server
  3. In your terminal send a request with: curl http://0.0.0.0:8010/internal/metrics

You should receive a response similar to the one below, but with different status values because of the getStatus() function. To stop and remove the container, run docker stop server && docker rm server.

// Formatted for the post
[
  {
    "systemStatus": 0,
    "taskStatus": 0,
    "taskID": 0,
    "systemName": "system-1"
  },
  {
    "systemStatus": 0,
    "taskStatus": 1,
    "taskID": 1,
    "systemName": "system-1"
  },
  {
    "systemStatus": 0,
    "taskStatus": 1,
    "taskID": 0,
    "systemName": "system-2"
  },
  {
    "systemStatus": 0,
    "taskStatus": 0,
    "taskID": 1,
    "systemName": "system-2"
  },
  {
    "systemStatus": 0,
    "taskStatus": 1,
    "taskID": 0,
    "systemName": "system-3"
  },
  {
    "systemStatus": 0,
    "taskStatus": 1,
    "taskID": 1,
    "systemName": "system-3"
  }
]

Setting up the Telegraf Configuration

Our configuration will have two sections: an input plugin definition and an output plugin definition. Create a file named telegraf.conf with the contents below.

# Configuration for sending metrics to InfluxDB
[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]
  ## The target database for metrics; will be created as needed.
  database = "telegraf"

[[inputs.http]]
  ## One or more URLs from which to read formatted metrics
  interval = "1m"
  urls = [
    "http://server:8010/internal/metrics",
  ]
  ## HTTP method
  method = "GET"
  timeout = "15s"
  data_format = "json"
  name_override = "internal_app_metrics"
  tag_keys = [
    "systemName",
    "taskID",
  ]

In the [[inputs.http]] section we tell Telegraf to send a GET request to http://server:8010/internal/metrics every minute. Telegraf also allows us to define tags for the incoming data. Figuring out how you want your data to be represented will play a big part in deciding which tag keys should be added. The [[outputs.influxdb]] section tells Telegraf where to send the data it gets from the input plugins. In this case it will send the data to influxdb:8086, into a database called telegraf.
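To get a feel for what this produces, with the JSON data format and the tag_keys above, each element of the response array becomes one point written to InfluxDB. In line protocol it looks roughly like the sketch below; the measurement name comes from name_override, systemName and taskID become tags, the remaining numeric values become fields, and Telegraf adds a host tag by default. The exact output can vary slightly by Telegraf version, so treat this as illustrative.

```
internal_app_metrics,host=telegraf,systemName=system-1,taskID=0 systemStatus=1,taskStatus=0 1543363200000000000
```

Promoting systemName and taskID to tags is what later lets Grafana group and filter by them.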

Running the Whole Stack with Docker Compose

To get everything running together we need to define a docker-compose.yml that describes all the parts of the stack. The compose file will need to include a service definition for our server, Telegraf, InfluxDB, and Grafana.

# start: docker-compose up
# tear down: docker-compose down
version: '3'
services:
  server:
    build:
      context: .
    ports:
      - "8010:8010"
  influxdb:
    image: influxdb
    ports:
      - "8083:8083"
      - "8086:8086"
      - "8090:8090"
    # volumes:
    #   - "/path/you/define:/var/lib/influxdb"
  telegraf:
    image: telegraf:1.8
    container_name: telegraf
    links:
      - influxdb
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
    depends_on:
      - server
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    # volumes:
    #   - "/path/you/define:/var/lib/grafana"

To start all the services, run docker-compose up in the same directory as the docker-compose.yml; the output should look like the picture below. Keep in mind that whenever the stack is brought down and back up again, the data and work you have done will not be saved. To persist work done in InfluxDB and Grafana, uncomment the volume definitions and set your own paths to mount.

docker-compose up

To make sure everything is working, we can use Docker to check on our services. In another terminal window, navigate to where the docker-compose.yml file is defined and run docker-compose ps. All the services should have a state of ‘Up’ and none of them should have exited. We can also check that our server is logging what we expect with docker-compose logs server.
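After a minute or two, we can also verify that points are actually landing in InfluxDB. Assuming the InfluxDB 1.x image (whose container ships the influx CLI), something along these lines should print a few rows from our measurement:

```
docker-compose exec influxdb influx -database telegraf \
  -execute 'SELECT * FROM "internal_app_metrics" LIMIT 5'
```

If the query returns nothing, give Telegraf another collection interval or check docker-compose logs telegraf for errors.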

docker-compose ps
docker-compose logs server

Creating a Grafana Dashboard

In order to get onto the Grafana UI we need to figure out where on our machine it’s running. To do that we first need to find out its container ID by running docker ps . With the container ID, running docker inspect <container-id> | grep IP will output something similar to the snippet below. Take note of the value for the field: IPAddress in your terminal.

"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"IPAMConfig": null,
"IPAddress": "172.22.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,

In a web browser, navigate to http://<IP-Address>:3000 . The Grafana login screen should open and request a username and password. By default, the username and password are both admin. Once logged in, add a new data source by selecting ‘Add data source’.

Grafana Home

Fill out the fields to match the picture below and click ‘Save & Test’ to make sure Grafana can connect to our InfluxDB database.

Add datasource

After adding the data source we can move on to adding a dashboard. On the left-hand side, select the + then import. The JSON that we need for this dashboard can be found in the repo. After loading the JSON, set the data source to the InfluxDB source we created earlier and select import. The dashboard should look similar to the one below. The dashboard that we imported contains the system status panel and the task status panel. Both of these panels have a y-axis representing the system/task status and an x-axis of time. The main difference between the two is which tags the data is grouped by.
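If you would rather build a panel like these from scratch, the underlying query is plain InfluxQL against the telegraf database. A system-status panel might use something along the lines of the sketch below; the measurement and tag names come from our telegraf.conf, but the exact queries in the repo’s dashboard JSON may differ.

```
SELECT "systemStatus" FROM "internal_app_metrics"
WHERE $timeFilter
GROUP BY "systemName"
```

$timeFilter is a Grafana template variable that scopes the query to the dashboard’s selected time range; grouping by systemName gives one series per mock system.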

System Status Panel

To edit the panels, click on the bar with the panel name and select edit from the popup.

These panels let us see firsthand how tagging our data can provide the context that gives the values meaning. It would not be very helpful if we only knew that a system was failing but had no idea which one. Being able to build our own responses and provide context with tags is what makes the Telegraf HTTP plugin so valuable. To read more about Telegraf, InfluxDB, and Grafana, check out their docs.

With Docker Compose, cleaning up our whole stack is very simple. In a terminal window, navigate to where the docker-compose.yml file is defined and run docker-compose down. Behind the scenes, this stops and removes all the containers that were brought up by docker-compose up. 🚀


I hope you found this post useful and like always please feel free to comment, ask questions, or suggest a topic I can write about next.

P.S. Follow me on Twitter

Michael Habib

Written by

Software Engineer @ Pixability
