Monitor Icecast and Wowza listeners with dockerized InfluxDB, Grafana and Go
In my freelance work I support a streaming media service provider that offers streaming solutions for radio stations. Our infrastructure mainly consists of Icecast-KH and Wowza media servers. Some of them operate as master servers and more than 10 servers are load-balanced edges.
To provide our customers with statistics about how their channels perform and to get an overview of our infrastructure, we already analyze the log files that Icecast and Wowza produce (using ElasticSearch and Go, but that’s another story). There is one huge drawback to this set-up, though: both Icecast and Wowza write log entries only after a listener’s session ends. That’s totally fine for getting an impression of listeners over some period in the past, but it is anything but real-time.
So we came up with these requirements:
* collect data every few seconds to see the actual situation
* aggregate data for every host and every mount (channel)
* save data to be able to do historical reports
* visualize data in a nice and easy to use dashboard
* run everything as Docker containers
Some months ago InfluxDB caught my attention. InfluxDB is a time-series database specialized in storing data like metrics and time-based events. There are competitors like OpenTSDB and Prometheus, but I like InfluxDB’s easy HTTP API for writing data, its powerful query language, its great performance, the built-in admin UI — and, after all, InfluxDB is written in Go, my preferred language. So I decided to build our new monitoring upon InfluxDB.
Although there are already some InfluxDB Docker images, I came up with my own, which you can find on my Docker Hub.
docker run -d --name=influxdb -v <data-dir>:/data:rw -p 8083:8083 -p 8086:8086 pteich/influxdb
This command mounts a local directory of the host as the data directory inside the container and binds ports 8083 (admin UI) and 8086 (HTTP API) to the host’s network. If the provided data directory is empty, it creates a new InfluxDB admin user named docker and generates a random password. You can look up this password using docker logs influxdb.
If everything’s up and running you can open port 8083 on your host and connect to your database with the just created credentials:
You can use the Query Templates button to create a new database and user. If you are more into command-line interfaces, use the interactive shell that’s part of InfluxDB and also part of my container. Assuming you started InfluxDB with the command above, use this:
docker run -it --link influxdb:influxdb --rm pteich/influxdb /opt/influxdb/influx -host=influxdb
Key concepts of InfluxDB are measurements (in my case: listeners or response), tags (here host and mount with their associated values) and the actual values. Measurements act as containers (comparable to tables in classical databases) that store values and mark them with tags (more accurately: tag keys and tag values).
Visualize Data with Grafana
Grafana is a great tool to create elegant and amazing visualizations and dashboards. In addition to InfluxDB it supports several other data sources like ElasticSearch or Prometheus, which can even be mixed within one dashboard.
I use the official Grafana Docker image for my setup and mount a local host directory to persist Grafana’s settings database:
docker run -d --name grafana -p 3000:3000 -v <data-dir>:/var/lib/grafana grafana/grafana
After opening Grafana on port 3000 in a browser window, it’s now time to add our running InfluxDB as a data source:
Collect data from Icecast
As stated above, it is not possible to gather the desired data from log files. But Icecast offers an admin interface which allows us to request statistics using HTTP calls with basic auth. For my needs I can use /admin/listmounts and /admin/stats. I created a Go service that connects to these endpoints on all our hosts and sends the data to InfluxDB. As a side effect, I can measure how long it takes to connect to each Icecast host and record that as an additional metric. Because InfluxDB’s line protocol for writing data is so easy, I can send all data directly using Go’s built-in HTTP client library.
This endpoint provides an XML document with a listener count per mount that looks somewhat like this:
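For reference, a listmounts response has roughly this shape — the mount names and numbers here are made up for illustration, and element details can vary between Icecast versions:

```xml
<icestats>
  <source mount="/radio1">
    <listeners>42</listeners>
    <Connected>1337</Connected>
    <content-type>audio/mpeg</content-type>
  </source>
  <source mount="/radio2">
    <listeners>7</listeners>
    <content-type>audio/mpeg</content-type>
  </source>
</icestats>
```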
My Go aggregator calls /admin/listmounts every few seconds for all of our hosts simultaneously. It then calculates a total listener count for every host and a total listener count for every mount across all hosts. (Note: this aggregation isn’t strictly necessary at this point, because the same could be achieved using InfluxDB’s continuous queries, but I need these values anyway for other services that have direct access to my data collector.)
The /admin/stats endpoint provides a significantly larger XML document that contains a lot of information for every mount.
<title>Artist - Title</title>
Because the retrieval of this really huge document (above is the excerpt for only one mount) takes some time on hosts with loads of mounts, I only query it every minute. Nevertheless it contains one very interesting piece of information: a timestamp of the stream start for each mount (XML node stream_start). This timestamp helps to detect short disconnects or general problems with stream sources that repeatedly disconnect and reconnect. An added bonus is the metadata this XML contains (title node). I use it to make this data accessible for other services of our infrastructure.
Collect Data from Wowza Media Server
Wowza provides a similar HTTP API to query current listeners. This time the endpoint is /connectioncounts.xml, but unlike Icecast it uses HTTP digest authentication.
The interesting value here is SessionsTotal, which is reported for each stream.
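Roughly, a connectioncounts response nests streams inside vhost, application and instance nodes and looks something like this — names and numbers are made up, and the exact element names can differ between Wowza versions:

```xml
<WowzaStreamingEngine>
  <VHost>
    <Name>_defaultVHost_</Name>
    <Application>
      <Name>live</Name>
      <ApplicationInstance>
        <Name>_definst_</Name>
        <Stream>
          <Name>radio1</Name>
          <SessionsTotal>42</SessionsTotal>
        </Stream>
      </ApplicationInstance>
    </Application>
  </VHost>
</WowzaStreamingEngine>
```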
Enrich the Dashboard with Annotations
One great Grafana feature is annotations. They provide a way to mark specific points in time across all visualizations. I use annotations to show connection errors (every error that occurs in my Go aggregator is sent to InfluxDB), but also events coming from other sources, e.g. other services of our infrastructure that log to ElasticSearch.
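With an InfluxDB data source, such an annotation is defined by a query; assuming the errors are written to a measurement called errors with a message field (illustrative names, not fixed by Grafana), the annotation query could look like this, with $timeFilter being Grafana’s macro for the currently visible time range:

```sql
SELECT message FROM errors WHERE $timeFilter
```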
Over the last month this set-up has worked really well in production and performs great. It has given us a whole new real-time view of our infrastructure and helps us recognize problems before they become serious.