Publishing Photon events to Telegraf and monitoring them with Grafana

Daz Wilkin
12 min read · Jul 17, 2016

Phew! The title may be longer than the post! Let’s get started.

Grafana monitoring data from 3 Photon devices

Summary

I was intrigued by the Photon devices from particle.io and bought a couple. These then sat on my desk while I imagined scenarios that would use them. I’m documenting this, my first end-to-end solution, because it may help others decide upon applications for Photons or provide hints on how to integrate them with other software.

Particle’s hardware, documentation and tools are mostly excellent and there’s a strong developer community. I followed the Particle ‘photoresistor’ tutorial. It shows how to wire a photoresistor to a Photon and send values from the photoresistor to the Particle cloud.

I extended the tutorial to publish the events with a Webhook to a Telegraf agent that records the events in an InfluxDB time-series database. Finally, I use Grafana to query the InfluxDB database and display near real-time graphs of the events.

In this post, I document how to deploy Telegraf, InfluxDB and Grafana as Docker containers on a VM running on Google Cloud Platform. There’s a 60-day free trial for Google Cloud Platform if you’d like to try it too.

1. Publish Values from Photon(s)

Please follow along with the Particle tutorial for configuring a Photon to read values from a photoresistor.

A simple Photon program to read the photoresistor and publish the value as an event called ‘photoresistor’ every 10 seconds (10000ms) is:

int photoresistor = A0; // the photoresistor is read on analog pin A0
int power = A5;         // A5 is driven high to power the sensor circuit
int analogvalue;        // most recent reading
void setup() {
  pinMode(photoresistor, INPUT);
  pinMode(power, OUTPUT);
  digitalWrite(power, HIGH);
}
void loop() {
  analogvalue = analogRead(photoresistor);
  // Publish the reading as a private 'photoresistor' event every 10 seconds
  Particle.publish("photoresistor", String(analogvalue), PRIVATE);
  delay(10000);
}

An even simpler solution, which simply generates random numbers and does not require a photoresistor to be wired to the Photon at all (note that an empty setup() is still required), is:

void setup() {
}
void loop() {
  // Publish a random value in place of a real reading every 10 seconds
  Particle.publish("photoresistor", String(random(1000)), PRIVATE);
  delay(10000);
}

If you use Particle Build, run “Verify” to check your code and then “Flash” the code to one (or more) of your devices.
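If you prefer a terminal to Particle Build, the Particle CLI can compile in the cloud and flash over the air; a minimal sketch, assuming the CLI is installed, you are logged in, and MY-DEVICE is a placeholder for one of your device names:

# Cloud-compile the sketch and flash it over the air to the named device
particle flash MY-DEVICE photoresistor.ino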

You may use the Particle Dashboard to check that the events are being published to the Particle Cloud. You should see events being streamed like this:

Particle Dashboard — Logs
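The Particle CLI can also tail the same event stream from a terminal; a sketch, assuming the CLI is installed and logged in:

# Stream 'photoresistor' events published by your own devices
particle subscribe photoresistor mine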

Optional: ThingSpeak

ThingSpeak™ is an Internet of Things platform that lets you collect and store sensor data in the cloud. It is owned by MathWorks® and enables analysis and visualization of the data using MATLAB®.

You may create an account for free on ThingSpeak and then publish the events from the Photons to ThingSpeak.

Particle’s documentation explains how to configure a Particle Integration and ThingSpeak as part of one of its tutorials: Your first webhook.

Alternatively

If you don’t have a Photon or don’t wish to complete this step, you will need an alternative way to generate these events and simulate the Webhook calls. The simplest way is to use curl (or an equivalent), which I show later in the tutorial. Assuming the Telegraf Webhook plugin is running on MY-VM, the command will be similar to:

curl \
--write-out "%{http_code}" \
--data "event=photoresistor" \
--data "data=825" \
--data "published_at=2016-12-31T23%3A59%3A59Z" \
--data "coreid=123456789abcdef123456789" \
http://MY-VM:1619/particle
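To simulate a device publishing continuously, you can wrap the same call in a loop; a sketch, again assuming the Telegraf Webhook plugin is reachable on MY-VM:

# Post a random 'photoresistor' value every 10 seconds, mimicking a Photon
while true; do
  curl \
    --silent \
    --data "event=photoresistor" \
    --data "data=$((RANDOM % 1000))" \
    --data-urlencode "published_at=$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --data "coreid=123456789abcdef123456789" \
    http://MY-VM:1619/particle
  sleep 10
done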

2. Run Telegraf, InfluxDB and Grafana

Hopefully, you now have one (or more) Photon devices publishing ‘photoresistor’ events to the Particle cloud (and possibly to ThingSpeak). This next step runs a Telegraf monitoring agent, configured as a Webhook server, to receive the events from the Photon(s). Rather than simply discarding the data, Telegraf will store it in InfluxDB, a time-series database. InfluxDB provides a convenient UI to query the event data, but we’re going to use Grafana to visualize the data in near real-time.

Grafana UI — Monitoring “photoresistor”

The Telegraf agent must run on an Internet-accessible host machine in order for Particle to send the events to it. I provide instructions for you to use a virtual machine on Google Cloud Platform (Compute Engine) for this but you may use any machine or another cloud provider as long as the machine has an Internet-accessible endpoint.

Optional: Create a VM on Compute Engine

You will need a Google account (e.g. Gmail) and a project in Google Cloud Platform. I’ll use ‘[[PROJECT-ID]]’ as a placeholder for your project’s ID; please replace it with your actual project ID. Google provides a free trial of Google Cloud Platform.

Ensure that you have enabled the Compute Engine service:

https://console.cloud.google.com/apis/api/compute_component/overview?project=[[PROJECT-ID]]

And that billing is configured:

https://console.cloud.google.com/billing/unbilledinvoice?project=[[PROJECT-ID]]

From a Linux terminal, run the following commands to create a VM and 3 firewall rules (Telegraf, InfluxDB and Grafana), and to SSH into the instance. Replace ‘[[MY-VM]]’ with your preferred (and valid) name for the VM, and replace ‘[[PROJECT-ID]]’ as before. You may use any Compute Engine zone; “us-east1-d” will work.

INSTANCE="[[MY-VM]]"
PROJECT="[[PROJECT-ID]]"
ZONE="us-east1-d"
gcloud compute instances create $INSTANCE \
--custom-cpu=1 \
--custom-memory=2 \
--project=$PROJECT \
--zone=$ZONE \
--image-family=gci-stable \
--image-project=google-containers
gcloud compute firewall-rules create \
--project=$PROJECT \
--allow=tcp:8083,tcp:8086 \
--network=default \
--source-ranges=0.0.0.0/0 \
allow-influxdb
gcloud compute firewall-rules create \
--project=$PROJECT \
--allow=tcp:1619 \
--network=default \
--source-ranges=0.0.0.0/0 \
allow-telegraf
gcloud compute firewall-rules create \
--project=$PROJECT \
--allow=tcp:3000 \
--network=default \
--source-ranges=0.0.0.0/0 \
allow-grafana
gcloud compute ssh $INSTANCE \
--project=$PROJECT

All being well, your terminal should show you logged in to “[[MY-VM]]” and, if you visit the Networking page of the Google Cloud Console for your project, you should see:

https://console.cloud.google.com/networking/firewalls/list?project=[[PROJECT-ID]]
Google Cloud Platform — Firewall Rules

And your VM should display similar to this if you visit the Compute Engine page of the Google Cloud Console for your project. NB I’ve edited the machine name for clarity; you cannot literally use the name [[MY-VM]] for your VM:

https://console.cloud.google.com/compute/instances?project=[[PROJECT-ID]]
Google Cloud Platform — VM Instances
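You can run the same checks from the command line, using the variables set earlier:

# List the new VM and the firewall rules in the project
gcloud compute instances list --project=$PROJECT
gcloud compute firewall-rules list --project=$PROJECT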

Optional: Use port-forwarding to access InfluxDB and Grafana

To avoid having to open additional ports on the firewall in order to access the InfluxDB and Grafana UIs, you may use this form of the ‘gcloud compute ssh’ command to port-forward the InfluxDB (8083, 8086) and Grafana (3000) UIs to your localhost.

gcloud compute ssh $INSTANCE \
--project=$PROJECT \
--ssh-flag="-L 3000:localhost:3000" \
--ssh-flag="-L 8083:localhost:8083" \
--ssh-flag="-L 8086:localhost:8086"

NB Because we have not yet run the Telegraf, InfluxDB and Grafana servers, there are no UIs to access yet. But, once we complete that step, you should be able to access the InfluxDB and Grafana UIs:

http://localhost:8083
http://localhost:3000

Run Telegraf agent configured with Particle Webhook Plugin

The GitHub project for the Telegraf Particle Webhook Plugin is here:

https://github.com/DazWilkin/telegraf-particle-webhook-plugin

If you wish, you may follow the instructions in that repository to download and configure the Plugin for yourself, as sketched below.
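A sketch of what that looks like, assuming the repository contains a Dockerfile that builds the patched Telegraf image:

# Clone the plugin repository and build a local Docker image from it
git clone https://github.com/DazWilkin/telegraf-particle-webhook-plugin
cd telegraf-particle-webhook-plugin
docker build --tag=telegraf-particle-webhook-plugin .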

Optional: Use Google Container Registry

If you decide to build and deploy the Telegraf agent from the GitHub repository and you’re using Google Cloud Platform, you may choose to push the image to Google Container Registry instead of Docker Hub, as this will be faster to pull.

After you have built the plugin on your local machine and called it ‘telegraf-particle-webhook-plugin’, you can tag and then push it to your project’s container registry with these commands:

docker tag \
telegraf-particle-webhook-plugin \
gcr.io/[[PROJECT-ID]]/telegraf-particle-webhook-plugin
gcloud docker push \
gcr.io/[[PROJECT-ID]]/telegraf-particle-webhook-plugin

Alternatively, I’ve pushed a copy to Docker Hub that you may use. This will be the version referenced subsequently. You may replace the ‘dazwilkin/telegraf-particle-webhook-plugin’ references with your gcr.io image.

Optional: Add $USER to the Docker group

If you find that you need to sudo each Docker command, you may wish to use the following commands to add your user ($USER) to the Docker group. Then you will be able to type Docker commands without prefixing with sudo:

sudo usermod -a -G docker ${USER}
exec sudo su ${USER}

Run Containers

While on [[MY-VM]], we will now create a docker network to host our containers, then run the InfluxDB container, then the Telegraf and Grafana containers. The commands to do this are:

NETWORK="influxdata"
docker network create $NETWORK
docker run -d \
--name=influxdb \
--net=$NETWORK \
--publish=8083:8083 \
--publish=8086:8086 \
influxdb
docker run -d \
--name=telegraf \
--net=$NETWORK \
--publish=1619:1619 \
dazwilkin/telegraf-particle-webhook-plugin:ubuntu
docker run -d \
--name=grafana \
--net=$NETWORK \
--publish=3000:3000 \
grafana/grafana

If you run:

docker ps --format="{{.Names}}"

You should see:

telegraf
grafana
influxdb

And, to check that Telegraf and InfluxDB are working and communicating:

docker logs influxdb

You should see:

(InfluxDB ASCII-art banner)
[run] 2016/07/17 17:42:18 InfluxDB starting, version 0.13.0, branch 0.13, commit e57fb88a051ee40fd9277094345fbd47bb4783ce
[run] 2016/07/17 17:42:18 Go version go1.6.2, GOMAXPROCS set to 1
[run] 2016/07/17 17:42:18 Using configuration at: /etc/influxdb/influxdb.conf
[store] 2016/07/17 17:42:18 Using data dir: /var/lib/influxdb/data
[subscriber] 2016/07/17 17:42:18 opened service
[monitor] 2016/07/17 17:42:18 Starting monitor system
[monitor] 2016/07/17 17:42:18 'build' registered for diagnostics monitoring
[monitor] 2016/07/17 17:42:18 'runtime' registered for diagnostics monitoring
[monitor] 2016/07/17 17:42:18 'network' registered for diagnostics monitoring
[monitor] 2016/07/17 17:42:18 'system' registered for diagnostics monitoring
[cluster] 2016/07/17 17:42:18 Starting cluster service
[shard-precreation] 2016/07/17 17:42:18 Starting precreation service with check interval of 10m0s, advance period of 30m0s
[snapshot] 2016/07/17 17:42:18 Starting snapshot service
[copier] 2016/07/17 17:42:18 Starting copier service
[admin] 2016/07/17 17:42:18 Starting admin service
[admin] 2016/07/17 17:42:18 Listening on HTTP: [::]:8083
[continuous_querier] 2016/07/17 17:42:18 Starting continuous query service
[httpd] 2016/07/17 17:42:18 Starting HTTP service
[httpd] 2016/07/17 17:42:18 Authentication enabled: false
[httpd] 2016/07/17 17:42:18 Listening on HTTP: [::]:8086
[retention] 2016/07/17 17:42:18 Starting retention policy enforcement service with check interval of 30m0s
[run] 2016/07/17 17:42:18 Listening for signals

Then try:

docker logs telegraf

You should see:

2016/07/17 17:43:24 Starting Telegraf (version 1.0.0-beta2-22-ge1c3800)
2016/07/17 17:43:24 Loaded outputs: influxdb
2016/07/17 17:43:24 Loaded inputs: cpu diskio kernel mem processes webhooks disk swap system
2016/07/17 17:43:24 Tags enabled: host=5224e206404c
2016/07/17 17:43:24 Agent Config: Interval:10s, Debug:false, Quiet:false, Hostname:"5224e206404c", Flush Interval:10s
2016/07/17 17:43:24 Started the webhooks service on :1619
2016/07/17 17:43:24 Started 'particle' on /particle

NB The last 2 lines here show that the webhook service started (on port 1619) and it is expecting events from Particle on “/particle”.

You may test this now using curl. Assuming you’re on the host where the containers are running:

curl \
--write-out "%{http_code}" \
--data "event=photoresistor" \
--data "data=825" \
--data "published_at=2016-12-31T23%3A59%3A59Z" \
--data "coreid=123456789abcdef123456789" \
http://localhost:1619/particle

If you’re not accessing the containers on the same host, replace ‘localhost’ with ‘[[MY-VM]]’. All being well, curl’s --write-out should print the HTTP status code:

200

Access the UIs

At this point, you should be able to access the InfluxDB (:8083) and Grafana (:3000) UIs. Unfortunately, until we configure the Particle Webhook to send data, there’s not very much for you to see. If you used port-forwarding as explained above, you may use “localhost”. Otherwise, you will need to replace localhost in the following links with [[MY-VM]]:

http://localhost:8083
InfluxDB UI “show measurements”

I’ve selected “telegraf” from the “Database” dropdown in the upper right-hand corner and then typed “show measurements” in the Query box.

Once some data has been sent to the /particle endpoint, a measurement named “particle” should be created in InfluxDB. You can confirm this by requerying:

show measurements

Should return a list that includes ‘particle’:

InfluxDB UI measurements including ‘particle’

and this query should return some data:

select "data" from "particle"

You may also call the InfluxDB query endpoint directly with the following URL:

http://localhost:8086/query?q=select+%22data%22+from+%22particle%22&db=telegraf

And the results should look similar to this:

{
  "results": [{
    "series": [{
      "name": "particle",
      "columns": ["time", "data"],
      "values": [
        ["2016-12-31T23:59:59Z", 999],
        ["2016-12-31T23:59:59Z", 999]
      ]
    }]
  }]
}

NB Remember to do this from within the ‘telegraf’ database. If you refresh the InfluxDB UI, the database will revert to ‘_internal’.
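Another option is the influx command-line client bundled in the InfluxDB container; a sketch, assuming the container is named ‘influxdb’ as above:

# Query the 'particle' measurement from inside the InfluxDB container
docker exec influxdb influx \
  -database telegraf \
  -execute 'SELECT "data" FROM "particle" LIMIT 10'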

http://localhost:3000
Grafana UI — User == Password == ‘admin’

The Grafana default User and Password are both ‘admin’.

I’ve switched Grafana to its light theme to make it easier to read in this post; your screen may look different. You should add the InfluxDB database as a data source to Grafana. In the upper left-hand corner, click the Grafana icon, then “Data Sources”, then “Add data source”. Ensure you select “Default”. The InfluxDB default User and Password are both “root”. After you click “Add”, you should receive a “Success” response and the data source is added.

Grafana UI — Add ‘InfluxDB’ Data Source
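If you prefer to script this step, Grafana also exposes an HTTP API for data sources; a hedged sketch, assuming the default admin/admin credentials and the container names used above:

# Register InfluxDB (database 'telegraf') as the default Grafana data source
curl \
  --user admin:admin \
  --header "Content-Type: application/json" \
  --data '{"name":"influxdb","type":"influxdb","access":"proxy","url":"http://influxdb:8086","database":"telegraf","user":"root","password":"root","isDefault":true}' \
  http://localhost:3000/api/datasources

Note that the URL is http://influxdb:8086 rather than localhost because Grafana proxies the queries across the ‘influxdata’ Docker network.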

Then, to create a dashboard, click the Grafana icon, “Dashboards”, then “New”. There’s a subtle green menu on the left-hand side of the Dashboard screen; click it, then “Add Panel”, then “Add Graph”.

Grafana UI — Add Graph

You will be presented with the following screen to configure a graph for your measurement. You can click each of the boxes in this editor to select from available values. The result should look like this:

Grafana UI — Edit Metric

If you have submitted data, you should see the graph of the data appear immediately.

3. Create an Integration & Monitor the results

The Photon(s) are configured and publishing events to Particle Cloud. The Telegraf Plugin is running, tested and sending data to InfluxDB from where we’re able to monitor the results using Grafana. The last step is to configure Particle to send the published events to the Telegraf Webhook endpoint.

Access the “Integrations” section of Particle Dashboard

https://dashboard.particle.io/user/integrations
Particle Dashboard — Integrations

Click “New Integration”, select “Webhook”, then complete the form, replacing [[MY-VM]] with the hostname or IP address of the host running the Telegraf Plugin:

Particle Dashboard — New Webhook

Click “Create Webhook”. The response should look like this:

Particle Dashboard — View Webhooks
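If you prefer the command line, the Particle CLI can create the same webhook from a JSON definition; a hedged sketch, assuming the CLI is installed and replacing [[MY-VM]] as before:

# Define the webhook: forward 'photoresistor' events from my devices to Telegraf
cat > hook.json <<EOF
{
  "event": "photoresistor",
  "url": "http://[[MY-VM]]:1619/particle",
  "requestType": "POST",
  "mydevices": true
}
EOF
particle webhook create hook.json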

Access the “Logs” section of the Particle Dashboard:

https://dashboard.particle.io/user/logs

You should see events streaming from your Photon(s) and “hook-sent/photoresistor” entries confirming the webhook is firing. Ignore the “undefined” value on these entries; this appears to be a minor bug in the Particle Dashboard.

Particle Dashboard — Logs
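To confirm the events are reaching Telegraf and InfluxDB, a quick count over the InfluxDB query endpoint (use [[MY-VM]] instead of localhost if you are not port-forwarding); the count should grow roughly every 10 seconds:

# Count rows in the 'particle' measurement
curl --silent 'http://localhost:8086/query?db=telegraf&q=SELECT+count(%22data%22)+FROM+%22particle%22'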

Lastly, we can observe the data in Grafana:

Grafana UI

4. Tear-down

You may wish to delete the Particle Integration. Visit the “Integrations” section of the Particle Dashboard, click your Webhook, scroll to the bottom of the page and click “Delete Webhook”:

https://dashboard.particle.io/user/integrations

From the host on which you deployed the Telegraf, InfluxDB and Grafana containers, these commands will stop and remove the containers and delete the network:

NETWORK="influxdata"
for CONTAINER in telegraf influxdb grafana;
do
docker stop $CONTAINER | xargs docker rm
done
docker network rm $NETWORK
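To confirm the containers and the network have been removed:

# Neither list should include telegraf, influxdb, grafana or 'influxdata' any longer
docker ps --all
docker network ls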

If you used Compute Engine to create a VM to host the containers, you may simply delete the project to tidy up. Alternatively, these commands will delete the VM and the firewall rules:

PROJECT="[[PROJECT-ID]]"
INSTANCE="[[MY-VM]]"
gcloud compute instances delete $INSTANCE \
--project=$PROJECT \
--quiet
gcloud compute firewall-rules delete \
allow-telegraf \
allow-grafana \
allow-influxdb \
--project=$PROJECT \
--quiet

Conclusion

In this tutorial, you have:

  1. Flashed an application to Particle Photon device(s) that publishes ‘photoresistor’ events to the cloud.
  2. Configured a Particle Integration (Webhook) to send the published events to a Telegraf agent running on your host.
  3. Deployed Telegraf and configured it to store events in InfluxDB.
  4. Deployed Grafana and configured it to monitor the events stored in InfluxDB.
  5. Monitored the events using the Particle Dashboard, querying InfluxDB and using Grafana.

References

https://cloud.google.com/
https://cloud.google.com/container-registry/
https://cloud.google.com/free-trial/
https://build.particle.io
https://dashboard.particle.io
https://curl.haxx.se/
https://docker.com/
http://grafana.org/
https://influxdata.com/time-series-platform/influxdb/
https://influxdata.com/time-series-platform/telegraf/
https://thingspeak.com
https://www.particle.io/prototype#photon
