Creating your personal IoT/Utility Dashboard using Grafana, InfluxDB & Telegraf on a Raspberry Pi.

Neelabh Kumar
8 min read · Oct 5, 2019


In this article I’m going to explain how to set up a personal IoT & utility dashboard.

Hardware Requirements:

  1. Raspberry Pi 3 B+ or 4
  2. Sense HAT
  3. Monitor

Software Frameworks Used:

  1. Grafana
  2. InfluxDB
  3. Telegraf

Hardware Setup:

Setting up a Raspberry Pi is fairly easy if you follow the official guide, so I won’t go through that in this article. Once you have set up the Pi, you need to connect the Sense HAT to it. The Raspberry Pi Sense HAT attaches on top of the Raspberry Pi via the 40 GPIO pins. Please follow this guide; it shouldn’t take more than 5 minutes.
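
Once the HAT is seated, a quick check from Python will tell you whether it is detected; this minimal snippet just scrolls a message across the LED matrix:

from sense_hat import SenseHat

sense = SenseHat()
sense.show_message("Hi!")  # scrolls across the 8x8 LED matrix if the HAT is wired up correctly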

The Real Stuff:

How you use the dashboard is completely up to you. All the code I’m going to post here deals with the information I want to collect and the way I want it displayed. You will need to change a few things to run it on your machine, and you are of course free to change anything else as you please.

Logging various Metrics from sensors and Open weather API:

I’m using Python 3 to log the room temperature and humidity values from the sensors. I’m also using the OpenWeather API to get the outside weather conditions.

import time
import logging

import requests
import pytemperature
import vcgencmd
from sense_hat import SenseHat

sense = SenseHat()

# Log an epoch timestamp followed by the message, appending to temperature.log.
logging.basicConfig(filename='temperature.log', filemode='a',
                    format='%(created)f %(message)s', level=logging.INFO)

while True:
    # Room temperature and humidity from the Sense HAT sensors.
    t = sense.get_temperature()
    h = sense.get_humidity()
    # CPU temperature, fetched via vcgencmd.
    CPUc = vcgencmd.measure_temp()
    # Outside conditions from the OpenWeather API (fill in your own values).
    r = requests.get('http://api.openweathermap.org/data/2.5/weather?id={your city id}&APPID={your api key}')
    result = r.json()
    outsideTemp = pytemperature.k2c(result["main"]["temp"])  # API returns Kelvin
    outsideHumid = result["main"]["humidity"]
    logging.info('Temp={0:0.1f} C and Humidity={1:0.1f}% and CPU_Temp={2:0.1f} and ot={3:0.1f} and oh={4:0.1f}%'.format(
        t, h, CPUc, outsideTemp, outsideHumid))
    time.sleep(4)

You could use Python to feed the data directly into InfluxDB; however, logging to a file is a much neater way to do it. As you can see in the code, I’m getting the room metrics from the sensors using the SenseHat library. For the outside weather conditions I’m using the OpenWeather API. Please follow the steps below to get your own API key and city ID:

  1. Create a free account on OpenWeather ([account creation](https://home.openweathermap.org/users/sign_up)).
  2. Get your API key under the [API keys tab](https://home.openweathermap.org/api_keys).
  3. Search for your city ID in the [bulk city list](http://bulk.openweathermap.org/sample/).
  4. Replace {your city id} and {your api key} in the code with your city ID and API key.
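
Before wiring everything together, it’s worth sanity-checking the key and city ID with a one-off request. A minimal sketch (the values below are placeholders; 5128581 is New York, for example):

import requests

CITY_ID = "5128581"        # placeholder; replace with your own city ID
API_KEY = "your api key"   # placeholder; replace with your own API key

r = requests.get('http://api.openweathermap.org/data/2.5/weather',
                 params={'id': CITY_ID, 'APPID': API_KEY})
print(r.status_code)     # 200 means the key and city ID are accepted
print(r.json()['main'])  # temperature (Kelvin), humidity, pressure, ...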

You will notice that I’m using vcgencmd to get the CPU temperature; there is a better way to do it, which I’ll mention later in the article.
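
For reference, one common alternative is to read the CPU temperature straight from sysfs, which avoids forking a subprocess. A minimal sketch, assuming the standard Raspberry Pi thermal zone path:

def cpu_temp_c() -> float:
    # The file holds the temperature in millidegrees Celsius.
    with open('/sys/class/thermal/thermal_zone0/temp') as f:
        return int(f.read().strip()) / 1000.0

print(cpu_temp_c())  # e.g. 50.5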

This should log the data to the temperature.log file. Try running the script with python3 and check that the log file is being generated. It will look something like this:

1570130669.852521 Temp=24.8 C and Humidity=64.1% and CPU_Temp=50.5 and ot=11.6 and oh=93.0%
1570130674.022393 Temp=24.9 C and Humidity=63.5% and CPU_Temp=50.5 and ot=11.6 and oh=93.0%
1570130678.148942 Temp=24.8 C and Humidity=64.2% and CPU_Temp=49.9 and ot=11.5 and oh=93.0%
1570130682.303456 Temp=25.0 C and Humidity=64.1% and CPU_Temp=50.5 and ot=11.6 and oh=93.0%

Setting up InfluxDB and Telegraf:

InfluxDB Setup:

In this step I’ll show you how to set up InfluxDB and Telegraf. I’ll also explain a little about each later in this section.

wget -qO- https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/os-release
test $VERSION_ID = "8" && echo "deb https://repos.influxdata.com/debian jessie stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
test $VERSION_ID = "9" && echo "deb https://repos.influxdata.com/debian stretch stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
sudo apt-get update && sudo apt-get install influxdb

Once the installation is done, you need to start the service using

sudo service influxdb start

Please check the status of InfluxDB using

sudo service influxdb status

You should see something like this if everything is working:

● influxdb.service - InfluxDB is an open-source, distributed, time series database
   Loaded: loaded (/lib/systemd/system/influxdb.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-10-03 23:05:15 EDT; 22h ago
     Docs: https://docs.influxdata.com/influxdb/
 Main PID: 15832 (influxd)
    Tasks: 17 (limit: 2077)
   Memory: 110.8M
   CGroup: /system.slice/influxdb.service
           └─15832 /usr/bin/influxd -config /etc/influxdb/influxdb.conf

Telegraf Setup:

Get the latest armhf version for your Pi from here, and fetch it using wget:

wget https://dl.influxdata.com/telegraf/releases/telegraf_1.12.2-1_armhf.deb

Install.

sudo dpkg -i telegraf_1.12.2-1_armhf.deb

At this stage you should start the metric-collection script in the background; it will start logging all the values to the temperature.log file. I have used nohup; you can choose not to.

nohup python3 iotutil.py &
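
You can confirm the script is alive by tailing the log it writes; new lines should appear every few seconds:

tail -f temperature.log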

The idea behind using Telegraf and InfluxDB is to make the data collection and querying seamless.

InfluxDB is a high-performance data store written specifically for time-series data. It allows for high-throughput ingest, compression and real-time querying. As you can see, we are collecting data on the go, and as we move forward you will notice that we query it as it arrives in the DB.

Telegraf makes the job of cleaning and feeding continuous data into InfluxDB seamless. I’m using a grok log parser with Telegraf to extract meaningful data from the logs we just created. Writing grok patterns for the first time can be really tricky; please refer to this pattern matcher if you wish to create your own custom pattern beyond what I have included in the code.
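
To make the parsing step concrete: each matched log line becomes one InfluxDB point. In line protocol it would look roughly like this (a sketch; Telegraf adds the host tag by default, and the parsed timestamp becomes the point’s time):

room_temperature_humidity,host=raspberrypi temperature=24.8,humidity=64.1,CPU_Temp=50.5,ot=11.6,oh=93.0 1570130669852521000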

Telegraf is going to fetch the data from the various inputs and feed it to InfluxDB. You just have to define the inputs and one output; it will also create the database and measurements automatically. For all of this we need to write a conf file: name it iotlog.conf and paste in the content below.

[agent]
  ## Batch size of values that Telegraf sends to output plugins.
  metric_batch_size = 1000
  ## Default data collection interval for inputs.
  interval = "30s"
  ## Added degree of randomness in the collection interval.
  collection_jitter = "5s"
  ## Send output every 5 seconds.
  flush_interval = "5s"
  ## Buffer size for failed writes.
  metric_buffer_limit = 10000
  ## Run in quiet mode, i.e. don't display anything on the console.
  quiet = true

# Ping given URL(s) and return statistics
[[inputs.ping]]
  ## NOTE: this plugin forks the ping command. You may need to set capabilities
  ## via setcap cap_net_raw+p /bin/ping
  ## urls to ping
  urls = ["www.github.com", "www.amazon.com", "1.1.1.1"]
  ## number of pings to send per collection (ping -c <COUNT>)
  count = 3
  ## interval, in s, at which to ping. 0 == default (ping -i <PING_INTERVAL>)
  ping_interval = 15.0
  ## per-ping timeout, in s. 0 == no timeout (ping -W <TIMEOUT>)
  timeout = 10.0
  ## interface to send ping from (ping -I <INTERFACE>)
  interface = "wlan0"

# Gather metrics about network interfaces
[[inputs.net]]
  ## By default, Telegraf gathers stats from any up interface (excluding loopback).
  ## Setting interfaces will tell it to gather these explicit interfaces,
  ## regardless of status. When specifying an interface, glob-style
  ## patterns are also supported.
  interfaces = ["wlan0"]
  ## On Linux systems Telegraf also collects protocol stats.
  ## Setting ignore_protocol_stats to true will skip reporting of protocol metrics.
  # ignore_protocol_stats = false

# Read metrics about CPU usage
[[inputs.cpu]]
  ## Whether to report per-cpu stats or not
  percpu = true
  ## Whether to report total system cpu stats or not
  totalcpu = true
  ## If true, collect raw CPU time metrics.
  collect_cpu_time = true
  ## If true, compute and report the sum of all non-idle CPU states.
  report_active = false

# Parse the temperature log and feed it to the outputs
[[inputs.logparser]]
  ## file(s) to read:
  files = ["/home/pi/grafanaflux/temperature.log"]
  ## Only send these fields to the output plugins.
  fieldpass = ["temperature", "humidity", "timestamp", "ot", "oh", "CPU_Temp"]
  tagexclude = ["path"]
  ## Read the file from beginning on Telegraf startup.
  from_beginning = true
  name_override = "room_temperature_humidity"
  ## For parsing logstash-style "grok" patterns:
  [inputs.logparser.grok]
    patterns = ["%{TEMPERATURE_HUMIDITY_PATTERN}"]
    custom_patterns = '''TEMPERATURE_HUMIDITY_PATTERN %{NUMBER:timestamp:ts-epoch}\ Temp=%{NUMBER:temperature:float} C and Humidity=%{NUMBER:humidity:float}\% and CPU_Temp=%{NUMBER:CPU_Temp:float} and ot=%{NUMBER:ot:float} and oh=%{NUMBER:oh:float}'''
    ## A more permissive alternative:
    ## custom_patterns = '''TEMPERATURE_HUMIDITY_PATTERN %{NUMBER:timestamp:ts-epoch}\ Temp=%{NUMBER:temperature:float} %{GREEDYDATA}=%{NUMBER:humidity:float}%{GREEDYDATA}'''
    timezone = "Local"

[[outputs.influxdb]]
  ## The full HTTP or UDP URL for your InfluxDB instance.
  urls = ["http://127.0.0.1:8086"] # required
  ## The target database for metrics (Telegraf will create it if it does not exist).
  database = "temperature" # required
  ## Name of the existing retention policy to write to. An empty string writes to
  ## the default retention policy.
  retention_policy = ""
  ## Write consistency (clusters only), can be: "any", "one", "quorum", "all"
  write_consistency = "any"
  ## Write timeout (for the InfluxDB client), formatted as a string.
  ## If not provided, will default to 5s. 0s means no timeout (not recommended).
  timeout = "10s"
  # username = "telegraf"
  # password = "metricsmetricsmetricsmetrics"
  ## Set the user agent for HTTP POSTs (can be useful for log differentiation)
  # user_agent = "telegraf"
  ## Set UDP payload size, defaults to InfluxDB UDP Client default (512 bytes)
  # udp_payload = 512

As you can see, I have created a few inputs (cpu, net, ping, logparser) to suit my needs; you can keep or remove them depending on yours. The logparser input is the one fetching data from the logs.

It should be clear by now that InfluxDB is going to create a database and multiple measurements (tables), one for each input. The data we are logging will end up in the room_temperature_humidity measurement; similarly, the cpu input writes to the cpu measurement.

Just to give you an idea, I’m using the cpu input to collect and display system-specific data, and net for network details such as packets received, dropped, etc.

Run Telegraf in the background using the command

telegraf --config iotlog.conf &
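
To confirm that data is actually flowing, you can query InfluxDB from its CLI (installed alongside the server):

influx -database 'temperature' -execute 'SELECT * FROM "room_temperature_humidity" ORDER BY time DESC LIMIT 3'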

Install Grafana:

wget https://dl.grafana.com/oss/release/grafana_6.2.2_armhf.deb
sudo apt-get update
sudo dpkg -i grafana_6.2.2_armhf.deb
sudo apt-get install -f
sudo service grafana-server start

This will start the Grafana server, and you can now access Grafana on the default port 3000. Just open up a browser and go to http://raspberry_pi:3000/ (replacing raspberry_pi with your Pi’s hostname or IP address) and log in with the default username and password, admin/admin. If you are opening it on the Raspberry Pi itself, you can just use http://localhost:3000/.

Once you’ve logged into Grafana, add InfluxDB as the default data source and start creating dashboards.

Set the InfluxDB URL, which will be http://localhost:8086 if you’re running InfluxDB locally, and set the database name to temperature. Leave everything else at its defaults.

Setting up dashboards in Grafana is pretty straightforward. The data is fetched through a query which you can either write yourself or build using the GUI option.

For example, to get the mean room temperature you will do something like this:

Formulating a query
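
If you’d rather type the query than build it in the GUI, a typical InfluxQL query for the mean room temperature looks like the one below; $timeFilter and $__interval are Grafana’s built-in template variables:

SELECT mean("temperature") FROM "room_temperature_humidity" WHERE $timeFilter GROUP BY time($__interval) fill(null)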

My personal dashboard looks something like this:

My current dashboard

If you have any problems setting up Grafana, please refer to the guide.

In case you encounter any technical difficulty, feel free to contact me.
