Integrating Snips with Home Assistant
With the rise of Amazon Echo and Google Home, we are getting increasingly accustomed to controlling our home via voice. But our home is also our most intimate space, and we have to think twice before installing an always-listening, cloud-connected device streaming our data to some distant server.
Cloud-connected devices have already had disastrous consequences for parents in Germany. And in the US, where more and more devices depend on remote servers to function, we’ve seen potentially dangerous situations arise.
At Snips, we are challenging the idea that adding Artificial Intelligence to our everyday life means giving up Privacy. Some things are just not meant to happen online, especially when it involves the things we say. This belief is why we recently launched the Snips Voice Platform, a 100% on-device alternative to Amazon Alexa, allowing anyone to easily add powerful voice assistants to connected devices without compromising their Privacy.
In this article, we will show how to control our home's lights by voice using Snips and Home Assistant, without requiring an Internet connection.
The Snips Voice Platform
The Snips Voice Platform features Hotword Detection, Speech Recognition, Natural Language Understanding and Dialog Management.
Similar to Amazon Alexa or API.ai, Snips takes voice or text as input, and produces intents as output, which are explicit representations of the intention behind an utterance, and which can subsequently be used by the system to perform appropriate actions. But instead of running the ASR and NLU in the cloud, Snips runs entirely on the Raspberry Pi, with blazing performance!
Here is an example of Snips in action:
Setting up Snips and Home Assistant
We will be using the following hardware:
- Raspberry Pi 3 ($39 on Adafruit)
- SanDisk 16GB Micro SDHC ($10 on Amazon)
- A standard USB microphone ($15 on Amazon)
- Philips Hue Starter Kit ($69 on Amazon)
Setting up a Raspberry Pi with Home Assistant is straightforward. For a detailed tutorial, see the Getting Started Guide.
In this post, we will use voice commands to trigger Philips Hue lights, which are well integrated into Home Assistant. We simply need to add an entry for them to configuration.yaml.
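A minimal sketch of such an entry, assuming your Hue bridge sits at 192.168.1.2 (an example address, replace it with your own):

```yaml
# configuration.yaml
light:
  - platform: hue
    host: 192.168.1.2   # example IP of your Hue bridge
```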
Check the Philips Hue Getting Started Guide for help obtaining the IP address of your Hue bridge.
The Snips Voice Platform is equally straightforward to set up:
(pi) $ sudo apt-get update
(pi) $ sudo bash -c 'echo "deb https://raspbian.snips.ai/$(lsb_release -cs) stable main" > /etc/apt/sources.list.d/snips.list'
(pi) $ sudo apt-get install -y dirmngr
(pi) $ sudo apt-key adv --keyserver pgp.mit.edu --recv-keys D4F50CDCA10A2849
(pi) $ sudo apt-get update
(pi) $ sudo apt-get install -y snips-platform-voice snips-watch
Bridging Snips and Home Assistant
Messages between Snips and Home Assistant are passed via MQTT. We must tell Home Assistant which MQTT broker to use by adding an entry to configuration.yaml.
As a convenience, Snips can run an MQTT broker on port 9898. So if we wish to use this broker, the entry will look as follows:
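A sketch of that entry, assuming Snips runs on the same machine as Home Assistant:

```yaml
# configuration.yaml
mqtt:
  broker: 127.0.0.1   # Snips' built-in broker on the same host
  port: 9898
```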
If we wish to use an external MQTT broker, we must change these values accordingly. Furthermore, we must tell Snips not to start its own broker but rather use an external one. This is specified in the launch parameters, as explained in the “Running the assistant” section below.
Next step: creating our Snips Assistant!
Creating a Snips Assistant
We head over to console.snips.ai to build our assistant. We choose Raspberry, English and Snips On-Device ASR.
Now, we could go on and create intents for light queries manually. This involves adding a few sample phrases such as “Turn on the lights” and tagging them. However, Snips already offers intent bundles, which are collections of intents pre-trained on larger datasets and thoroughly tested. For our purposes, we choose the “IoT” intent bundle, which includes intents to control the lights, but also the heating, television, speakers and more.
We then click Save, and launch the re-training of the assistant. Once done, we can download the assistant model.
Installing the assistant on the Pi
An assistantproj_XXX.zip file will be downloaded. Unzip it, and copy the folder to our Pi (replace
<PI_HOSTNAME> with the actual device hostname):
$ scp -r <PATH_TO_ASSISTANT_FOLDER> pi@<PI_HOSTNAME>.local:/home/pi/assistant
Now log in to your Pi, and move the folder to
/usr/share/snips/assistant:
(pi) $ sudo mv /home/pi/assistant /usr/share/snips/assistant
Running the assistant
Make sure a microphone is plugged to the Pi and correctly detected. If you are having trouble setting up audio, we have written a guide on Raspberry Pi Audio Configuration. Snips is restarted as follows:
(pi) $ sudo systemctl restart "snips*"
Snips is now ready to take voice commands from the microphone. To trigger listening, simply say the hotword
“Hey Snips”
followed by a command, e.g.
“Turn the lights green”
We should see the transcribed phrase in the logs, as well as a properly parsed intent. The last step is to have Home Assistant react to these intents.
Optional: specifying an external MQTT broker
As explained earlier, by default Snips runs its own MQTT broker. But we can also tell Snips to use an external broker by specifying this when launching Snips. In this case, edit the file
/etc/snips.toml on your device, uncommenting the
mqtt line in the
[snips-common] section and replacing
<BROKER_HOSTNAME> and <BROKER_PORT> with the appropriate values:
mqtt = "<BROKER_HOSTNAME>:<BROKER_PORT>"
For more details on launch options, check the documentation on Snips Platform Commands.
Adding Home Assistant triggers
As mentioned above, the output of Snips is an intent. For instance,
“Turn the kitchen lights blue”
will output an intent along the following lines (the exact intent and slot names depend on the bundle):
{
  "text": "Turn the kitchen lights blue",
  "intent": {
    "intent_name": "ActivateLightColor"
  },
  "slots": [
    { "slot_name": "objectLocation", "value": "kitchen" },
    { "slot_name": "objectColor", "value": "blue" }
  ]
}
We now have to tell Home Assistant how to handle such intents. This is done via actions, defined in
configuration.yaml, using the Home Assistant Script Syntax. For instance, to handle the above intents and trigger a light color change in a given room, we add the following entry:
snips:
  intents:
    ActivateLightColor:
      action:
        - service: light.turn_on
          data_template:
            entity_id: light.{{ objectLocation | replace(" ","_") }}
            color_name: "{{ objectColor }}"
What is happening here is that the Snips custom component will invoke the script as defined in the
action field, passing the intent slots as parameters. The templating system used by Home Assistant allows us to perform more complex actions involving conditionals, loops, states, and more. For a detailed explanation, see the Templating Guide.
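As a sketch of what such templating allows, the light-color action could fall back to all lights when no room is captured. The intent and slot names below (ActivateLightColor, objectLocation, objectColor) are assumptions that depend on your bundle, and group.all_lights is an example group:

```yaml
snips:
  intents:
    ActivateLightColor:
      action:
        - service: light.turn_on
          data_template:
            # If no room slot was captured, target the example all-lights group
            entity_id: "{{ 'light.' + objectLocation | replace(' ', '_') if objectLocation else 'group.all_lights' }}"
            color_name: "{{ objectColor }}"
```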
That’s it! We now have a voice-controlled Home Assistant setup running entirely on-device! A sample configuration file can be found on GitHub. We invite you to contribute and make it better! Comments and suggestions are welcome, just ping us on Discord!
If you enjoyed this article, it would really help if you hit recommend below :)
If you want to work on AI + Privacy, check our jobs page!