BlackMirror — How I made my own smart mirror powered by Raspberry Pi and AndroidThings (part I)

Przemek
4 min read · Jun 22, 2020


Mirror displaying “welcome” in Polish 🇵🇱

The idea was simple: make a smart mirror controlled by voice that displays useful information like the weather, news, calendar, and date and time. This project is completely open source, so with a little effort you can make your own too! I will show you how by walking through the most important steps. The project only supports the Polish language so far, but don’t worry, I will explain how to configure it to use your own language. And since I’m a huge fan of the TV series Black Mirror, guess what I called it? :)

If you’d like to dive into the code first, here is the GitHub repository for your reference: https://github.com/hypeapps/black-mirror

Hardware

  • Raspberry Pi 3b+
  • A well-deserved 17″ COMPAQ LCD monitor, found in the attic
  • HDMI to DVI-D adapter — the only way to connect such an old monitor to the Raspberry Pi over a digital cable
  • ANBES mini USB microphone — works out of the box with the Raspberry Pi and AndroidThings. There are plenty of them on aliexpress.com, and they cost little more than a dollar!

Hardware configuration

The only thing I needed to configure was the screen resolution. To do this, mount the SD card with your AndroidThings image and set a custom HDMI mode in config.txt:

hdmi_cvt=<width> <height> <framerate> <aspect> <margins> <interlace> <rb>

<width>     width in pixels
<height>    height in pixels
<framerate> frame rate in Hz
<aspect>    aspect ratio: 1=4:3, 2=14:9, 3=16:9, 4=5:4, 5=16:10, 6=15:9
<margins>   0=margins disabled, 1=margins enabled
<interlace> 0=progressive, 1=interlaced
<rb>        0=normal, 1=reduced blanking

It is worth noting that my monitor uses a non-standard resolution, so you probably won’t have to set this at all; it should work out of the box. My config looks like this:

hdmi_cvt=1360 720 60 1 0 0 0
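For comparison (this example is mine, not from the project), a 17″ LCD with a native 1280×1024 resolution at 60 Hz would use aspect code 4 (5:4) and otherwise default flags:

```
hdmi_cvt=1280 1024 60 4 0 0 0
```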

AndroidThings

The ease and power of Android

AndroidThings is a great platform for a project like this. You can write code the same way as for a normal Android application and run it on a credit-card-sized minicomputer. Pretty cool, and worth trying for every Android developer. I won’t describe how to install and configure it on your device, because I can’t do it better than the comprehensive “get started” section of the documentation (https://developer.android.com/things/get-started) and the self-explanatory installer on the first launch of your device.

Connect to ADB

To push your build to AndroidThings you have to set up the Android Debug Bridge; you can’t just plug in a USB cable. The easiest way to do this is to connect your computer and the Raspberry Pi to the same local Wi-Fi network. You also need the device’s IP address, which is printed on the main screen once it has connected to the LAN. In your computer’s terminal, run:

$ adb connect <device ip address> 

And that’s all. Now you can use Android Studio to push your builds to the Raspberry Pi.

Speech recognition

wake up mirror

Similar to “Alexa” or “OK Google”, our smart mirror should always be listening for the “wake up mirror” key phrase. Once it is detected, the device should recognize the spoken command and use Text-To-Speech to read out the results.

For the speech recognition service I decided to use the Google Cloud Platform solution, Google Cloud Speech, but this idea comes with one major problem: it costs money. Not very much, but continuously listening for our key phrase would make the bill mount up quickly. After some digging I found an open source project called PocketSphinx (https://cmusphinx.github.io) and a simple demo Android app: https://github.com/cmusphinx/pocketsphinx-android-demo
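The always-listening flow above can be sketched as a tiny state machine that switches between passive wake-phrase detection and active command recognition. This is only an illustration of the idea; all names here are mine, not from the project:

```java
// Sketch of the wake-phrase interaction flow: stay passive until the
// key phrase arrives, then treat the next utterance as a command.
enum MirrorState { WAITING_FOR_WAKE_PHRASE, LISTENING_FOR_COMMAND }

class InteractionFlow {
    static final String WAKE_PHRASE = "wake up mirror";
    private MirrorState state = MirrorState.WAITING_FOR_WAKE_PHRASE;

    /** Feed every recognized utterance in; returns a command to execute, or null. */
    String onSpeech(String text) {
        switch (state) {
            case WAITING_FOR_WAKE_PHRASE:
                if (text != null && text.contains(WAKE_PHRASE)) {
                    // Wake phrase heard: switch to command recognition.
                    state = MirrorState.LISTENING_FOR_COMMAND;
                }
                return null;
            case LISTENING_FOR_COMMAND:
                // Hand the command off (e.g. to Google Cloud Speech handling
                // and Text-To-Speech), then go back to passive listening.
                state = MirrorState.WAITING_FOR_WAKE_PHRASE;
                return text;
        }
        return null;
    }
}
```

In the real app the two states correspond to two different recognizers: PocketSphinx for the wake phrase and a cloud recognizer for the command.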

PocketSphinx — the open source way

This solution comes with a very convenient API for our use case. It even supports key phrase detection, so you don’t have to implement the detection logic yourself. You only have to configure SpeechRecognizerSetup:

recognizer = SpeechRecognizerSetup.defaultSetup()
        .setAcousticModel(new File(assetsDir, "en-us-ptm"))
        .setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
        .getRecognizer();

And decide what your key phrase is:

private static final String ACTIVATION_KEYPHRASE = "wake up mirror";
private static final String WAKEUP_ACTION = "wakeup_mirror_action";
recognizer.addKeyphraseSearch(WAKEUP_ACTION, ACTIVATION_KEYPHRASE);

To get notified when your key phrase is detected, you only have to implement RecognitionListener and register it:

recognizer.addListener(this);

The recognizer reports its output through two events with self-explanatory names:

public void onPartialResult(Hypothesis hypothesis);
public void onResult(Hypothesis hypothesis);

Hypothesis contains our speech-to-text prediction:

String text = hypothesis.getHypstr();
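Partial hypotheses tend to be noisy, so it helps to match the key phrase defensively rather than comparing strings exactly. A minimal sketch of such a matcher (the class name is mine, not from PocketSphinx or the repo):

```java
// Hypothetical helper: decides whether a recognizer hypothesis contains
// the key phrase, tolerating case differences and extra whitespace in
// the decoded text.
class KeyphraseMatcher {
    private final String keyphrase;

    KeyphraseMatcher(String keyphrase) {
        this.keyphrase = normalize(keyphrase);
    }

    /** True if the (possibly noisy) hypothesis contains the key phrase. */
    boolean matches(String hypstr) {
        return hypstr != null && normalize(hypstr).contains(keyphrase);
    }

    // Lowercase and collapse runs of whitespace to single spaces.
    private static String normalize(String s) {
        return s.trim().toLowerCase().replaceAll("\\s+", " ");
    }
}
```

A listener would typically call something like this from onPartialResult with hypothesis.getHypstr(), and restart the keyphrase search when there is no match.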

Now we have key phrase detection! It’s worth noting that it’s not a perfect solution and sometimes you won’t be recognized properly, but it’s completely free, it works offline, and you can even create your own acoustic model and language dictionary.

I strongly recommend looking at my implementation to see the big picture: https://github.com/hypeapps/black-mirror/blob/master/app/src/main/java/pl/hypeapps/blackmirror/speechrecognition/sphinx/PocketSphinx.java
I omitted the boring implementation details on purpose.

Google Cloud Speech — the cloudy way

Now that our key phrase is detected, we can switch context and try to recognize a command. For this task I needed something more accurate than PocketSphinx, so I decided to use Google Cloud Speech.

To be continued in Part II
