FAB Project: IoT Automated Intercom

Shu Takahashi
Published in Unicorn Digest
Jul 23, 2020

How this Project Began

One of my first world problems is having to get up from my chair and answer the door through an intercom located in another room. It’s a hassle to get up, walk to the intercom, and then walk to the front door to let the person in. With the pandemic limiting my outings, I found myself ordering Uber Eats and other similar takeout services more frequently.

A while back, when I was playing around with the IFTTT website, I discovered that I could make an HTTPS request to control my Switchbot (a mechanical switch controller I bought on Amazon). I wanted to answer my door without going to the living room to press the button on the intercom. I was also looking for a fun and silly project to keep my 3D modelling and coding skills up to date. For these reasons, I decided to IoTify (verb: to turn an everyday item into an IoT device, a term I just came up with) my intercom.

Tools and Materials

I had been using the Switchbot devices to control the oil heater, but since it was now summer, I no longer needed them there. Hence, I decided to reuse them along with my Raspberry Pi, the camera module, and a few other audio components I had lying around.

Other than the materials, I used my trusty UP Mini 3D printer, my computer with Fusion 360, Windows Terminal (the new version, which supports SSH), FileZilla, and a few basic electronics tools.

Components used in this project

Software Side

Overview

There are two main scripts running on the Raspberry Pi. One live-broadcasts a series of images (Motion JPEG) from the camera module pointed at the monitor of the intercom, and the other is responsible for making HTTPS requests and communicating with the app. Both scripts are written in Python.

For the latter script, I used a library from Blynk. Blynk is an IoT platform designed to let internet-connected microcontrollers be controlled from its mobile app. Basically, using this platform’s mobile app, I can control my Python script easily without writing additional code for my phone. I used it previously to make my IoT Bag Manager and I highly recommend it.
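If you want to follow along, the library can be installed on the Pi with pip (I’m assuming Blynk’s blynklib Python package here):

pi@raspberrypi:~ $ pip3 install blynklib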

For the most part, I used SSH to talk to the Pi from my PC, but everything described here can also be done directly on the Raspberry Pi with a monitor and a keyboard.

Live Broadcast Script

Before I open the door, I need a way to verify that the person is actually a delivery person. For this, I will be using the Raspberry Pi camera module, which will live-broadcast the monitor of my intercom through the Raspberry Pi. For this code, I used example code I found in the official PiCamera package documentation.

Before running this code, make sure to enable the camera module. You can do this by following the steps below:

pi@raspberry:~ $ sudo raspi-config
  • Then navigate to Interfacing Options (5)
  • Then enable the Camera (P1)
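Alternatively, the same setting can be applied non-interactively and the Pi rebooted afterwards (a shortcut I believe works on recent Raspberry Pi OS images; the option name may vary between versions):

pi@raspberrypi:~ $ sudo raspi-config nonint do_camera 0
pi@raspberrypi:~ $ sudo reboot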

Create a new file called rpi_camera_surveillance_system.py:

pi@raspberrypi:~ $ nano rpi_camera_surveillance_system.py

Then use the code below:
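The sketch below follows the web streaming recipe from the PiCamera documentation: it captures Motion JPEG from the camera and serves it on port 8000, with a bare HTML page at the root and the raw stream at /stream.mjpg (the 640x480 resolution and 24 fps framerate are just starting values):

import io
import logging
import socketserver
from http import server
from threading import Condition

import picamera

# Minimal HTML page that embeds the MJPEG stream.
PAGE = """\
<html>
<head><title>Intercom camera</title></head>
<body>
<h1>Intercom camera</h1>
<img src="stream.mjpg" width="640" height="480" />
</body>
</html>
"""

class StreamingOutput(object):
    """Buffers the most recent JPEG frame and notifies waiting clients."""
    def __init__(self):
        self.frame = None
        self.buffer = io.BytesIO()
        self.condition = Condition()

    def write(self, buf):
        if buf.startswith(b'\xff\xd8'):
            # Start of a new JPEG frame: publish the previous one.
            self.buffer.truncate()
            with self.condition:
                self.frame = self.buffer.getvalue()
                self.condition.notify_all()
            self.buffer.seek(0)
        return self.buffer.write(buf)

class StreamingHandler(server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/':
            self.send_response(301)
            self.send_header('Location', '/index.html')
            self.end_headers()
        elif self.path == '/index.html':
            content = PAGE.encode('utf-8')
            self.send_response(200)
            self.send_header('Content-Type', 'text/html')
            self.send_header('Content-Length', len(content))
            self.end_headers()
            self.wfile.write(content)
        elif self.path == '/stream.mjpg':
            # Multipart response: each part is one JPEG frame.
            self.send_response(200)
            self.send_header('Age', 0)
            self.send_header('Cache-Control', 'no-cache, private')
            self.send_header('Pragma', 'no-cache')
            self.send_header('Content-Type',
                             'multipart/x-mixed-replace; boundary=FRAME')
            self.end_headers()
            try:
                while True:
                    with output.condition:
                        output.condition.wait()
                        frame = output.frame
                    self.wfile.write(b'--FRAME\r\n')
                    self.send_header('Content-Type', 'image/jpeg')
                    self.send_header('Content-Length', len(frame))
                    self.end_headers()
                    self.wfile.write(frame)
                    self.wfile.write(b'\r\n')
            except Exception as e:
                logging.warning('Removed streaming client %s: %s',
                                self.client_address, str(e))
        else:
            self.send_error(404)
            self.end_headers()

class StreamingServer(socketserver.ThreadingMixIn, server.HTTPServer):
    allow_reuse_address = True
    daemon_threads = True

with picamera.PiCamera(resolution='640x480', framerate=24) as camera:
    output = StreamingOutput()
    camera.start_recording(output, format='mjpeg')
    try:
        address = ('', 8000)
        StreamingServer(address, StreamingHandler).serve_forever()
    finally:
        camera.stop_recording()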

Run this script by typing:

pi@raspberrypi:~ $ python3 rpi_camera_surveillance_system.py

Then to view the live broadcast, open a web browser and type in this address: http://[RPI IP Address]:8000

You can find the Raspberry Pi’s IP address by running:

pi@raspberrypi:~ $ ifconfig

Setting Up the Blynk App

To control the IoT intercom device, I will be using the Blynk app, which is available for both iOS and Android. Sign in (or create a new account) and then create a New Project. Name the project whatever you like, select Raspberry Pi as the device, and set the connection type to WiFi. After you create the project, the Blynk app will give you an AUTH TOKEN. You will need this later, so email it to yourself.

Inside the new project, you can place widgets, which add new functionality to the app. For this project, add the following:

  • Video Stream (to view the live broadcast from rpi_camera_surveillance_system.py)
  • Three Styled Buttons (to control the door or to initiate the door answering sequence)

In the Video Stream settings, set the URL ADDRESS to http://[RPI IP Address]:8000/stream.mjpg. After the widgets are added, the home screen of your project should look like this:

To view the live broadcast through the app, tap the run button at the top right of the app. If the rpi_camera_surveillance_system.py script is running, you should be able to see the live broadcast inside the Video Stream box.

Setting Up the Switchbot

I will be using IFTTT to create HTTPS request links to control the two Switchbots. The ‘trigger’ will be the IFTTT Webhook and the ‘action’ will be the Switchbot. You will need to ‘connect’ Webhooks and Switchbot to IFTTT when you first sign in. I will assume that you have a Switchbot account set up and the devices connected to the internet.

On the IFTTT website, navigate to Create, then click Applets. Click the Add button and select ‘Webhooks’. Choose ‘Receive a web request’, then name the event. For this project, I created two events called open_door and answer_door. After you name the event, you will be directed back to the previous page. Click the second Add button and select ‘Switchbot’. Choose the ‘Bot press’ option, then select your connected Switchbots. Click ‘Create Action’, then ‘Finish’.
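Once the applets are created, each event can be triggered with a plain web request to the Webhooks URL for your key, which is exactly what the Blynk script will do. For example, to fire the answer_door event manually (replace [YOUR KEY] with the key from the Webhooks Documentation page):

pi@raspberrypi:~ $ curl -X POST https://maker.ifttt.com/trigger/answer_door/with/key/[YOUR KEY]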

Blynk Script

This script is responsible for making an audio announcement (to answer the door) and making HTTPS requests to open the door. In the code there is a section which says [BLYNK_AUTH]; this is where the Blynk Auth Token should be pasted. Also, change the [YOUR KEY] on lines 11 and 12 to your IFTTT Webhook key. You can get this key by navigating to https://ifttt.com/maker_webhooks and then to Documentation. The audio files (in mp3 format) used to answer the door should also be placed in the same directory and named accordingly.

In this script, there are basically three defined functions. Each of them is a set of instructions for what to do when one of the Blynk Styled Buttons is pressed. The second function, under line 23, for example, will make an HTTPS request to press the ANSWER button on the intercom, play the recording of myself answering the door, and then finally make another HTTPS request to press the UNLOCK button on the intercom.
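As a rough sketch of how those three handlers could look (this is not the original script: the virtual pins V0 to V2, the file name answer_door.mp3, and the use of the blynklib and requests packages plus omxplayer for playback are assumptions here):

# Rough sketch of the Blynk control script (not the original code).
import os
import time

import blynklib
import requests

BLYNK_AUTH = '[BLYNK_AUTH]'   # paste your Blynk Auth Token here
IFTTT_KEY = '[YOUR KEY]'      # your IFTTT Webhooks key
ANSWER_URL = 'https://maker.ifttt.com/trigger/answer_door/with/key/' + IFTTT_KEY
OPEN_URL = 'https://maker.ifttt.com/trigger/open_door/with/key/' + IFTTT_KEY

blynk = blynklib.Blynk(BLYNK_AUTH)


def play_announcement(filename):
    # Play an mp3 through the 3.5 mm jack (assumes omxplayer is installed).
    os.system('omxplayer -o local {}'.format(filename))


# Styled Button on V0: press the ANSWER button on the intercom.
@blynk.handle_event('write V0')
def answer_handler(pin, value):
    if value[0] == '1':
        requests.post(ANSWER_URL)


# Styled Button on V1: answer, play the recording, then unlock.
@blynk.handle_event('write V1')
def answer_and_open_handler(pin, value):
    if value[0] == '1':
        requests.post(ANSWER_URL)
        time.sleep(2)                     # give the intercom time to pick up
        play_announcement('answer_door.mp3')
        requests.post(OPEN_URL)


# Styled Button on V2: press the UNLOCK button on the intercom.
@blynk.handle_event('write V2')
def open_handler(pin, value):
    if value[0] == '1':
        requests.post(OPEN_URL)


# Keep the connection to the Blynk server alive.
while True:
    blynk.run()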

Run this script by typing:

pi@raspberrypi:~ $ python3 blynk_main.py

Hardware Side

Overview

The hardware side is pretty simple for this project. The Raspberry Pi is connected to the camera module and an external speaker. To hold all the components together, I designed plastic brackets in Fusion 360.

Setting Up the Speaker

I used a 3 W, 50 mm diameter speaker connected to an amplifier. You should be okay without the amplifier, but I used it so I can easily control the volume. I couldn’t find a male audio jack that could be soldered to the amplifier, so I improvised using three adapters I found. Obviously, if you have the appropriate parts on hand, you should solder the audio jack cable directly to the amplifier board.
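One thing worth checking is that the Pi’s audio is actually routed to the 3.5 mm jack rather than HDMI. On Raspberry Pi OS this can usually be forced and then tested with the following commands (the numid value may differ on newer images):

pi@raspberrypi:~ $ amixer cset numid=3 1
pi@raspberrypi:~ $ speaker-test -t wav -c 2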

Brackets to Hold the Parts Together

I designed brackets to hold the speaker and the camera together and point them in the right direction. I designed them to be easily adjustable with three 4 mm machine screws.

You can find the 3D files here: https://grabcad.com/library/iot-intercom-brackets-1

After 3D printing, I had to make a few adjustments. Most notably, the holes for the machine screws were too small so I used my drill press to make them bigger.

Putting It All Together

To attach the 3D printed brackets, with the camera and the speaker mounted on them, I used double-sided sponge tape. I also used tape to hold the cables to the intercom, and attached the two Switchbots to the intercom with more double-sided sponge tape.

Finally, to use the device, run the two scripts on the Raspberry Pi simultaneously. You can use the following command to run a Python file in the background, so that the two scripts can run at the same time.

pi@raspberrypi:~ $ python3 rpi_camera_surveillance_system.py &
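With the camera script running in the background, the Blynk script can then be started from the same session:

pi@raspberrypi:~ $ python3 blynk_main.py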

Conclusion

This project was relatively straightforward but still really fun. I’m now more ready than ever to become a couch potato this summer. But that’s a good thing: I can sit back and work on more projects like this!

The live broadcast has a delay of around eight seconds, and sometimes the video will freeze. I tried playing around with the framerate of the camera, but I didn’t have much luck. Sometimes the delay was so significant that the delivery person had to ring the intercom twice. This is something I need to fix to make the system more practical. If you have any suggestions on how this could be done, please let me know.

In addition to this problem, the Switchbot seems to press the bottom button on the intercom with quite a bit of force, and I could not find a settings page to adjust this. If I want to make this project more practical, I will need to think of ways to resolve this issue as well.

In terms of future improvements, I would like to add a two-way audio feature so that I can do more than just answer deliveries. It would also be nice if I could add a facial recognition system to automate the whole process. However, the problem with this is that the delivery person is different every time, especially for Uber Eats, so I can’t just make a list of people to automatically open the door for.
