Building an Edge TPU powered security camera
Inspired by the teachable machine project, we think that Weight Imprinting is an exciting technique for building smart security cameras. While integrating a dedicated neural-network-based detection model into a system can guarantee high predictive power, it also makes the system very inflexible from a user perspective. Adapting the system to new use-cases requires gathering data, re-training, and re-deploying the model, steps that are not only time-consuming but also require expert knowledge. Using Weight Imprinting, on the other hand, lets the system easily adapt to new use-cases while still ensuring accuracy. To give you an idea, here are a few possible use-cases where such a system could be used:
- Detect if someone enters your apartment.
- Detect if your parking lot is being blocked.
- Detect if your pets are leaving the house.
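At its core, Weight Imprinting is a simple idea: the classifier weight for a new class is derived directly from the embeddings of a few examples, with no gradient-based training. Here is a minimal NumPy sketch of that idea (the function names are ours, not part of any Coral API):

```python
import numpy as np

def imprint_weights(embeddings_by_class):
    """Weight Imprinting sketch: the weight vector for each class is the
    L2-normalized mean of the L2-normalized example embeddings."""
    weights = {}
    for label, embeddings in embeddings_by_class.items():
        unit = [e / np.linalg.norm(e) for e in embeddings]
        mean = np.mean(unit, axis=0)
        weights[label] = mean / np.linalg.norm(mean)
    return weights

def predict(weights, embedding):
    """Classify by cosine similarity: pick the class whose imprinted
    weight vector is closest to the (normalized) query embedding."""
    e = embedding / np.linalg.norm(embedding)
    return max(weights, key=lambda label: float(np.dot(weights[label], e)))
```

Because imprinting only averages embeddings, "training" on a handful of examples takes milliseconds, which is exactly what makes on-device adaptation practical.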
We wanted to build a system which is not only easy to configure but also integrates into an existing ecosystem. That’s why our camera is compatible with Apple HomeKit. Before going into more detail about the project, let’s get a rough overview of the exciting piece of hardware we are using throughout: the Coral Edge TPU.
Coral Edge TPU
The Edge TPU comes in two flavors. You can either buy a self-contained development board or a USB accelerator (we’ll be using the latter) which connects to existing systems like a Raspberry Pi or a PC. The magic behind the Edge TPU is a custom ASIC designed by Google, which enables high-performance inference at a competitive price point ($80). Below is a performance chart. As you can see, the Edge TPU easily outperforms desktop CPUs. Taking the low price point into account, this is a pretty attractive package.
The Edge TPU only supports TensorFlow Lite models. To run your own model, you need to do three things:
- Train your model using TensorFlow. Be aware: during the beta, Coral only supports a few network architectures.
- Convert and quantize the model to the TensorFlow Lite format.
- Finally, compile it using the web-based Edge TPU model compiler.
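For reference, the convert-and-quantize step might look roughly like the following with the TensorFlow 1.x tflite_convert tool. All file names, tensor names, and normalization values below are placeholders you would replace for your own model:

```shell
# Hypothetical invocation: convert a frozen, quantization-aware-trained
# graph into a fully quantized TensorFlow Lite flatbuffer, ready for the
# Edge TPU model compiler.
tflite_convert \
  --graph_def_file=frozen_graph.pb \
  --output_file=model_quant.tflite \
  --inference_type=QUANTIZED_UINT8 \
  --input_arrays=input \
  --output_arrays=MobilenetV1/Predictions/Softmax \
  --mean_values=128 \
  --std_dev_values=128
```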
Having to upload a model to a Google server before being able to use it on the device may be a deal-breaker for some. Hopefully, Google will provide an offline compiler in the future.
Using the Edge TPU
Setting up the Coral on a Raspberry Pi is very simple. Just extract the archive and run the install script:
tar xzf edgetpu_api.tar.gz
cd edgetpu_api
bash ./install.sh
Keep in mind that not all hardware architectures are supported: when we tried to install the package on a Raspberry Pi Zero W, we got a “platform not supported” error. The Python API is at an early stage of development; only the most crucial features are implemented, and the documentation is somewhat scarce.
Luckily, doing Weight Imprinting on the Coral is straightforward! The Python library offers a high level of abstraction, so you only need to write a couple of lines of code. A toy example is outlined below.
# Imprint weights and save the model
from edgetpu.learn.imprinting.engine import ImprintingEngine

train_dict = <YOUR TRAINING DATA>
engine = ImprintingEngine(<YOUR EMBEDDING EXTRACTOR>)
label_map = engine.TrainAll(train_dict)
engine.SaveModel(<OUTPUT MODEL PATH>)

# Now use the saved model on new pictures
from edgetpu.classification.engine import ClassificationEngine

classifier = ClassificationEngine(<OUTPUT MODEL PATH>)
results = classifier.ClassifyWithImage(<YOUR IMAGE>)
The first step is to load an appropriate embedding extractor into the ImprintingEngine. You can download a MobileNetV1-based extractor here. Next, we preprocess our training data by creating a dictionary whose keys are class names and whose values are lists of flattened NumPy arrays of resized images. Imprinting the weights is then just one line of code: engine.TrainAll(). The saved model can be consumed by the ClassificationEngine to make predictions on pictures.
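The preprocessing step can be sketched as follows. The function name and the 224x224 input size are our assumptions, and we assume the images have already been resized (in practice you would resize them with PIL first) so that only NumPy is needed:

```python
import numpy as np

def build_train_dict(examples):
    """Build the {class_name: [flattened uint8 arrays]} dictionary that
    TrainAll expects. `examples` maps class names to lists of already
    resized HxWx3 RGB images (e.g. 224x224 for a MobileNetV1 extractor)."""
    train_dict = {}
    for label, images in examples.items():
        train_dict[label] = [
            # Flatten each image into a 1-D uint8 vector.
            np.asarray(img, dtype=np.uint8).reshape(-1)
            for img in images
        ]
    return train_dict
```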
Right now, one can only create a new classification model based on an embedding extractor. Extending an existing classifier with new classes using Weight Imprinting is not possible.
A Coral powered security camera
Our goal was to come up with a prototype which requires only a few components:
- Raspberry Pi 3 B+
- Raspberry Pi v2.1 CSi Camera
- Coral Edge TPU USB Accelerator
- (Optional) for the enclosure: Makerbeam aluminum profiles and some custom-designed 3D-printed parts
To set up the camera, you only need to clone the repo linked at the end of this article, install the necessary Python packages, and start the application. Enter the device key displayed in the terminal to add the camera to your iOS Home app. Next, open the web GUI and provide some examples of the things it should detect. Also, don’t forget to add some background pictures for which no alarm should be triggered. Finally, click the imprint weights button and the system is ready to go! Below you can see the teaching process in action. In this example, we trained the system to detect if our favorite Pedelec gets taken out of the office.
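The alarm logic behind this flow can be sketched as a small decision function. Everything here is a simplified stand-in for the real application (the `background` label convention, the confidence threshold, and the function name are all our assumptions):

```python
BACKGROUND = "background"  # assumed label for "nothing to report" pictures
THRESHOLD = 0.6            # assumed confidence cutoff

def should_alarm(results, threshold=THRESHOLD):
    """Decide whether to push a HomeKit notification.

    `results` is a list of (label, score) pairs from the classifier,
    best match first. We trigger only when the top prediction is a
    non-background class with sufficient confidence."""
    if not results:
        return False
    label, score = results[0]
    return label != BACKGROUND and score >= threshold
```

In the real camera this function would sit inside the capture loop: grab a frame, classify it, and push a notification whenever the decision flips to True.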
Now let’s check whether the system works as intended. As you can see, the alarm is triggered as soon as the Pedelec leaves the field of view and a message is pushed to our iPhone. If the model does not work as intended: no worries! Collect some additional examples and imprint the weights again.
Final thoughts and practical evaluation
We tested the camera on a couple of other tasks and were pretty impressed by the overall accuracy. The system does a great job of detecting approaching people, open windows, or blocked parking spots.
Weight Imprinting is, of course, not the solution for every machine learning task. But if flexibility and on-device training are vital to you, it is well worth investigating! We’ll publish the source code on GitHub in the coming days. Also, feel free to share your own Edge TPU based projects in the comment section below. 🤓