Running TensorFlow Lite at the Edge with Raspberry Pi, Google Coral, and Docker

Paul Klinker · Published in Star Gazers · 8 min read · Mar 17, 2021

[Image: The Coral USB Accelerator Edge TPU coprocessor]

Like many people, I like to learn by doing, and it is easier than ever to jump in and start experimenting with Machine Learning (ML). TinyML is becoming a popular way to get started with ML, and one of the quickest on-ramps is a Raspberry Pi. In addition to its low cost, the Pi's GPIO pins and camera options make it easy to ingest data from a variety of sensors. The downside, however, is that the Pi is not that fast at ML, as it currently lacks dedicated hardware for ML acceleration.

Google has a family of ML-accelerated hardware called Coral that can help solve this performance problem. Specifically, Coral devices have built-in Tensor Processing Units (TPUs), which greatly accelerate ML inference. One of these devices is a USB stick that plugs into a USB 3.0 port to give a computer a TPU coprocessor. Coral devices run TensorFlow Lite, which is optimized for low-power edge and mobile devices.
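As a sketch of what inference looks like once everything is installed, the snippet below uses Google's `pycoral` library to classify an image on the Edge TPU. The model, label, and image file names here are placeholders; any Edge TPU-compiled `.tflite` classification model will do. This requires the Coral accelerator to be plugged in, so treat it as illustrative rather than something to run as-is:

```python
# File names are hypothetical -- substitute your own Edge TPU-compiled model.
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

# make_interpreter() loads the model and delegates supported ops to the TPU.
interpreter = make_interpreter("mobilenet_v2_edgetpu.tflite")
interpreter.allocate_tensors()

# Resize the image to the model's expected input size and set it as input.
image = Image.open("parrot.jpg").resize(common.input_size(interpreter))
common.set_input(interpreter, image)

# Run inference on the Edge TPU and print the top result.
interpreter.invoke()
labels = read_label_file("imagenet_labels.txt")
for c in classify.get_classes(interpreter, top_k=1):
    print(f"{labels.get(c.id, c.id)}: {c.score:.4f}")
```

On a Pi, `pycoral` and the Edge TPU runtime are normally installed from Google's Coral apt repository, which is exactly the kind of host setup Docker can capture.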

Raspberry Pi attached to the Google Coral TPU over a USB 3.0 cable.
Raspberry Pi and Google Coral — a great combination

Why Docker?

It is fairly easy to plug the Coral TPU into a Raspberry Pi or another computer and get it working directly on the hardware. However, doing so requires installing a lot of software…
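One way to avoid repeating that setup on every machine is to capture it in a container image. The Dockerfile below is a minimal sketch of what such an image might look like, following the package names from Google's Coral apt repository; the copied script and model names are placeholders:

```dockerfile
FROM debian:buster-slim

# Add Google's Coral apt repository, then install the Edge TPU runtime
# and the Python bindings (package names per Coral's setup docs).
RUN apt-get update && apt-get install -y curl gnupg ca-certificates \
 && echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" \
      > /etc/apt/sources.list.d/coral-edgetpu.list \
 && curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
 && apt-get update \
 && apt-get install -y python3 libedgetpu1-std python3-pycoral \
 && rm -rf /var/lib/apt/lists/*

WORKDIR /app
# Hypothetical application files -- substitute your own script and model.
COPY classify.py mobilenet_v2_edgetpu.tflite imagenet_labels.txt ./
CMD ["python3", "classify.py"]
```

At run time the container needs access to the accelerator, which can be granted by passing the USB bus through, for example `docker run --privileged -v /dev/bus/usb:/dev/bus/usb <image>`.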



Paul is a Principal Engineer at ManTech specializing in DevOps and enterprise software development.