Compiling TensorFlow Lite for a Raspberry Pi

Harald Fernengel
3 min read · Feb 26, 2018


The team behind TensorFlow recently released a “Lite” version of their open-source machine learning library. Unfortunately, the documentation only covers iOS and Android, not how to make it work on other embedded devices, like my little Raspberry Pi board. After a few hours of toying with build systems and cross-compiling, I managed to get the label_image example to run on my Raspberry Pi Zero W. Here’s a quick walkthrough.

Note — since I’m using a Mac, the next section is macOS specific. If you’re on Windows or Linux, please follow one of the many existing guides to get a cross-compiler and a corresponding CMake toolchain file, and skip over the next section.

Setting up a cross-compiler on a Mac for a Raspberry Pi

The first thing we need is a cross-compiler and a corresponding toolchain file that tells CMake how to invoke it.

Of the many ways to cross-compile for the Raspberry Pi, I found the walkthrough at https://medium.com/@zw3rk/making-a-raspbian-cross-compilation-sdk-830fe56d75ba the most convenient. It’s quick to set up and requires neither custom disk images nor bootstrapping your own compiler. Note that I used the latest LLVM and binutils, which at the time of writing were LLVM 5.0.1 and binutils 2.30.

Once we have our Raspberry Pi SDK, we need a corresponding CMake toolchain file for cross-compiling. I wrote a quick one here:
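(The toolchain file was originally embedded inline and isn’t reproduced here. A minimal sketch of what such a file might look like, assuming the SDK layout from the guide above — a sysroot plus an LLVM/clang under $RPI_SDK — with all paths being guesses you’ll need to adapt:)

```cmake
# Sketch of a CMake toolchain file for cross-compiling to the Raspberry Pi.
# All paths below are assumptions about the SDK layout; adjust to your setup.
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)

# Clang from the SDK, targeting the Pi's hard-float ABI.
set(CMAKE_C_COMPILER "$ENV{RPI_SDK}/prebuilt/bin/clang")
set(CMAKE_CXX_COMPILER "$ENV{RPI_SDK}/prebuilt/bin/clang++")

set(triple arm-linux-gnueabihf)
set(CMAKE_C_COMPILER_TARGET ${triple})
set(CMAKE_CXX_COMPILER_TARGET ${triple})
set(CMAKE_SYSROOT "$ENV{RPI_SDK}/sysroot")

# Look for headers and libraries only in the sysroot, but for programs on the host.
set(CMAKE_FIND_ROOT_PATH "$ENV{RPI_SDK}/sysroot")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

CMAKE_SYSROOT and CMAKE_C_COMPILER_TARGET are CMake’s standard knobs for clang cross-builds; the prebuilt/bin and sysroot subdirectories are guesses at the SDK layout, not something the original article guarantees.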

Note — if you’ve downloaded an LLVM version other than 5.0.1, adapt the paths in the toolchain file accordingly. Save the file as rpi-toolchain.cmake and set the environment variable RPI_SDK to point to the path where you installed the Raspberry Pi SDK, for example:

export RPI_SDK=$HOME/raspbian-sdk
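A small optional sanity check (my own addition, not from the original setup) catches a missing or mistyped SDK path before CMake produces a confusing error later:

```shell
# Fail early if RPI_SDK is unset or doesn't point at a directory.
if [ -z "${RPI_SDK:-}" ] || [ ! -d "$RPI_SDK" ]; then
    echo "RPI_SDK not set or not a directory" >&2
fi
```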

Compiling TensorFlow Lite

Now comes the fun part: compiling the TensorFlow Lite library and the example. TensorFlow itself uses the Bazel build system, which, while it surely has its reasons to exist, isn’t very widespread yet. So rather than trying to get Bazel to do what I want, I found it easier to add a couple of CMakeLists.txt files to get a cross-build working. The code can be found in my private fork at https://github.com/haraldF/tensorflow/tree/cmake.

First, download the source code:

git clone https://github.com/haraldF/tensorflow.git
cd tensorflow && git checkout -b cmake origin/cmake

In addition to the TensorFlow Lite code itself, some third-party dependencies are needed. To fetch them, there’s a convenience script we can run:

./tensorflow/contrib/lite/download_dependencies.sh

Now, we’re ready to build:

cd tensorflow/contrib/lite
mkdir rpi-build && cd rpi-build
cmake -DCMAKE_TOOLCHAIN_FILE=/path/to/rpi-toolchain \
-DCMAKE_BUILD_TYPE=Release ..
make -j4

Replace /path/to/rpi-toolchain with the path to the CMake toolchain file for your cross-compiler and -j4 with the number of CPU cores your machine has. After a short coffee break, you should have a file called libtensorflow-lite.a, a static library containing the TensorFlow Lite code, and the example at examples/label_image/label_image. Copy the example and the reference bitmap (examples/label_image/testdata/grace_hopper.bmp) to your Raspberry Pi.
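Before copying anything over, it can be worth confirming the binary was actually cross-compiled for ARM rather than for the host. A quick check from inside the rpi-build directory (the path is an assumption based on the build steps above):

```shell
# Verify the example binary targets ARM before copying it to the Pi.
BIN=examples/label_image/label_image
if [ -e "$BIN" ]; then
    file "$BIN"    # should report something like "ELF 32-bit LSB executable, ARM"
else
    echo "binary not found: $BIN" >&2
fi
```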

The example also needs the MobileNet model. You can find the download link in the TensorFlow Lite README.md file. At the time of writing, the example used mobilenet_quant_v1_224.tflite. Download, unzip, and copy the model (together with the bundled labels.txt) to your Raspberry Pi. If everything went well, you should have the following files on your Raspberry Pi:

grace_hopper.bmp
label_image
labels.txt
mobilenet_quant_v1_224.tflite
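A quick check on the Pi that all four files from the list above are in place can save a round trip back to the build machine:

```shell
# On the Pi: report any of the four required files that are missing.
for f in grace_hopper.bmp label_image labels.txt mobilenet_quant_v1_224.tflite; do
    [ -e "$f" ] || echo "missing: $f"
done
```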

Now we can finally run the TensorFlow Lite example by invoking:

./label_image

On my Raspberry Pi Zero W, it takes about 5 seconds to run, with the following output (the error about opening libneuralnetworks.so can be safely ignored; NNAPI only exists on Android):

nnapi error: unable to open library libneuralnetworks.so
Loaded model ./mobilenet_quant_v1_224.tflite
resolved reporter
invoked
average time: 5600.14 ms
0.666667: 458 bow tie
0.290196: 653 military uniform
0.0117647: 835 suit
0.00784314: 611 jersey
0.00392157: 922 book jacket

If you’ve come this far — congrats, you can now run TensorFlow Lite on your RPi :)
