How do we deploy YOLOv8 on Raspberry Pi 5

Elven Kim
4 min read · Feb 9, 2024


After trying out many AI models, it is time for us to run YOLOv8 on the Raspberry Pi 5. Let me walk you through the process.

Here are the 5 easy steps to run YOLOv8 on Raspberry Pi 5; just use the reference GitHub below. A summary of the commands is given at the end. (The commands are from the author referenced below.)

1. Install Bookworm image

For the Raspberry Pi 5, download the latest Raspberry Pi Imager and use the default, recommended 64-bit Debian 12 ‘Bookworm’ image. This operating system ships with Linux kernel 6.1, the current LTS (Long Term Support) release. This kernel is more stable and adds new features that enhance the system’s security and performance.
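After flashing, you can confirm what you are running with `uname -r` in a terminal, or from Python (the values in the comments are only illustrative; yours will differ by kernel build):

```python
import platform

# Check the running kernel release and machine architecture.
# On a Pi 5 with 64-bit Bookworm you should see a 6.x kernel and "aarch64".
kernel = platform.release()
arch = platform.machine()
print(kernel, arch)
```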

2. Install CMake for Linux

CMake is a cross-platform build system generator. Projects specify their build process with platform-independent CMake listfiles, named CMakeLists.txt, included in each directory of a source tree. Users build a project by using CMake to generate a build system for a native tool on their platform.

3. Install Tensorflow and ONNX

Installing TensorFlow and ONNX is required to convert the YOLOv8 model to TFLite.

We are familiar with TensorFlow, but what is ONNX?

ONNX, short for Open Neural Network Exchange, is a freely available format specifically designed for deep learning models. Its primary purpose is to facilitate seamless exchange and sharing of models across different deep learning frameworks, including TensorFlow and Caffe2, when used alongside PyTorch.

The commands are here:

conda create -n yolov8_cpu python=3.9
conda activate yolov8_cpu
pip install ultralytics==8.0.221
pip install tensorflow==2.13.1
pip install onnx==1.15.0 onnxruntime==1.16.3 onnxsim==0.4.33
pip install -U --force-reinstall flatbuffers==23.5.26

In addition, flatbuffers is upgraded for the TFLite export. FlatBuffers is a cross-platform serialization library architected for maximum memory efficiency. It allows you to access serialized data directly without parsing/unpacking it first, while still having great forwards/backwards compatibility.
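Real FlatBuffers access requires schema-generated code, but the zero-copy idea (reading one field straight out of a serialized buffer at a known offset, without unpacking everything first) can be sketched with Python's stdlib struct module:

```python
import struct

# Pack three fields into one binary buffer: id (uint32), score (float32), count (uint16).
# With "<" (little-endian, no padding) the field offsets are fixed: 0, 4, 8.
buf = struct.pack("<IfH", 42, 0.875, 7)

# Zero-copy-style access: read only the 'count' field at its known offset,
# without deserializing the rest of the buffer first.
(count,) = struct.unpack_from("<H", buf, offset=8)
print(count)  # -> 7
```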

4. Export yolov8n to tflite and onnx format

Model weights refer to the parameters learned by the neural network during the training process. These parameters include the weights and biases of the various layers in the network. Weights are the numerical values that the neural network uses to make predictions based on the input data.
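As a toy sketch (nothing to do with YOLOv8's actual architecture), a single "layer" with learned weights and a bias turns input features into a prediction:

```python
# Toy single-neuron "layer": the learned parameters are the weights and the bias.
weights = [0.5, -0.25, 0.1]   # one weight per input feature
bias = 0.2

def predict(features):
    # Weighted sum of the inputs plus the bias: the core computation a layer performs.
    return sum(w * x for w, x in zip(weights, features)) + bias

print(predict([2.0, 4.0, 10.0]))  # -> 1.2  (0.5*2 - 0.25*4 + 0.1*10 + 0.2)
```

Training adjusts these numbers; exporting to TFLite/ONNX just re-serializes them in a different container.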

We need to convert the yolov8n weights to TFLite and ONNX format using the commands below.

python export_models.py
python export_models.py --format onnx

A quick comparison of the exported weights shows the YOLOv8 weights at about 3.2MB.

Comparing with YOLOv5: the YOLOv8 nano is only about 4MB, slightly more than the YOLOv5 nano.
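If you want to check the sizes of your own exported files, a small helper can print them in MB (the model paths below are placeholders; substitute your own):

```python
import os

def size_mb(path):
    """Return a file's size in megabytes, rounded to one decimal place."""
    return round(os.path.getsize(path) / (1024 * 1024), 1)

# Example usage with placeholder paths from this walkthrough:
for f in ["./models/yolov8n.onnx",
          "./models/yolov8n_saved_model/yolov8n_integer_quant.tflite"]:
    if os.path.exists(f):
        print(f, size_mb(f), "MB")
```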

5. To run the TFLite model, use main.py

python main.py --model=./models/yolov8n.onnx --debug
python main.py --model=./models/yolov8n_saved_model/yolov8n_integer_quant.tflite --debug

Results

Finally, we run the TFLite model with the main.py file:

python main.py --model=./models/yolov8n_saved_model/yolov8n_integer_quant.tflite --debug

And we get a good 6 FPS for the integer-quantised model. The result is impressive, considering we use the smallest weights.
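FPS here is simply frames processed per second of wall-clock time. A minimal sketch of how to measure it, with a dummy stand-in for the real model call (the repo's main.py does the actual inference), looks like:

```python
import time

def dummy_inference(frame):
    # Stand-in for the real model call (e.g. a TFLite interpreter invoke);
    # it just burns a little time per frame.
    time.sleep(0.01)

frames = 20
start = time.perf_counter()
for i in range(frames):
    dummy_inference(i)
elapsed = time.perf_counter() - start

fps = frames / elapsed
print(f"{fps:.1f} FPS")
```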

CONVERSION TO TFLITE

1) Key in the command below to export from the sample weights:

python export_models.py

2) Key in the command below to run your custom weights (--debug shows the video):

python main.py --model=./models/best_saved_model/best.tflite --debug

3) To just show the FPS without the video:

python main.py --print_fps

Reference

Video 1: Raspberry Pi 5 — Ep03 — Object Detection/Yolov8/CPU: https://youtu.be/ZebczOt90mU?si=EowvY-t1vfyaFoxr

Doc 1: https://github.com/JungLearnBot/RPi5_yolov8/blob/main/Readme.RPi5.cpu.md

Code 1: https://github.com/JungLearnBot/RPi5_yolov8

Summary of commands

mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
~/miniconda3/bin/conda init bash
rm -rf ~/miniconda3/miniconda.sh

#install both tensorflow and onnx to convert yolov8 model to tflite
sudo apt-get install cmake
cd RPi5_yolov8
conda create -n yolov8_cpu python=3.9
conda activate yolov8_cpu
pip install ultralytics==8.0.221
pip install tensorflow==2.13.1
pip install onnx==1.15.0 onnxruntime==1.16.3 onnxsim==0.4.33
pip install -U --force-reinstall flatbuffers==23.5.26

python export_models.py
python export_models.py --format onnx

cd RPi5_yolov8
conda activate yolov8_cpu
python main.py --model=./models/yolov8n.onnx --debug

cd RPi5_yolov8
conda activate yolov8_cpu
python main.py --model=./models/yolov8n_saved_model/yolov8n_integer_quant.tflite --debug


Elven Kim

I am a researcher in the field of Robotics, Computer Vision and Artificial Intelligence.