How to export a YOLOv7-tiny model via ONNX to TensorRT on a Jetson Nano 4GB

Sourabh Jigjinni
5 min read · May 7, 2023


This article, as of May 2023, is a basic guide to deploying a YOLOv7-tiny model on a Jetson Nano 4GB.

I’ve used a desktop PC for training my custom YOLOv7-tiny model. Here I’ll demonstrate the steps to deploy an existing (not custom) tiny model. For training a custom model there are plenty of resources online; one is mentioned below.

The issue here is that the Jetson Nano runs JetPack 4.6.3, so we cannot use the latest libraries on the Nano.

On my desktop PC

1. Set up yolov7

We’re using the yolov7 repository here.

https://github.com/WongKinYiu/yolov7

mkdir ~/Desktop/myyolo

cd ~/Desktop/myyolo

git clone https://github.com/WongKinYiu/yolov7.git

Now we’ll use nvidia-docker as a quick way to get YOLO running

nvidia-docker run --name yolov7 -it -v ~/Desktop/myyolo/yolov7:/yolov7 --shm-size=64g nvcr.io/nvidia/pytorch:21.08-py3

# -v ~/Desktop/myyolo/yolov7:/yolov7 means the container sees that folder mounted at /yolov7

# to access this container again we can start it by name
# docker start -i -a yolov7

# the container starts in /workspace
# our yolo repo is mounted at /yolov7, so change into it
cd ../yolov7

pip install -r requirements.txt

2. Let’s download a pre-trained yolov7-tiny model

# download yolov7-tiny weights
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-tiny.pt

Let’s run a quick test on our desktop PC

# detect on a test image
python detect.py --weights yolov7-tiny.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg

In my case this gave an error; to solve it I had to uninstall opencv-python and install an older version of opencv-python-headless, as mentioned in this issue.

# check what opencv version we have
pip list | grep opencv
# result :
# opencv-python 4.7.0.72

# uninstall non-headless version
pip uninstall opencv-python

# as per this issue we need an older version of opencv-python-headless
# https://github.com/WongKinYiu/yolov7/issues/959
pip install "opencv-python-headless<4.3"

# check it
pip list | grep opencv
# opencv-python-headless 4.2.0.34

Let’s run a quick test now

python detect.py --weights yolov7-tiny.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg

# The image with the result is saved in: runs/detect/exp/horses.jpg
Works as expected

At this stage, if you want to train a custom model, follow the steps here
https://learnopencv.com/fine-tuning-yolov7-on-custom-dataset/
Skip the installation steps, and make sure your images are accessible from the ~/Desktop/myyolo/yolov7 folder.
Use docker start -i -a yolov7 to re-enter the YOLO environment.

3. Export our .pt weights to the ONNX format

According to this issue, we cannot export with NMS (end2end) at the moment.

https://github.com/Linaom1214/TensorRT-For-YOLO-Series/issues/70

# export without end2end 
python export.py --weights yolov7-tiny.pt --grid --simplify
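Since NMS can’t be baked into the exported model, it has to run on the host after inference (the inference repo we use later on the Nano handles this for you). For reference, greedy NMS is only a few lines of numpy; this is a generic sketch, not that repo’s exact code:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def nms(boxes, scores, iou_thres=0.45):
    """Return indices of boxes kept after greedy non-maximum suppression."""
    order = scores.argsort()[::-1]   # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # drop every remaining box that overlaps the kept one too much
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thres]
    return keep
```

The per-class trick used in practice is to offset each box by class_id * large_constant before calling this, so boxes of different classes never suppress each other.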

We may get these errors, despite the ONNX file being generated correctly:

Simplifier failure: No module named 'onnxsim'
Import onnx_graphsurgeon failure: No module named 'onnx_graphsurgeon'
CoreML export failure: No module named 'coremltools'

I installed the missing libs; onnx_graphsurgeon did not install.

pip install onnxsim coremltools

But this changed the installed numpy version; we can always restore the yolov7 environment with pip install -r requirements.txt.

I ran the export script again

# export without end2end 
python export.py --weights yolov7-tiny.pt --grid --simplify

Either way, we now have a usable ONNX file, even if you did not install the missing libs and rerun the export.

The script will output the saved path for the onnx file.
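For reference, with --grid the exported model has a single output of shape (batch, n, 85): box center x/y, width, height, objectness, then 80 class scores. Decoding that into scored xyxy boxes looks roughly like this (a sketch with illustrative names; the inference repo used later has its own implementation):

```python
import numpy as np

def decode(pred, conf_thres=0.25):
    """pred: (n, 85) array of (cx, cy, w, h, obj, 80 class scores).
    Returns (boxes_xyxy, scores, class_ids) above the threshold."""
    obj = pred[:, 4]
    cls_scores = pred[:, 5:]
    class_ids = cls_scores.argmax(axis=1)
    # final confidence = objectness * best class score
    scores = obj * cls_scores[np.arange(len(pred)), class_ids]
    keep = scores > conf_thres
    cx, cy, w, h = pred[keep, 0], pred[keep, 1], pred[keep, 2], pred[keep, 3]
    # convert center/size to corner coordinates
    boxes = np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
    return boxes, scores[keep], class_ids[keep]
```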

Now on to the Jetson Nano.

On my Jetson Nano (4GB)

As per this page, the JetPack version for the Jetson Nano is 4.6.3
https://developer.nvidia.com/embedded/jetpack-archive

Download it from here
https://developer.nvidia.com/jetpack-sdk-463

Follow the steps here to flash the OS to an SD card
https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit#write

Copy the ONNX file from the PC to the Nano. I put it on my desktop.

1. Convert the ONNX model to a TensorRT engine

We will convert our ONNX model to a TensorRT engine using a program called trtexec.

It is already installed on the Jetson Nano.
Find its location:

sudo find / -name trtexec
find: ‘/run/user/1000/gvfs’: Permission denied
find: ‘/run/user/120/gvfs’: Permission denied
/usr/src/tensorrt/bin/trtexec
/usr/src/tensorrt/samples/trtexec

It’s located at /usr/src/tensorrt/bin/trtexec

Convert the model

# my onnx file is on my desktop
cd ~/Desktop
# --workspace is in MB; 3000 leaves some headroom on the 4GB Nano
# --fp16 builds a half-precision engine, which is much faster on the Nano
/usr/src/tensorrt/bin/trtexec --onnx=yolov7-tiny.onnx --saveEngine=yolov7-tiny.trt --workspace=3000 --explicitBatch --fp16

2. Set up for inference

Clone this repo. I’m cloning it to the desktop (on the Nano).

https://github.com/Linaom1214/TensorRT-For-YOLO-Series

cd ~/Desktop
git clone https://github.com/Linaom1214/TensorRT-For-YOLO-Series.git

cd TensorRT-For-YOLO-Series/

To install the dependencies, we first need to sort out some paths.

Run this to update our .bashrc file

echo 'export CPATH=$CPATH:/usr/local/cuda-10.2/targets/aarch64-linux/include' >> ~/.bashrc 
echo 'export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/cuda-10.2/targets/aarch64-linux/lib' >> ~/.bashrc

source ~/.bashrc

We need pycuda to run inference

# pycuda failed the first time around, because of a missing header file
# don't forget this step
sudo ln -s /usr/include/locale.h /usr/include/xlocale.h

# this takes time
# actually everything on the nano takes time :)
pip3 install pycuda

# if you don't have pip, install it first
# sudo apt-get update && sudo apt-get install python-pip python3-pip

I copied the .trt file to the TensorRT-For-YOLO-Series/ directory.

3. Finally, inference

python3 trt.py -e yolov7-tiny.trt -i src/1.jpg -o yolov8n-1.jpg
python3 trt.py -e yolov7-tiny.trt -i src/2.jpg -o yolov8n-2.jpg
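trt.py takes care of pre- and post-processing, but the geometry is worth knowing: each image is letterboxed (scaled preserving aspect ratio, then padded) to the 640×640 network input, and detections are mapped back to the original image. A small sketch of that mapping (function names are mine, and the repo’s padding/rounding details may differ):

```python
def letterbox_params(src_w, src_h, dst=640):
    """Scale factor and (left, top) padding that fit a src_w x src_h image
    onto a dst x dst canvas while preserving aspect ratio."""
    scale = min(dst / src_w, dst / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst - new_w) // 2   # centered padding
    pad_y = (dst - new_h) // 2
    return scale, pad_x, pad_y

def unletterbox(x, y, scale, pad_x, pad_y):
    """Map a point from the 640x640 network frame back to the original image."""
    return (x - pad_x) / scale, (y - pad_y) / scale
```

For a 1280×720 source frame this gives scale 0.5 with 140 px of vertical padding, so a detection at network y=140 sits at the top edge of the original image.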

Thanks!
