Jetson Nano Custom Dataset Testing with TF and TensorRT Approaches

toborobot ru
1 min read · Jun 1, 2019


I checked two approaches to image detection and classification with a custom-trained insulator detection network (based on SSD_Mobilenet): plain TensorFlow and the TensorRT libraries, both on the NVIDIA Jetson Nano devboard.

The results are strange and need further investigation to understand why: TensorFlow is faster than TensorRT.

Testing with the TensorFlow frozen graph gives about 0.07 s per image (~15 FPS).

Testing with the TensorRT frozen graph gives about 0.09 s per image (~11 FPS).

Loading the TF program into memory takes about 15 seconds.

Loading the TRT program for execution takes about 200 seconds.
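The per-image timings above can be reproduced with a simple timing loop around the inference call. This is a minimal sketch, not the code from the repository: `fake_infer` is a hypothetical stand-in for the real `sess.run(...)` on a frozen graph, and the warm-up count is an assumption (the first runs usually include one-time initialization cost and should be excluded).

```python
import time

def benchmark(infer, images, warmup=2):
    """Return (average seconds per image, FPS) for a callable `infer`."""
    # Warm-up runs: exclude one-time graph/engine initialization cost.
    for img in images[:warmup]:
        infer(img)
    start = time.perf_counter()
    for img in images:
        infer(img)
    elapsed = time.perf_counter() - start
    per_image = elapsed / len(images)
    return per_image, 1.0 / per_image

# Hypothetical stand-in for the real inference call on a frozen graph.
def fake_infer(img):
    time.sleep(0.001)  # pretend inference takes ~1 ms

per_img, fps = benchmark(fake_infer, ["image"] * 20)
print(f"{per_img:.4f} s per image, {fps:.1f} FPS")
```

In the real script, `infer` would wrap `sess.run` with the detection output tensors, and the list would hold actual preprocessed images.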

One possible problem with the TRT approach is that the base TensorFlow graph used for TRT was trained on images of a different size (not 224x224 or 300x300), but this is not certain and needs to be investigated later.
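For reference, here is a hedged sketch of how a frozen TF graph was typically converted with TF-TRT in the TF 1.x era (around mid-2019). The parameter values and the lazy import are assumptions for illustration, not taken from the post; the conversion settings (precision mode, workspace size) are another place where results between the two approaches can diverge.

```python
# Assumed TF-TRT conversion parameters (illustrative, not from the post).
TRT_PARAMS = {
    "max_batch_size": 1,
    "max_workspace_size_bytes": 1 << 25,  # modest workspace for the Nano's limited memory
    "precision_mode": "FP16",             # the Nano's GPU has fast FP16 support
}

def convert_frozen_graph(frozen_graph_def, output_node_names):
    """Optimize a frozen TF graph with TF-TRT (requires TensorFlow 1.x with contrib)."""
    # Imported lazily: only available in a TF 1.x install with TensorRT support.
    from tensorflow.contrib import tensorrt as trt
    return trt.create_inference_graph(
        input_graph_def=frozen_graph_def,
        outputs=output_node_names,
        **TRT_PARAMS,
    )
```

The returned graph def can then be serialized and loaded like any frozen graph, which is how the TRT timing above would have been measured.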

Also, the TensorRT graph gives poor recognition results on pictures with many insulators, and on small insulators.

https://github.com/toborobot/JetsonNanoInsulatorDetection

