Part-1: 93% accurate DNN on $5 Raspberry-Pi-Zero-W
Running a state-of-the-art, high-accuracy Deep Neural Network (DNN) is always challenging on resource-constrained compute machines such as the Raspberry Pi (RPi) series.
In this 3-part series of posts, I will first show how a 93% accurate CIFAR-10-trained DNN can be run even on the $5 RPi-Zero-W. Here are the specs of the whole RPi family of compute machines.
The RPi-Zero-W, which costs $5, has the least compute resources of the family: an ARM11 core @ 1 GHz consuming 20 mA @ 5 V (idle), making it a perfect candidate for low-power IoT edge devices.
But running a 93% accurate CIFAR-10-trained DNN at a reasonable frames-per-second (FPS) rate is a challenging task on the RPi-Zero-W.
In this series of posts, I will show that using AiOTA Labs' emDNN technology, one can run state-of-the-art classifiers and detectors at a decent FPS without additional hardware such as the Intel Movidius NCS.
Part 1 will cover: Running the 93% accurate CIFAR-10 ResNet-18 classifier with emDNN technology on the Pi-Zero-W
Part 2 will cover: Running state-of-the-art detectors such as Faster R-CNN, SSD, and RetinaNet on the Pi-Zero-W with emDNN technology
Part 3 will cover: Running state-of-the-art classifiers and detectors on the Pi B+ with emDNN technology
Part-1: Running the 93% accurate CIFAR-10 ResNet-18 classifier with emDNN technology on Pi-Zero-W
Before I proceed, let me wholeheartedly acknowledge Adrian Rosebrock of PyImageSearch for his exceptionally brilliant teaching on configuring the RPi and kick-starting your first DNN on the RPi B+.
So what is AiOTA Labs' emDNN technology?
emDNN is not a new deep neural network but a compression technology that can compress any deep neural network.
We have shown in our previous blog posts here and here how we can compress deep neural networks by anywhere from 2.5X to 40X without losing any accuracy relative to the baseline. emDNN was also able to compress MobileNet-V2-1.4 by 2.5X, as reported here.
It is worth highlighting that emDNN compression is orthogonal to quantization and pruning; a detailed report on the major differences between the various available compression technologies and emDNN can be found here. Other reports suggest that compression techniques like deep compression, or framework-level approaches such as depthwise separable convolution, theoretically give plenty of gain but are not hardware friendly (their MAC efficiency is very poor and/or they are memory intensive). Still other compression technologies compress the network at the expense of accuracy. Fortunately, emDNN suffers from none of these drawbacks.
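To put such compression factors in concrete terms, here is a small back-of-the-envelope helper (my own illustration, not part of emDNN itself) that converts parameter counts into model sizes and a compression factor, assuming float32 weights:

```python
def compression_stats(baseline_params, compressed_params, bytes_per_param=4):
    """Report the compression factor and model sizes in MB (assumes float32 weights)."""
    factor = baseline_params / compressed_params
    to_mb = lambda params: params * bytes_per_param / (1024 ** 2)
    return {
        "factor": round(factor, 2),
        "baseline_mb": round(to_mb(baseline_params), 2),
        "compressed_mb": round(to_mb(compressed_params), 2),
    }

# ResNet-18 has roughly 11.2M parameters; a 2.5X compression leaves ~4.5M.
print(compression_stats(11_200_000, 4_480_000))
```

A 40X compression of the same network would shrink it from ~43 MB down to about 1 MB, which is why such factors matter so much on a device with the Zero-W's limited memory.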
To show that our compression technology runs on resource-constrained hardware without losing accuracy, we decided to run emDNN-compressed DNNs on RPis. Table 1 shows the results of running the original ResNet-18 versus the emDNN ResNet-18 on CIFAR-10 on the RPi-Zero-W. It is worth highlighting that these results were obtained without any hardware acceleration on the RPi-Zero-W, such as SIMD or the GPU, using only the straightforward scalar ARM11 core.
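For readers who want to reproduce FPS numbers like those in Table 1, a minimal timing harness is sketched below. The `dummy_predict` stand-in is a placeholder for the real model call (e.g. a Keras `model.predict`), which I am not assuming here:

```python
import time
import numpy as np

def benchmark_fps(predict, input_shape=(1, 32, 32, 3), warmup=3, runs=20):
    """Time repeated single-image inference and return the average FPS."""
    x = np.random.rand(*input_shape).astype(np.float32)
    for _ in range(warmup):            # warm up caches before timing
        predict(x)
    start = time.perf_counter()
    for _ in range(runs):
        predict(x)
    return runs / (time.perf_counter() - start)

# Stand-in "model": one dense layer over the flattened 32x32x3 CIFAR-10 image.
w = np.random.rand(32 * 32 * 3, 10).astype(np.float32)
dummy_predict = lambda x: x.reshape(1, -1) @ w
print(f"{benchmark_fps(dummy_predict):.1f} FPS")
```

The warm-up runs matter on the Pi: the first inference pays one-off costs (weight loading, cache misses) that would otherwise skew the average.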
Now it is time to see how the classifier works on a real-time camera feed. We configured the Pi camera with the RPi-Zero-W, but unfortunately the RPi-Zero was only able to capture the camera feed at 7 FPS at 320x240 resolution.
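A capture-and-classify loop of that kind can be written frame-source-agnostically; here is a minimal sketch with stand-ins for the real picamera feed and the compressed classifier (both are my placeholders, not the actual setup):

```python
import time
import numpy as np

def run_camera_loop(get_frame, classify, max_frames=30):
    """Pull frames from a source, classify each, and return the achieved FPS."""
    start = time.perf_counter()
    for _ in range(max_frames):
        frame = get_frame()       # on the Pi: a picamera capture at 320x240
        classify(frame)           # the compressed classifier would run here
    return max_frames / (time.perf_counter() - start)

# Stand-ins: random 240x320 RGB frames and a trivial "classifier".
fake_frame = lambda: np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
fake_classify = lambda f: int(f.mean()) % 10
print(f"{run_camera_loop(fake_frame, fake_classify):.1f} FPS")
```

Since capture and inference run in the same loop here, the end-to-end FPS is bounded by whichever of the two is slower, which is why the 7 FPS capture ceiling matters.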
Anyway, here is the video of the Pi-Zero-W running emDNN-ResNet18 at 7-8 FPS. I am not showing a video for the original ResNet-18, but the FPS achieved with that baseline DNN was 1.5 FPS.
In Part-2, I will show how various detectors such as Faster R-CNN, SSD, and RetinaNet perform with and without our emDNN technology on the RPi-Zero-W. Stay tuned for further updates. Till then, have a safe and enjoyable day!
If you are interested in running emDNN technology on platforms such as TensorFlow, Caffe, Keras, or PyTorch, or on the RPi, please go to our GitHub page here, where you can download emDNN-compressed DNNs.
You may also want to get in touch with us at email@example.com, and do pay a visit to our website www.aiotalabs.com.