Regaining the #1 Position in Deep Neural Network

AiOTA LABS
3 min read · Jun 7, 2018


Image credit: moneymanagement.com

Google recently announced their breakthrough deep neural network (DNN), MobileNet-V2. It outperforms all available DNNs in terms of processing speed, memory requirement and power consumption while achieving near state-of-the-art accuracy (only 1.5% degradation from the most accurate one, ResNeXt-101 64x4d). A snapshot of MobileNet-V2's performance against the present state-of-the-art DNNs is shown below.

MobileNet-V2 comparison with other state-of-the-art DNNs (taken from the MobileNet-V2 paper)

Indeed, it is a commendable, non-trivial achievement by Google's researchers, and one that has also displaced AiOTA Labs' emDNN technology from its #1 position (see the series of blog posts on Medium by AiOTA Labs).

There is no harm in being pushed back to the #2 position by a more deserving candidate. But as they say, take one step back for a better leap.

AiOTA Labs went into hibernation mode with their math books for one month to regain the lost #1 position. The table below shows the outcome of that month of intense research. The comparison is shown only against MobileNet-V2, since it has already outclassed all of its nearest rivals.

Comparison table of AiOTA Labs' emDNN technology with MobileNet-V2-X. Inference time per image (and FPS) is measured on a single large core of a Pixel 1.

AiOTA Labs' emDNN technology comfortably outperforms MobileNet-V2 on every parameter, gaining accuracy while delivering real-time processing (more than 30 FPS), and this in turn brings us back to the #1 position!

Image credit: Channel Times

MobileNet has a useful feature: a scalable network, via a width multiplier, that lets you strike a balance between accuracy and performance. emDNN has the same scalability. Let's see how emDNN performs when we take MobileNet-V2-0.75, whose accuracy is 70%, and compare it against emDNN scaled down to the same 70% accuracy.

So our scalability feature meets expectations, beating MobileNet-V2's scalable feature as well!
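
For readers who want to see what a width multiplier does in practice, here is a minimal sketch using torchvision's stock MobileNet-V2 implementation and its width_mult argument. It only illustrates the general scaling idea that the comparison above relies on; it is not emDNN's own implementation, and the helper function is ours for illustration.

```python
# Minimal sketch of MobileNet-style width scaling, using torchvision's
# MobileNetV2 as a stand-in. The width multiplier shrinks the number of
# channels in every layer, trading a little accuracy for a smaller,
# faster network.
import torch
from torchvision.models import mobilenet_v2

def count_params(model):
    """Total number of trainable parameters (illustrative helper)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

full = mobilenet_v2(width_mult=1.0).eval()   # MobileNet-V2-1.0
slim = mobilenet_v2(width_mult=0.75).eval()  # MobileNet-V2-0.75

print(f"MobileNet-V2-1.0  params: {count_params(full) / 1e6:.2f} M")
print(f"MobileNet-V2-0.75 params: {count_params(slim) / 1e6:.2f} M")

# Both variants accept the same 224x224 input; only the channel widths differ.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    print(full(x).shape, slim(x).shape)  # torch.Size([1, 1000]) for each
```

Running this shows the parameter count dropping as the multiplier goes from 1.0 to 0.75, which is exactly the accuracy-versus-size knob the scalability comparison above exercises.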

One important point worth highlighting: all these results are on a single core of the Pixel 1, without quantization or pruning. If deep compression were applied, every figure would improve further by a multiplicative factor of roughly 10x (though we have not performed deep compression here, as we want an apples-to-apples comparison with MobileNet-V2).
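
For context, the sketch below shows the kind of post-training compression referred to above, magnitude pruning plus int8 quantization, using PyTorch's built-in utilities on a stock MobileNet-V2. It is an illustration of the general technique under those assumptions, not our deep-compression pipeline, and the 30% pruning ratio is an arbitrary example value.

```python
# Rough sketch of post-training compression: magnitude pruning of conv
# weights, then dynamic int8 quantization of the classifier. Not the
# deep-compression pipeline mentioned above; just the generic recipe.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import mobilenet_v2

model = mobilenet_v2(width_mult=1.0).eval()

# 1) Magnitude pruning: zero out the 30% smallest-magnitude weights
#    in every convolution (ratio chosen arbitrarily for illustration).
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# 2) Dynamic quantization: store the fully connected classifier in int8.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Check the sparsity introduced in the (un-quantized) pruned convolutions.
zeros = sum((m.weight == 0).sum().item()
            for m in model.modules() if isinstance(m, nn.Conv2d))
total = sum(m.weight.numel()
            for m in model.modules() if isinstance(m, nn.Conv2d))
print(f"conv weights pruned to zero: {zeros / total:.1%}")
```

A real deep-compression flow would also fine-tune after pruning and code the remaining weights more aggressively, which is where the additional gains quoted above would come from.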

But we are surrounded by brilliant researchers everywhere; who knows, we might be pushed back to the #2 position again. Until then, we can enjoy our single malt, but without letting our guard down. Who knows what tomorrow morning will bring.

If you are interested in AiOTA Labs' technology and want to run the emDNN magic yourself, please write to info@aiotalabs.com. We have integrated emDNN as libraries for PyTorch, Caffe and TensorFlow.

You can also visit our website at www.aiotalabs.com, or for any other enquiries write to info@aiotalabs.com.
