Face detection, eye tracking, gesture recognition, voice and text recognition — compute-heavy, AI-powered features like these can now run on almost any mobile device, thanks in large part to advances in multi-core processors and increasingly powerful GPUs, DSPs and NPUs.
Although many SOTA deep learning models were not initially optimized for mobile inference, that has changed radically over the past few years. In 2015, TensorFlow Mobile came out as the first official library to run standard AI models on mobile devices without requiring special modification or conversion. In 2017, TensorFlow Lite (TFLite) followed, offering a significantly reduced binary size and kernels optimized for on-device inference.
Rapid developments in deep learning meanwhile brought a number of new approaches and models that laid a foundation for improving performance on tasks such as image classification, photo processing, and natural language understanding.
To benchmark deep learning on smartphones in the Android ecosystem, researchers from the ETH Zurich (Swiss Federal Institute of Technology in Zurich) Computer Vision Lab last year developed an AI Benchmark application to measure the AI readiness of more than 200 Android devices and 100 mobile SoCs collected in the wild.
The ETH Zurich researchers recently published their benchmark results for 2019. They partnered with researchers from Google Research, Samsung, Huawei, Qualcomm, MediaTek, and Unisoc to evaluate the performance of all chipsets currently providing hardware acceleration for AI inference.
The researchers’ AI Benchmark application comprises 21 deep learning tests measuring more than 50 aspects of AI performance (speed, accuracy, initialization time, stability, etc.) with the most common deep learning architectures on smartphones.
The scores generally reflect AI computation speed, outcome accuracy, and maximum processable image or data size, explained Andrey Ignatov, the application's developer and first author on the paper, in an email to Synced.
Huawei took six of the top 10 spots for AI-ready smartphones, with the Mate 30 Pro 5G and Mate 30 Pro nearly doubling the scores of the other top 10 finishers. “Right now, Huawei devices with the Kirin 990 5G SoC can run floating-point neural networks up to four times faster than phones with other chipsets, thus they are getting a significantly higher total AI score,” Ignatov said.
What do the scores actually mean for users in everyday use?
Ignatov says higher scores indicate two main advantages. “The phone can run standard deep learning models faster, which means that the user will wait less to see the results of the corresponding AI programs.” This applies to faster text translation, face recognition, HDR image processing, augmented reality rendering, etc.
Second, and more importantly, Ignatov says high scores also make it possible to deploy “more complex and powerful neural networks on the smartphone, which leads to a better user experience.” This means more accurate voice recognition, better photo and video quality, smarter virtual assistants, more secure face unlock systems, and so on.
The researchers plan to continue publishing regular benchmark reports on the actual state of AI acceleration on mobile devices, reflecting changes in the machine learning field and the corresponding adjustments made to the benchmark. The latest results obtained with the AI Benchmark and descriptions of the tests are updated monthly on the project website.
The paper AI Benchmark: All About Deep Learning on Smartphones in 2019 is on arXiv.
Journalist: Yuan Yuan | Editor: Michael Sarazen