AutoML Mobile: Automated ML Model Design for Every Mobile Device
Designing accurate and efficient CNNs for mobile devices is challenging because the design space is enormous and evaluating candidate architectures is computationally expensive. Although many mobile CNNs are available for developers to train and deploy to mobile devices, existing CNN architectures may not achieve the best results for some tasks on mobile devices.
Last year, Google introduced an automated mobile neural architecture search (MNAS) approach and proposed MnasNet, which uses reinforcement learning to automatically design mobile models. Facebook then proposed FBNet, a differentiable neural architecture search (DNAS) framework that optimizes CNN architectures with a gradient-based method. Both FBNet and MnasNet introduced automated solutions that are changing the way deep learning models are designed for mobile, and AutoML is becoming increasingly important in the development of deep learning models.
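To give a sense of how the differentiable approach works, the sketch below is a minimal, simplified illustration rather than Facebook's released code: each searchable layer mixes several candidate operations with Gumbel-softmax weights, and a per-operation latency lookup table (measured once on the target phone) makes the layer's expected latency differentiable so it can be added to the training loss. The candidate operations, latency numbers, and loss weight are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentiableBlock(nn.Module):
    """One searchable layer: a soft mixture over candidate operations.

    Gumbel-softmax weights make the discrete choice of operation trainable
    with gradients, and a per-operation latency lookup table lets the
    block's expected latency enter the loss as a differentiable term.
    """
    def __init__(self, candidate_ops, latency_table_ms, tau=1.0):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        self.register_buffer("latency", torch.tensor(latency_table_ms))
        self.alpha = nn.Parameter(torch.zeros(len(candidate_ops)))  # architecture parameters
        self.tau = tau

    def forward(self, x):
        w = F.gumbel_softmax(self.alpha, tau=self.tau)          # soft operation selection
        out = sum(wi * op(x) for wi, op in zip(w, self.ops))    # weighted sum of candidate outputs
        expected_latency = (w * self.latency).sum()             # differentiable latency estimate
        return out, expected_latency

# Example with three hypothetical candidate operations and measured latencies (ms):
block = DifferentiableBlock(
    candidate_ops=[nn.Conv2d(16, 16, 3, padding=1),
                   nn.Conv2d(16, 16, 5, padding=2),
                   nn.Identity()],
    latency_table_ms=[1.8, 3.1, 0.1],
)
y, lat = block(torch.randn(1, 16, 32, 32))
loss = y.mean() + 0.1 * lat   # in practice: task loss (cross-entropy) plus a latency penalty
loss.backward()
```

After training, the operation with the largest architecture weight in each layer is kept, yielding a discrete network tuned to the device whose latencies populated the lookup table.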
Synced invited Mi Zhang, an Assistant Professor at Michigan State University who focuses on the intersection of computer systems and machine intelligence and directs the Systems for Machine Intelligence (SysML) Lab, to share his thoughts on Mobile AutoML.
How would you describe Facebook’s FBNet and Google’s MnasNet?
Facebook FBNet and Google MnasNet are state-of-the-art neural architecture search approaches for automatically identifying hardware-aware efficient CNN models that achieve an optimal balance between inference accuracy and latency on mobile devices.
Why does this research matter?
The recent past has witnessed deep learning (DL) becoming an indispensable component of artificial intelligence (AI), owing to its ability to achieve near-human accuracy on a variety of important AI tasks such as face recognition, image classification, and object detection. DL models, however, are computationally expensive. Designing DL models for resource-constrained platforms such as mobile devices is never a trivial task, because mobile DL models need to achieve the expected high accuracy while remaining resource efficient.
The key to designing state-of-the-art mobile DL models is identifying the network architecture that achieves the optimal trade-off between accuracy and resource efficiency. However, given the enormous number of possible architectures in the design space, manually exploring such a large design space to balance this trade-off based on human heuristics is extremely challenging, time-consuming, and often leads to sub-optimal architectures. Moreover, since different types of mobile devices have different hardware characteristics, the same network architecture can exhibit different resource efficiency on different mobile devices. As a consequence, each mobile device may require a different network architecture to achieve the optimal accuracy-resource efficiency trade-off. Unfortunately, given its staggering cost, manually identifying the optimal network architecture for every single mobile device is practically infeasible.
To address the drawbacks of the manual approach, Google and Facebook recently published their automated machine learning (AutoML) frameworks MnasNet and FBNet, which are able to automatically identify a network architecture that achieves the optimal balance between accuracy and resource efficiency for mobile devices. Both MnasNet and FBNet are designed based on state-of-the-art neural architecture search (NAS) technology. Unlike previous NAS strategies, which focus solely on finding the network architecture that achieves the highest accuracy, MnasNet and FBNet explicitly incorporate resource efficiency into the NAS objective so that the automated architecture search can identify the network architecture that optimally balances accuracy and resource efficiency. More importantly, both MnasNet and FBNet take the specific hardware characteristics of mobile devices into consideration and are thus able to customize the optimal network architecture for each mobile device. As a result, the automatically generated mobile DL models consistently outperform state-of-the-art manually designed models in terms of accuracy and resource efficiency across various AI tasks.
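As a concrete illustration of what incorporating resource efficiency into the search objective looks like, the snippet below sketches the kind of hardware-aware reward described in the MnasNet paper, where a model's measured on-device latency softly scales its accuracy relative to a target budget. The specific target latency, exponent, and example accuracy/latency numbers used here are illustrative assumptions, not values reported in either paper.

```python
def hardware_aware_reward(accuracy, latency_ms, target_ms=75.0, w=-0.07):
    """Multi-objective reward of the form ACC(m) * (LAT(m) / T)^w.

    Models faster than the target latency T receive a mild bonus and slower
    ones a mild penalty, so the search trades a little accuracy for real
    speedups on the target device instead of optimizing accuracy alone.
    """
    return accuracy * (latency_ms / target_ms) ** w

# Two hypothetical candidates measured on the same phone:
print(hardware_aware_reward(0.752, 78.0))  # more accurate but over budget  -> ~0.750
print(hardware_aware_reward(0.744, 55.0))  # slightly less accurate, faster -> ~0.760
```

Under such an objective, the slightly less accurate but faster candidate wins, which is exactly the behavior needed when the model must run within a latency budget on a specific phone; MnasNet pursues this trade-off with reinforcement learning, while FBNet folds an analogous latency term into a differentiable loss.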
What impact might these neural architecture search approaches bring to the research community?
MnasNet and FBNet introduce an automated approach to designing accurate and efficient DL models for mobile devices. Conventionally, designing such high-quality mobile DL models requires developers to have decent machine learning expertise to manually tune model parameters and refine network architectures. The state-of-the-art AutoML approaches introduced in MnasNet and FBNet allow developers with limited machine learning expertise to develop high-quality models specific to their needs in an automated manner.
MnasNet and FBNet also introduce a scalable approach to customizing mobile DL models for different mobile devices. Considering the cost of the manual approach, developers can only afford to design one model for all types of mobile devices. The platform-aware approaches introduced in MnasNet and FBNet provide an efficient way for developers to create customized mobile DL models that achieve optimal runtime performance on their target mobile devices.
Can you predict any potential future developments related to the research?
AutoML is revolutionizing how DL models are designed. Given its increasing importance in the workflow of DL model development, AutoML is becoming the new battlefield of giant AI companies such as Google and Facebook. Since many of their AI-powered services are delivered through resource-constrained mobile devices, the key to winning this battle is a mobile AutoML framework that is able to automatically design optimized mobile DL models customized for every single mobile device.
The papers FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search and MnasNet: Platform-Aware Neural Architecture Search for Mobile are on arXiv.
About Prof. Mi Zhang
Mi Zhang is an Assistant Professor of Electrical and Computer Engineering and Computer Science and Engineering at Michigan State University, where he directs the Systems for Machine Intelligence (SysML) Lab. Prof. Zhang received his PhD from the University of Southern California and BS from Peking University. His research lies at the intersection of computer systems and machine intelligence, spanning areas including mobile/edge computing, deep learning systems, distributed systems, Internet of Things, and mobile health. His work has been reported on by leading national and international media such as MIT Technology Review, WIRED, TechCrunch, New Scientist, TIME, CNN, ABC, NPR, The Washington Post, Smithsonian Magazine, and The Wall Street Journal.
Synced Insight Partner Program
The Synced Insight Partner Program is an invitation-only program that brings together influential organizations, companies, academic experts and industry leaders to share professional experiences and insights through interviews, public speaking engagements, and more. Synced invites all industry experts, professionals, analysts, and others working in AI technologies and machine learning to participate.
Simply Apply for the Synced Insight Partner Program and let us know about yourself and your focus in AI. We will give you a response once your application is approved.