Google AI ‘EfficientNets’ Improve CNN Scaling

Synced | Published in SyncedReview | Jun 5, 2019 | 3 min read

In their never-ending quest for better accuracy, researchers often scale up their convolutional neural networks (CNNs) by arbitrarily increasing depth or width, or by using larger input image resolutions for training and evaluation. These methods, however, require manual tuning and can still fail to deliver optimal performance. Might there be a more efficient approach to scaling up CNNs to improve accuracy?

Researchers from Google AI say “yes” and have proposed a new model scaling method in their ICML 2019 paper EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. This method uses a simple but effective compound coefficient to scale up CNNs in a more principled manner. It uniformly scales each dimension with a fixed set of scaling coefficients.
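Concretely, the paper ties all three dimensions to a single compound coefficient φ: depth scales as α^φ, width as β^φ, and input resolution as γ^φ, where the constants α, β, γ are found by grid search under the constraint α · β² · γ² ≈ 2, so raising φ by one roughly doubles total FLOPS. A minimal Python sketch of this rule (the constants are the values the paper reports for the EfficientNet-B0 baseline; the base dimensions below are illustrative placeholders):

```python
import math

# Scaling constants reported in the EfficientNet paper, found by grid
# search on the baseline under the constraint alpha * beta^2 * gamma^2 ≈ 2.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi, base_depth, base_width, base_resolution):
    """Scale depth, width, and resolution together with one coefficient phi."""
    depth = math.ceil(base_depth * ALPHA ** phi)             # layers per stage
    width = math.ceil(base_width * BETA ** phi)              # channels per layer
    resolution = math.ceil(base_resolution * GAMMA ** phi)   # input image size
    return depth, width, resolution

# Each +1 step of phi costs roughly 2x the FLOPS of the previous model.
for phi in range(4):
    print(phi, compound_scale(phi, base_depth=16, base_width=32, base_resolution=224))
```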

To better understand the effect of network scaling, the researchers studied the impact of scaling each model dimension. Results show the best way to improve overall performance is to balance the network dimensions (width, depth, and image resolution) against the available resources.

This compound scaling method first performs a grid search to find the relationship between different scaling dimensions of the baseline network under a fixed resource constraint, which determines the scaling coefficient for each dimension. It then applies these coefficients to scale up the baseline network to the desired target model size or computational budget.
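A toy illustration of that first step, with a stand-in evaluate function (the real search trains each candidate network on ImageNet and measures validation accuracy):

```python
import itertools

def grid_search_coefficients(evaluate, step=0.05, budget=2.0, tolerance=0.08):
    """Search (alpha, beta, gamma) at phi = 1, keeping only combinations
    near the fixed resource budget alpha * beta**2 * gamma**2 ≈ budget."""
    candidates = [round(1.0 + i * step, 2) for i in range(1, 9)]  # values > 1
    best, best_acc = None, float("-inf")
    for alpha, beta, gamma in itertools.product(candidates, repeat=3):
        if abs(alpha * beta ** 2 * gamma ** 2 - budget) > tolerance:
            continue  # violates the resource constraint, skip it
        acc = evaluate(alpha, beta, gamma)
        if acc > best_acc:
            best, best_acc = (alpha, beta, gamma), acc
    return best

# Stand-in scorer: the real version trains the scaled baseline and returns
# its validation accuracy. Here we just reward depth a bit more than width.
toy_evaluate = lambda a, b, g: a + 0.5 * b + 0.25 * g
print(grid_search_coefficients(toy_evaluate))
```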

Compared with traditional scaling methods, the proposed compound scaling method consistently improves model accuracy, and it also improves efficiency when used to scale up existing models such as MobileNet and ResNet.

Based on these findings, the researchers developed a new baseline network via a neural architecture search using the AutoML MNAS framework, then scaled it up with the compound method to generate a family of models called “EfficientNets,” which achieve state-of-the-art accuracy with up to 10x better efficiency.

The researchers compared EfficientNets with other CNNs on ImageNet, where the EfficientNet models achieved both higher accuracy and better efficiency. Experiments also showed that EfficientNets transfer well to eight widely used transfer learning datasets.
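For readers who want to experiment with that transfer setting, here is a minimal fine-tuning sketch using the Keras port of the architecture (tf.keras.applications.EfficientNetB0, added in later TensorFlow releases); the 10-class head and dataset names are hypothetical:

```python
import tensorflow as tf

# EfficientNet-B0 pretrained on ImageNet, without its classification head.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone; train only the new head

# New head for a hypothetical 10-class target dataset.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # your tf.data datasets
```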

The paper EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks is on arXiv. The source code and TPU training scripts are open-sourced on GitHub.

Author: Yuqing Li | Editor: Michael Sarazen

The 2018 Fortune Global 500 Public Company AI Adaptivity Report is out!
Purchase a Kindle-formatted report on Amazon.
Apply for the Insight Partner Program to get a complimentary full PDF report.

Follow us on Twitter @Synced_Global for daily AI news!

We know you don’t want to miss any stories. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.
