Facebook AI RegNet Models Outperform EfficientNet Models, Run 5x Faster on GPUs

Synced | Published in SyncedReview | Apr 3, 2020

In the recently published paper Designing Network Design Spaces, researchers from Facebook AI introduce a novel low-dimensional design space, RegNet, which produces simple, fast and versatile networks. In experiments, RegNet models outperform SOTA EfficientNet models and can be up to five times faster on GPUs.

The researchers’ intentions were straightforward: “Aim for interpretability and to discover general design principles that describe networks that are simple, work well, and generalize across settings.” Rather than designing and developing individual networks, the team focused on designing the network design spaces themselves, each comprising a huge and possibly infinite population of model architectures.

Manual network design typically weighs choices such as convolution type, network and data size, depth, and residual connections. As the number of design choices grows, however, manually identifying well-optimized networks becomes neither easy nor efficient. Neural architecture search (NAS) is a popular alternative, but the models it finds can be limited by the search space settings, and NAS does not necessarily help researchers discover network design principles or produce networks that generalize across settings.

So how does one design a better network design space? The Facebook AI team describes its approach as “akin to manual network design, but elevated to the population level.”

The researchers start with an initial design space as input and gather a distribution of models by sampling and training them. Design space quality is analyzed using the error empirical distribution function (EDF), various properties of the design space are visualized, and an empirical bootstrap method predicts the likely range in which the best models fall. The researchers then use these insights to refine the design space.
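To make these two statistical tools concrete, below is a minimal sketch of an error EDF and an empirical bootstrap over sampled models. The function names, the resampling fraction, and the percentile interval are illustrative choices for this sketch, not code from the paper.

```python
import numpy as np

def error_edf(errors, thresholds):
    """Error EDF: fraction of sampled models with error below each threshold.

    A design space whose EDF rises faster contains a higher
    concentration of good models.
    """
    errors = np.asarray(errors)
    return np.array([(errors < t).mean() for t in thresholds])

def bootstrap_best_range(params, errors, n_boot=10_000, frac=0.25, seed=0):
    """Empirical bootstrap estimate of where the best models fall.

    Repeatedly resample a subset of (parameter, error) pairs and record
    the parameter value of the best model in each resample; the spread
    of these values suggests the likely range of the optimum.
    """
    rng = np.random.default_rng(seed)
    params, errors = np.asarray(params), np.asarray(errors)
    k = max(1, int(frac * len(errors)))
    best = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(errors), size=k)       # resample with replacement
        best.append(params[idx[np.argmin(errors[idx])]])  # parameter of best model
    return np.percentile(best, [2.5, 97.5])
```

Applied to a design-space statistic such as depth or width (with each model’s trained error), the bootstrap interval indicates where the next round of refinement should focus.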

The Facebook AI team conducted controlled comparisons with EfficientNet, using no training-time enhancements and the same training setup for both. Introduced in 2019, Google’s EfficientNet combines NAS with model scaling rules and represents the current SOTA. Under comparable training settings and FLOPs, RegNet models outperformed EfficientNet models while being up to 5× faster on GPUs.

Analyzing the RegNet design space also gave the researchers unexpected insights into network design. They noticed, for example, that the depth of the best models is stable across compute regimes, with an optimal depth of roughly 20 blocks (60 layers). And while modern mobile networks commonly employ inverted bottlenecks, the researchers found that inverted bottlenecks degrade performance: the best models use neither a bottleneck nor an inverted bottleneck.
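For context, the RegNet design space this analysis converges on generates per-block widths from a quantized linear rule described in the paper: widths first follow u_j = w0 + wa·j, then are snapped to powers of wm and rounded to multiples of 8. The sketch below implements that rule; the default parameter values are illustrative, not those of any specific published RegNet model.

```python
import numpy as np

def regnet_widths(w0=48, wa=36.0, wm=2.5, depth=20, q=8):
    """Quantized linear width rule from 'Designing Network Design Spaces'.

    Block widths follow a linear rule u_j = w0 + wa * j, then each u_j
    is quantized to w0 times a power of wm and rounded to a multiple
    of q (8 in the paper).
    """
    u = w0 + wa * np.arange(depth)               # linear widths per block
    s = np.round(np.log(u / w0) / np.log(wm))    # quantize in powers of wm
    w = w0 * np.power(wm, s)                     # quantized widths
    w = (np.round(w / q) * q).astype(int)        # round to multiples of q
    # Consecutive blocks sharing a width form one stage.
    widths, depths = np.unique(w, return_counts=True)
    return w.tolist(), list(zip(widths.tolist(), depths.tolist()))
```

Blocks that end up with the same quantized width are grouped into a stage, which is how this simple rule yields a conventional stage-wise network.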

The paper Designing Network Design Spaces is on arXiv.

Journalist: Fangyu Cai | Editor: Michael Sarazen

Thinking of contributing to Synced Review? Synced’s new column Share My Research welcomes scholars to share their own research breakthroughs with global AI enthusiasts.

We know you don’t want to miss any story. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.

