Published in SyncedReview

New Compression Method Enables Conditional GANs on Edge Devices

A research team from MIT, Adobe Research, and Shanghai Jiao Tong University has introduced a novel method for reducing the computational cost and size of conditional GAN generators.

Generative adversarial networks (GANs) excel at synthesizing photorealistic images. Conditional GANs, or cGANs, enable more controllable image synthesis and power many computer vision and graphics applications, such as transferring the motion in a dance video to a different person, or creating VR facial animations for remote social interaction.

The problem is that cGANs are notoriously computationally intensive, which prevents them from being deployed on edge devices such as mobile phones, tablets, or VR headsets with limited hardware resources, memory, or power.

GAN Compression, the general-purpose compression method the team presents in their paper, has proven effective across different supervision settings (paired and unpaired), model architectures, and learning methods (e.g. pix2pix, GauGAN, CycleGAN). Experiments demonstrated that, without losing image quality, the method reduces CycleGAN computation by more than 20x and GauGAN computation by about 9x.

Song Han, an MIT EECS assistant professor whose research focuses on efficient deep learning computing, led the team proposing the new compression framework for reducing inference time and model size of cGAN generators.

The researchers deployed their compressed pix2pix model on an NVIDIA Jetson Nano edge device. In a demonstration on the MIT HAN Lab YouTube channel, the team compares the compressed model with the original-sized pix2pix on an interactive edges2shoes application.

Quantitative evaluation of GAN Compression: the method can compress SOTA conditional GANs by 9 to 21 times in MACs and 5 to 33 times in model size, with only minor performance degradation.

The researchers identify two factors that make compressing conditional generative models for interactive applications difficult: the inherently unstable training dynamics of GANs, and the large architectural differences between recognition models and generative models.

To address these challenges, the researchers first applied knowledge distillation to transfer knowledge from the intermediate representations of the original teacher generator to the corresponding layers of its compressed student generator. They also found that creating pseudo pairs from the teacher model’s output was helpful for unpaired training.
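The intermediate-representation distillation idea can be sketched as follows. This is a minimal NumPy illustration, not the authors’ code: it assumes a learned 1x1 projection (here a plain matrix `proj`) that lifts the student’s narrower channel dimension up to the teacher’s before comparing features with a mean-squared error.

```python
import numpy as np

def feature_distill_loss(f_student, f_teacher, proj):
    """MSE between projected student features and teacher features.

    f_student: (C_s, H, W) intermediate activation of the compressed generator
    f_teacher: (C_t, H, W) activation of the original generator at the same depth
    proj:      (C_t, C_s) learned 1x1 projection lifting C_s channels to C_t
    """
    # A 1x1 convolution is a matrix product over the channel axis
    mapped = np.einsum('ts,shw->thw', proj, f_student)
    return np.mean((mapped - f_teacher) ** 2)

# Toy example: student layer has 16 channels, teacher 64, on an 8x8 feature map
rng = np.random.default_rng(0)
f_s = rng.standard_normal((16, 8, 8))
f_t = rng.standard_normal((64, 8, 8))
proj = rng.standard_normal((64, 16)) * 0.1
loss = feature_distill_loss(f_s, f_t, proj)
```

In practice this per-layer loss would be summed over several matched layers and added to the usual GAN objective, with `proj` trained jointly with the student.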

The team then used neural architecture search (NAS) to automatically find an efficient network with significantly lower computation cost and fewer parameters, and decoupled model training from architecture search by training a “once-for-all network” that contains all possible channel-number configurations.
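The once-for-all idea can be illustrated schematically: sub-networks with different channel counts share one full-width weight tensor, and each configuration simply slices the leading channels. The sketch below is an illustrative assumption about the mechanism, not the released implementation; the layer, channel choices, and helper names are hypothetical.

```python
import numpy as np

FULL_CHANNELS = 64                 # full width of one conv layer in the supernet
CHOICES = [16, 24, 32, 48, 64]     # candidate channel numbers searched over

# One shared weight matrix, standing in for a 1x1 conv: (out_channels, in_channels)
rng = np.random.default_rng(1)
full_weight = rng.standard_normal((FULL_CHANNELS, FULL_CHANNELS))

def sub_network_weight(full_w, out_c, in_c):
    """Extract a sub-network's weights by slicing the leading channels."""
    return full_w[:out_c, :in_c]

# During supernet training, a channel configuration is sampled each step,
# so every sub-network is updated through the shared tensor; after training,
# candidates can be evaluated directly without retraining each one.
out_c = int(rng.choice(CHOICES))
in_c = int(rng.choice(CHOICES))
w = sub_network_weight(full_weight, out_c, in_c)
```

Because all configurations read from the same tensor, evaluating a candidate architecture costs only a forward pass, which is what makes decoupling search from training attractive.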

GAN Compression framework

The researchers applied their framework to CycleGAN, an unpaired image-to-image translation model; pix2pix, a conditional-GAN-based paired image-to-image translation model; and GauGAN, the SOTA paired image-to-image model. The framework compressed all of them successfully across model architectures, learning algorithms, and supervision settings (paired or unpaired) while preserving image quality.

The authors say future work will include reducing the latency of models and finding efficient architectures for generative video models.

The paper GAN Compression: Efficient Architectures for Interactive Conditional GANs is on arXiv.

Journalist: Yuan Yuan | Editor: Michael Sarazen

