Review — CoGAN: Coupled Generative Adversarial Networks (GAN)

With Weight Sharing, Generates Correlated Outputs in Different Domains for the Same Input, Outperforms CGAN

Sik-Ho Tsang
Mar 28 · 6 min read
Face Generation With and Without Smiling

In this story, Coupled Generative Adversarial Networks (CoGAN), by Mitsubishi Electric Research Labs (MERL), is reviewed.

The paper concerns the problem of learning a joint distribution of multi-domain images from data.


This is a paper in 2016 NIPS with over 1100 citations. (Sik-Ho Tsang @ Medium)


1. Coupled Generative Adversarial Network (CoGAN)


With weight sharing, a trained CoGAN can be used to synthesize pairs of corresponding images — pairs of images sharing the same high-level abstraction but having different low-level realizations.

1.1. Generators

The idea is to force the first layers of g1 and g2 to have identical structure and share the weights.

With weight sharing, the pair of images can share the same high-level abstraction while having different low-level realizations.
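As a sketch, this shared-trunk idea can be written in PyTorch as follows. The layer sizes and module names here are illustrative, not the paper's exact architecture: a single `shared` trunk decodes the high-level abstraction from the latent code, and two domain-specific heads render the low-level realizations.

```python
import torch
import torch.nn as nn

class CoupledGenerators(nn.Module):
    """Illustrative CoGAN generators: first layers shared, last layers per domain."""

    def __init__(self, z_dim=100, hidden=256, out_dim=784):
        super().__init__()
        # Shared trunk: identical structure AND identical weights for g1 and g2.
        self.shared = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Domain-specific heads: different low-level realizations per domain.
        self.head1 = nn.Sequential(nn.Linear(hidden, out_dim), nn.Tanh())
        self.head2 = nn.Sequential(nn.Linear(hidden, out_dim), nn.Tanh())

    def forward(self, z):
        h = self.shared(z)  # the same high-level abstraction feeds both heads
        return self.head1(h), self.head2(h)

g = CoupledGenerators()
x1, x2 = g(torch.randn(4, 100))  # a pair of corresponding images per latent code z
```

Because both heads receive the same trunk output `h` for the same `z`, each sampled latent code yields a corresponding image pair.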

1.2. Discriminators

In the discriminators f1 and f2, the last layers, which extract high-level features, also share weights. However, it was later found that this does not help much with the quality of the synthesized images. Still, the weight sharing is kept: the weight-sharing constraint in the discriminators helps reduce the total number of parameters in the network, though it is not essential for learning a joint distribution.
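A minimal sketch of the discriminator side, mirroring the generators: early layers are per-domain (low-level features differ across domains), while the shared last layers classify from high-level features. Again, the layer sizes and names are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CoupledDiscriminators(nn.Module):
    """Illustrative CoGAN discriminators: first layers per domain, last layers shared."""

    def __init__(self, in_dim=784, hidden=256):
        super().__init__()
        # Domain-specific feature extractors (low-level features differ per domain).
        self.feat1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.LeakyReLU(0.2))
        self.feat2 = nn.Sequential(nn.Linear(in_dim, hidden), nn.LeakyReLU(0.2))
        # Shared classifier head: mainly reduces the parameter count.
        self.shared = nn.Sequential(
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),  # real/fake logit per domain
        )

    def forward(self, x1, x2):
        return self.shared(self.feat1(x1)), self.shared(self.feat2(x2))

d = CoupledDiscriminators()
s1, s2 = d(torch.randn(4, 784), torch.randn(4, 784))  # one logit per image
```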

1.3. Learning

Basically, in each alternating gradient update step, the two discriminators are updated one after the other, and then the two generators are updated one after the other.
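One such alternating update step can be sketched as below, using standard non-saturating GAN losses. The tiny stand-in generator/discriminator modules and all hyperparameters here are illustrative assumptions, kept small so the step runs end-to-end:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class G(nn.Module):  # stand-in coupled generators (shared first layer)
    def __init__(self):
        super().__init__()
        self.shared = nn.Linear(16, 32)
        self.h1, self.h2 = nn.Linear(32, 8), nn.Linear(32, 8)
    def forward(self, z):
        h = torch.relu(self.shared(z))
        return torch.tanh(self.h1(h)), torch.tanh(self.h2(h))

class D(nn.Module):  # stand-in coupled discriminators (shared last layer)
    def __init__(self):
        super().__init__()
        self.f1, self.f2 = nn.Linear(8, 32), nn.Linear(8, 32)
        self.shared = nn.Linear(32, 1)
    def forward(self, x1, x2):
        return self.shared(torch.relu(self.f1(x1))), self.shared(torch.relu(self.f2(x2)))

g, d = G(), D()
opt_g = torch.optim.Adam(g.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(d.parameters(), lr=2e-4)

def train_step(x1_real, x2_real):
    bs = x1_real.size(0)
    ones, zeros = torch.ones(bs, 1), torch.zeros(bs, 1)

    # 1) Update both discriminators: real pairs -> 1, (detached) fake pairs -> 0.
    x1_fake, x2_fake = g(torch.randn(bs, 16))
    r1, r2 = d(x1_real, x2_real)
    f1, f2 = d(x1_fake.detach(), x2_fake.detach())
    d_loss = (F.binary_cross_entropy_with_logits(r1, ones)
              + F.binary_cross_entropy_with_logits(r2, ones)
              + F.binary_cross_entropy_with_logits(f1, zeros)
              + F.binary_cross_entropy_with_logits(f2, zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Update both generators: make the discriminators output 1 on fakes.
    x1_fake, x2_fake = g(torch.randn(bs, 16))
    f1, f2 = d(x1_fake, x2_fake)
    g_loss = (F.binary_cross_entropy_with_logits(f1, ones)
              + F.binary_cross_entropy_with_logits(f2, ones))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

d_loss, g_loss = train_step(torch.randn(32, 8), torch.randn(32, 8))
```

Note that the shared trunk of the generators receives gradients from both domains' adversarial losses, which is what couples the two marginal distributions into a joint one.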

Network architecture for digit generation
Network architecture for face generation

2. Experimental Results

2.1. Digit Generation

Left: Edge MNIST, Right: Negative MNIST

It was found that the performance was positively correlated with the number of weight-sharing layers in the generative models, but uncorrelated with the number of weight-sharing layers in the discriminative models.

2.2. Face Generation

Generation of face images with different attributes using CoGAN.

As one travels in the latent space, the faces gradually change from one person to another. Such deformations are consistent across both domains.

Note that it is difficult to create a dataset of corresponding images for some attributes, such as blond hair, since the subjects would have to dye their hair.

2.3. Color and Depth Images Generation

Generation of color and depth images using CoGAN.

The CoGAN recovered the appearance–depth correspondence without supervision.

2.4. Potential Applications

Unsupervised domain adaptation performance comparison.
Cross-domain image transformation.

Later on, the authors extended CoGAN to image-to-image translation, published in 2017 NIPS. I hope to review it in the future.


[2016 NIPS] [CoGAN]
Coupled Generative Adversarial Networks

Generative Adversarial Network (GAN)

Image Synthesis [GAN] [CGAN] [LAPGAN] [DCGAN] [CoGAN]
Image-to-image Translation [Pix2Pix]
Super Resolution [SRGAN & SRResNet] [EnhanceNet] [ESRGAN]
Blur Detection [DMENet]
Camera Tampering Detection [Mantini’s VISAPP’19]
Video Coding
[VC-LAPGAN] [Zhu TMM’20] [Zhong ELECGJ’21]

My Other Previous Paper Readings

