Self-Adversarial Learning with Comparative Discrimination for Text Generation

Synced · Published in SyncedReview · 3 min read · Feb 22, 2020

Content provided by Wangchunshu Zhou, the first author of the paper Self-Adversarial Learning with Comparative Discrimination for Text Generation.

Conventional Generative Adversarial Networks (GANs) for text generation tend to have issues of reward sparsity and mode collapse that affect the quality and diversity of generated samples. To address the issues, we propose a novel self-adversarial learning (SAL) paradigm for improving GANs’ performance in text generation.

What’s New: A novel comparative discriminator and self-play fashion for training text GANs. In contrast to standard GANs that use a binary classifier as its discriminator to predict whether a sample is real or generated, SAL employs a comparative discriminator which is a pairwise classifier for comparing the text quality between a pair of samples. During training, SAL rewards the generator when its currently generated sentence is found to be better than its previously generated samples. This self-improvement reward mechanism allows the model to receive credits more easily and avoid collapsing towards the limited number of real samples, which not only helps alleviate the reward sparsity issue but also reduces the risk of mode collapse. Experiments on text generation benchmark datasets show that our proposed approach substantially improves both the quality and the diversity, and yields more stable performance compared to the previous GANs for text generation.
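The pairwise setup described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the toy feature vectors and linear scorer stand in for a neural text encoder and classifier, and the Bradley–Terry-style logistic link is one simple way to turn two quality scores into a "which sample is better" probability.

```python
# Illustrative sketch of a comparative (pairwise) discriminator.
# Instead of labeling a single sample as real/fake, it scores a pair
# and predicts which sample is of higher quality.
import math

def comparative_discriminator(features_a, features_b, weights):
    """Return P(sample A is better than sample B).

    Here each "sample" is a toy feature vector; a real model would
    encode the two sentences and feed the pair to a neural classifier.
    """
    # Score each sample with a shared linear scorer, then compare the
    # scores through a logistic link on their difference.
    score_a = sum(w * f for w, f in zip(weights, features_a))
    score_b = sum(w * f for w, f in zip(weights, features_b))
    return 1.0 / (1.0 + math.exp(-(score_a - score_b)))

# Toy usage: sample A has stronger features, so it should be preferred.
p = comparative_discriminator([1.0, 2.0], [0.5, 1.0], weights=[1.0, 1.0])
```

Because the discriminator outputs a relative preference rather than an absolute real/fake label, every pair of samples yields a usable training signal, which is what eases reward sparsity.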

How It Works: SAL trains a comparative discriminator to compare the generator's current samples against its previously generated ones, rewarding the generator for relative improvement over its own past outputs.
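The self-improvement reward mechanism can be sketched as follows. This is a minimal, hypothetical version: `compare` stands in for the trained comparative discriminator, and the memory of past generations is a plain list; the actual paper's training loop and memory scheme may differ.

```python
# Hedged sketch of SAL's self-play reward: the generator is credited
# when its current sample is judged better than its own earlier samples.

def self_play_reward(current_sample, past_samples, compare):
    """Average probability that `current_sample` beats each past sample."""
    if not past_samples:
        return 0.5  # no reference yet: neutral reward
    probs = [compare(current_sample, old) for old in past_samples]
    return sum(probs) / len(probs)

# Toy comparator for illustration only: prefers the longer sentence.
def toy_compare(a, b):
    return 1.0 if len(a) > len(b) else 0.0

reward = self_play_reward("a longer generated sentence",
                          ["short", "tiny"], toy_compare)  # -> 1.0
```

Since the reference set is the generator's own history rather than a fixed pool of real sentences, the reward stays informative throughout training, and the generator is not pushed to collapse onto a small set of real samples.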

Key Insights:

  1. A comparative discriminator may work better for training GANs.
  2. The self-play fashion may be helpful for training GANs.

Anything else: Applying the proposed approach to training conventional GANs (i.e., for image generation) may be promising.

The paper Self-Adversarial Learning with Comparative Discrimination for Text Generation is on OpenReview.

Meet the authors Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei and Ming Zhou from Beihang University and Microsoft Research Asia.

Share Your Research With Synced

Share My Research is Synced’s new column that welcomes scholars to share their own research breakthroughs with over 1.5M global AI enthusiasts. Beyond technological advances, Share My Research also calls for interesting stories behind the research and exciting research ideas. Share your research with us by clicking here.


