An Adversarial Review of “Adversarial Generation of Natural Language”
Yoav Goldberg

Unfortunately, this critique needs to be directed at NLP community reviewers as well: similar papers have already made it into top NLP conferences.

The Google paper at ACL 2016, “WIKIREADING: A Novel Large-scale Language Understanding Task over Wikipedia”, disregards almost ten years of work on relation extraction and distantly supervised relation extraction when creating its task. The authors also avoid comparing against any previous work by slightly changing the setup to extractive property filling (or information retrieval) from text, for the sake of having a training setup that can easily be used to build end-to-end differentiable models.

Later on, they find that Wikipedia articles are too long, so they train their models on only the first 300 words of each article.

Another good example: more than a year after the publication of the NIPS 2015 paper “Teaching Machines to Read and Comprehend”, which by then had accumulated nearly 100 citations, Danqi Chen, in her ACL 2016 paper “A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task”, built a baseline using linguistic features that surpasses almost all of the works that had already cited the original paper. I would have expected this baseline to exist in the original paper!
