ACL 2018 Announces Its Five Best Papers

Synced · Published in SyncedReview · Jun 11, 2018

The Association for Computational Linguistics (ACL) will hold its 56th Annual Meeting July 15–20 in Melbourne, Australia. Yesterday, the ACL 2018 organising committee announced its three best long papers and two best short papers.

From a total of 1544 submitted papers, ACL accepted 258 of 1018 long-paper submissions and 126 of 526 short-paper submissions, 384 papers in all, for an overall acceptance rate of 24.9 percent.

Taking top honours in the long papers category are Finding Syntax in Human Encephalography With Beam Search, from a research group led by John Hale; Learning to Ask Good Questions: Ranking Clarification Questions using Neural Expected Value of Perfect Information, by Sudha Rao and Hal Daumé III of the University of Maryland; and Let’s Do It “again”: A First Computational Approach to Detecting Adverbial Presupposition Triggers, from a McGill University and MILA research group. The Hale paper is as yet unpublished, while the other two are already available online.

ACL’s choices for the two best short papers are Know What You Don’t Know: Unanswerable Questions for SQuAD, from a Stanford University research group including Pranav Rajpurkar, Robin Jia and Percy Liang; and ‘Lighter’ Can Still Be Dark: Modeling Comparative Color Descriptions, by Olivia Winn and Smaranda Muresan of Columbia University. Neither of these papers has been published.

Below are the abstracts of the two papers that are already available online.

https://www.cs.mcgill.ca/~jkabba/acl2018paper.pdf

We introduce the task of predicting adverbial presupposition triggers such as also and again. Solving such a task requires detecting recurring or similar events in the discourse context, and has applications in natural language generation tasks such as summarization and dialogue systems. We create two new datasets for the task, derived from the Penn Treebank and the Annotated English Gigaword corpora, as well as a novel attention mechanism tailored to this task. Our attention mechanism augments a baseline recurrent neural network without the need for additional trainable parameters, minimizing the added computational cost of our mechanism. We demonstrate that our model statistically outperforms a number of baselines, including an LSTM-based language model.
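The claim worth pausing on is the attention mechanism that adds no trainable parameters beyond the base network. Here is a minimal PyTorch-style sketch of that general idea, not the authors’ actual architecture, with all names hypothetical: attention weights are computed as plain dot products between the encoder’s final hidden state and each timestep, so the attention itself introduces no new weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParamFreeAttentionClassifier(nn.Module):
    """Illustrative sketch only: an LSTM encoder with an attention
    layer that adds no trainable parameters of its own. This is a
    hypothetical stand-in, not the model from the paper."""

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Only this classifier adds parameters on top of the base
        # encoder; the attention below is parameter-free.
        self.out = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, tokens):                        # tokens: (batch, seq)
        states, (h_n, _) = self.lstm(self.embed(tokens))
        query = h_n[-1]                               # final hidden state, (batch, hidden)
        # Parameter-free attention: dot-product scores against the
        # final state, softmax over time, then a weighted sum.
        scores = torch.bmm(states, query.unsqueeze(2)).squeeze(2)
        weights = F.softmax(scores, dim=1)
        context = torch.bmm(weights.unsqueeze(1), states).squeeze(1)
        return self.out(torch.cat([context, query], dim=1))
```

The design point the abstract highlights survives even in this toy version: relative to a plain LSTM classifier, the attention step changes how hidden states are pooled but leaves the parameter count of the encoder untouched.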

https://arxiv.org/pdf/1805.04655.pdf

Inquiry is fundamental to communication, and machines cannot effectively collaborate with humans unless they can ask questions. In this work, we build a neural network model for the task of ranking clarification questions. Our model is inspired by the idea of expected value of perfect information: a good question is one whose expected answer will be useful. We study this problem using data from StackExchange, a plentiful online resource in which people routinely ask clarifying questions to posts so that they can better offer assistance to the original poster. We create a dataset of clarification questions consisting of ∼77K posts paired with a clarification question (and answer) from three domains of StackExchange: askubuntu, unix and superuser. We evaluate our model on 500 samples of this dataset against expert human judgments and demonstrate significant improvements over controlled baselines.
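The expected value of perfect information idea in that abstract reduces to a compact scoring rule: a question’s score is the sum, over possible answers, of the probability of that answer times the usefulness of the post once updated with it. Below is a hypothetical Python sketch of that rule; the answer_prob and utility functions stand in for the paper’s learned neural components, and none of these names come from the authors’ code.

```python
from typing import Callable, Sequence

def evpi_score(post: str,
               question: str,
               candidate_answers: Sequence[str],
               answer_prob: Callable[[str, str, str], float],
               utility: Callable[[str, str], float]) -> float:
    """Expected value of perfect information for one clarification
    question: sum over candidate answers of P(answer | post, question)
    times the utility of the post combined with that answer."""
    return sum(answer_prob(post, question, a) * utility(post, a)
               for a in candidate_answers)

def rank_questions(post: str,
                   questions: Sequence[str],
                   candidate_answers: Sequence[str],
                   answer_prob: Callable[[str, str, str], float],
                   utility: Callable[[str, str], float]) -> list:
    # Highest expected value first: the best question is the one
    # whose expected answer adds the most useful information.
    return sorted(questions,
                  key=lambda q: evpi_score(post, q, candidate_answers,
                                           answer_prob, utility),
                  reverse=True)
```

In the paper itself both components are neural models trained on the StackExchange data; the sketch only shows how their outputs combine into a ranking.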

Author: Victor Lu | Editors: Michael Sarazen, Tony Peng

Subscribe here to get insightful tech news, reviews and analysis!
