Retrieving Reasoning Paths for Answering Complex Questions

Published in SyncedReview | Feb 26, 2020

Question Answering (QA) is a task where a model is asked to answer questions posed in natural language given a number of source text documents. Multi-hop open-domain questions are particularly challenging for machines, as answering them requires collecting multiple pieces of evidence scattered across multiple documents. This is difficult to do with common term-based retrieval methods, as the evidence documents may have little lexical overlap or semantic relationship to the original question.

The most common approach to open-domain QA is to use a non-parameterized model (such as TF-IDF or BM25) to retrieve a fixed set of documents, often from an open source such as Wikipedia. A neural reading comprehension model then extracts the answer span from the retrieved documents. Although such pipeline methods have been successful in single-hop QA, they often fail to retrieve the essential evidence needed to answer multi-hop questions. Moreover, scoring each document independently against the question cannot capture the relationships between evidence documents via the bridge entities that multi-hop inference requires.
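To make the limitation concrete, here is a minimal sketch of the retrieve-then-read pipeline, with scikit-learn's TfidfVectorizer standing in for the term-based retriever. The tiny corpus and question are made-up illustrative data, not from the paper.

```python
# A minimal sketch of the standard retrieve-then-read pipeline, with
# scikit-learn's TfidfVectorizer standing in for the term-based retriever.
# The tiny corpus and question are made-up illustrative data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Lionel Messi plays for Inter Miami.",       # relevant first hop
    "Inter Miami is based in Fort Lauderdale.",  # bridge document
    "The Eiffel Tower is located in Paris.",     # unrelated
]
question = "In which city does Messi's club play?"

# Each document is scored against the question independently.
vectorizer = TfidfVectorizer().fit(corpus + [question])
scores = cosine_similarity(
    vectorizer.transform([question]), vectorizer.transform(corpus)
).ravel()
top_k = scores.argsort()[::-1][:2]
print([corpus[i] for i in top_k])

# A reading comprehension model would then extract the answer span from the
# retrieved set. The failure mode: the bridge document shares almost no terms
# with the question, so it may rank no higher than an unrelated document.
```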

Aiming to improve on this prevailing open-domain QA approach of coupling a TF-IDF-based retriever with a state-of-the-art neural reading comprehension model, researchers from the University of Washington, Salesforce Research and the Allen Institute for Artificial Intelligence recently introduced a new graph-based recurrent retrieval approach. The trainable framework retrieves reasoning paths, i.e. sequences of evidence paragraphs, by formulating the task as a neural path search over a massive Wikipedia document graph constructed from the raw Wikipedia articles and their internal links.
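A toy sketch of what such a document graph looks like follows; the `articles` dict is made-up illustrative data, and whole articles stand in for the paragraph-level nodes the paper actually uses.

```python
# A minimal sketch of the Wikipedia document graph that the path search runs
# over: nodes are articles (paragraphs in the paper) and a directed edge
# connects an article to each page its text hyperlinks to. The `articles`
# dict is made-up illustrative data.
articles = {
    "Lionel Messi": {"text": "...", "links": ["Inter Miami", "Argentina"]},
    "Inter Miami": {"text": "...", "links": ["Fort Lauderdale"]},
    "Fort Lauderdale": {"text": "...", "links": []},
}

# Adjacency list: the candidate documents reachable in one hop from each node.
graph = {title: article["links"] for title, article in articles.items()}

# A two-hop reasoning path for "In which city is Messi's club based?":
path = ["Lionel Messi", "Inter Miami"]
assert path[1] in graph[path[0]]  # the internal link supplies the bridge entity
```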

Conditioned on the history of previously retrieved documents, the graph-based recurrent retriever sequentially selects each evidence document, forming multiple candidate inference paths in the entity graph. A reader model built on top of an existing reading comprehension architecture then answers the question by ranking the retrieved reasoning paths and extracting the answer from the best one. This interplay between the retriever model and the reader model enables the method to answer complex questions by exploring more accurate inference paths compared to other methods.
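Below is a minimal PyTorch sketch of the sequential, history-conditioned retrieval idea. Everything in it is a simplifying assumption: an embedding table stands in for a BERT paragraph encoder, the doc-id graph is toy data, and a greedy loop replaces the paper's beam search over paths (which also conditions the initial state on the question and terminates with an end-of-evidence symbol).

```python
# A minimal PyTorch sketch of sequential, history-conditioned retrieval.
# An nn.Embedding stands in for a BERT paragraph encoder, the doc-id graph
# is toy data, and a greedy loop replaces the paper's beam search.
import torch
import torch.nn as nn

HIDDEN = 16
encode = nn.Embedding(10, HIDDEN)  # stand-in paragraph encoder (doc id -> vector)
rnn = nn.GRUCell(HIDDEN, HIDDEN)   # recurrent state over the retrieval history
graph = {0: [1, 2], 1: [3], 2: [4], 3: [], 4: []}  # hyperlink adjacency list

h = torch.zeros(1, HIDDEN)  # in the paper, initialized from the question
node, path = 0, [0]
for _ in range(2):  # extend the reasoning path by up to two hops
    h = rnn(encode(torch.tensor([node])), h)  # fold the latest document into the state
    candidates = graph[node]
    if not candidates:
        break
    cand_vecs = encode(torch.tensor(candidates))  # (k, HIDDEN)
    scores = cand_vecs @ h.squeeze(0)             # score neighbors against history
    node = candidates[scores.argmax().item()]
    path.append(node)
print("reasoning path:", path)
# The reader then ranks complete candidate paths and extracts the answer span.
```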

Figure: Overview of the graph-based retriever-reader framework

The retriever was trained in a supervised manner on annotated evidence paragraphs, using a negative sampling and data augmentation strategy during training together with beam-search decoding at inference time. Multi-hop questions were paired with multiple annotated paragraphs, while single-hop questions were paired with a single paragraph, and a ground-truth reasoning path was derived from the available annotated data in each dataset. To relax and stabilize the training process, the researchers augmented the training data with additional reasoning paths from which the answer can also be derived.
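As a rough illustration of how one such training example might be assembled, here is a hedged sketch. The make_example helper, the [EOE] marker placement and the uniform negative sampling are illustrative assumptions rather than the paper's exact procedure, which draws its negatives from TF-IDF retrieval results and hyperlinked paragraphs.

```python
# A rough sketch of assembling one supervised training example for the
# retriever. The make_example helper, [EOE] placement and uniform negative
# sampling are illustrative assumptions, not the paper's exact procedure.
import random

EOE = "[EOE]"  # end-of-evidence marker appended to every ground-truth path

def make_example(question, gold_path, corpus, num_negatives=3):
    """Pair the annotated reasoning path with sampled distractor paragraphs."""
    distractor_pool = [p for p in corpus if p not in gold_path]
    negatives = random.sample(distractor_pool,
                              k=min(num_negatives, len(distractor_pool)))
    return {"question": question,
            "gold_path": list(gold_path) + [EOE],
            "negatives": negatives}

corpus = ["para_a", "para_b", "para_c", "para_d", "para_e"]
print(make_example("which city ...?", ["para_b", "para_d"], corpus))
```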

Table: HotpotQA development set results

The researchers evaluated their graph-based recurrent retriever on the open-domain Wikipedia-sourced datasets HotpotQA, SQuAD Open and Natural Questions Open, with the new approach significantly outperforming all previous state-of-the-art methods.

The paper Learning to Retrieve Reasoning Paths Over Wikipedia Graph for Question Answering is on arXiv.

Author: Xuehan Wang | Editor: Michael Sarazen

