Teaching computers how humans think by creating questions that computers cannot answer!

Vivek Nallur
Published in Inscripta AI
3 min read · Aug 31, 2019

By: Team Inscripta

From time to time, Team Inscripta will write about papers in the field of conversational AI that we believe will considerably advance the state of the art. This is the first in that series.

Almost everyone has asked Google an actual question (“What are the warning signs of a stroke?”, “How do I register to vote?”, etc.) and found that it returns a fairly sensible answer. So, how do computers do this? Do they really understand what we are asking? Or are they using sophisticated algorithms that produce answers which merely seem good enough? The answer to the second question is easy: yes. But the answer to the first question is harder, because it hinges on what it means to understand. We know how computers arrive at answers, because we write the algorithms that find those answers in a sea of text. How humans arrive at answers is less thoroughly understood.

At a very basic level, computers try to identify which words are related by using statistical patterns in their co-occurrence. These words are then matched against an ontology to figure out what things/beings/entities are being talked about, and what the intent of the question is. Using a large amount of text, say Wikipedia or a similar corpus, the computer tries to figure out which sentences could form possible answers. Using various techniques, it then ranks the possible answers by its confidence that each one is what the user is looking for, and outputs the answer with the highest confidence value. Humans do not necessarily think, or find answers, in the same way. For example, a quiz question might be: what is the connection between peanuts and Jimi Hendrix? Most human beings would instantly understand that the question is not about nuts such as peanuts, groundnuts, or walnuts, but possibly about the comic strip Peanuts, by Schulz.
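The retrieve-and-rank step above can be illustrated with a toy sketch. This is only a bag-of-words cosine-similarity ranker over a tiny made-up corpus; real systems use learned embeddings, ontologies, and large search indexes, and the corpus sentences here are invented for illustration.

```python
# Toy sketch: rank candidate answer sentences by a simple confidence score
# (cosine similarity of word counts). Not how production QA systems work.
import math
import re
from collections import Counter

def bag_of_words(text):
    """Tokenize to lowercase words and count occurrences."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def rank_answers(question, candidates):
    """Score every candidate against the question; highest confidence first."""
    q = bag_of_words(question)
    scored = [(cosine_similarity(q, bag_of_words(c)), c) for c in candidates]
    return sorted(scored, reverse=True)

# Invented mini-corpus standing in for Wikipedia-scale text.
corpus = [
    "A stroke happens when blood flow to the brain is blocked.",
    "Warning signs of a stroke include sudden numbness and confusion.",
    "You can register to vote online in many countries.",
]
ranking = rank_answers("What are the warning signs of a stroke?", corpus)
print(ranking[0][1])  # the highest-confidence candidate answer
```

Note that this ranker only rewards word overlap, which is exactly why a question hinging on a cultural reference (Peanuts the comic strip rather than peanuts the nut) would fool it.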

This ability to take a word and move from its literal meaning to its cultural meaning is unique to humans. Language understanding is difficult to algorithmize, and there have been many approaches to making computers understand natural language. In 2011, when IBM’s Watson won Jeopardy!, it seemed as if natural-language understanding had been solved. However, even now, when we interact with conversational agents such as Siri, Alexa, or the chatbots on many websites, it is painfully clear that AI, even with sophisticated machine-learning techniques and enormous amounts of textual data, is still a long way from truly understanding human language.

A new attempt by researchers at the University of Maryland takes a different approach. Instead of teaching machines how to read text and understand human intent, it gets human beings to create a dataset of questions that will reliably stump a computer. Normally, in quizzes, computers are at an advantage because they can search through huge volumes of text without forgetting. However, when human beings create questions that other human beings find easy but machines find difficult to answer, it brings into sharp focus how human beings think. For example, the phrase “down the rabbit hole” will be understood by many humans not only as a reference to Alice in Wonderland, but also to a (possibly confusing) voyage of discovery. The researchers report on how human beings create difficult-for-machines questions when they can see how the computer is searching through concepts and words. Curated over multiple rounds of quizzes by multiple teams, this new dataset of questions forms a sort of whetstone that other researchers can use to create better conversational AI.

Link to details of the paper: https://cmns.umd.edu/news-events/features/4470

If you would rather watch the researchers explain their method, here’s a link to the YouTube playlist: https://www.youtube.com/playlist?list=PLegWUnz91WfsBdgqm4wrwdgtPV-QsndlO

By the way, Snoopy’s best friend in the comic Peanuts is called Woodstock. And Jimi Hendrix played at Woodstock. But you knew that already, didn’t you?
