Algorithms can beat humans at reading comprehension, but they still don’t understand language

True comprehension requires meaning, not just pattern recognition

Mind AI
4 min read · Dec 5, 2018


The ability to perform natural language reasoning will distinguish the Mind AI reasoning engine.

By John Doe, Chief Scientist at Mind AI

One of the core features of the Mind AI reasoning engine (and we are getting closer to releasing a public demo of it) is its ability to perform natural language reasoning.

If you follow the latest developments in AI from industry leaders in Silicon Valley, you are probably more familiar with the term “natural language processing.”

Natural language processing is a subfield of artificial intelligence research focused on training computers to manipulate human language. Most approaches build on the latest advances in machine learning and neural networks.

As modern computing provides ever greater processing power, scientists can fine-tune algorithms that search for patterns in massive repositories of human language until those algorithms pass a variety of tests designed to measure a machine's ability to process and manipulate language. These tests evaluate skills such as reading comprehension and completing a sentence in a logical way.

Here are a few of the recent advances that have been in the news:

— Salesforce debuted a text summarization tool that "is dramatically better than anything developed previously, according to a common software tool for measuring the accuracy of text summaries," according to MIT Technology Review. Salesforce's tool uses machine learning to pull snippets of sentences from a longer text and piece them together into a summary. Its results are far from perfect, but reviewers saw the tool as an indication of the bright future ahead for AI-based text summarization.

— Alibaba and Microsoft have both developed AI models that can beat a human on a reading-comprehension test developed by Stanford.

— Google has developed a tool called BERT that can guess missing words anywhere in a sentence, and can understand the relationships between many words in a sentence. BERT was trained on thousands of self-published books, including romance novels and science fiction, as well as the entire Wikipedia database.
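To make the extractive approach concrete: pulling sentences out of a longer text and piecing them together can be done even without machine learning. The sketch below is not Salesforce's method, just a classic frequency-based extractive summarizer; the function name and scoring rule are invented for illustration.

```python
import re
from collections import Counter

def summarize(text, num_sentences=2):
    """Keep the sentences whose words occur most often in the full text."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))
    # Score each sentence as the total frequency of its words.
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r'\w+', s.lower())),
        reverse=True,
    )
    top = set(scored[:num_sentences])
    # Re-emit the chosen sentences in their original order.
    return ' '.join(s for s in sentences if s in top)
```

A learned model like Salesforce's replaces the crude frequency score with one trained on millions of article/summary pairs, but the basic extract-and-stitch shape is the same.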

These advances are all impressive, and could all be put to use in applications that help humans handle more and more information. But in his article about the Salesforce tool, MIT Technology Review reporter Will Knight points out, “Summarizing text perfectly would require genuine intelligence, including common sense knowledge and a mastery of language.”

We couldn’t agree more. And we also believe that as long as natural language processing is developed with machine-learning algorithms and neural networks, it will never achieve a mastery of language.

The key element missing from all of these advances in natural language processing is that the algorithms have no concept of what they are doing. Their applications are still narrow.

Salesforce’s text summarization tool would struggle to relate the summaries of two different articles, showing what the information in one could mean in the context of the other. Google, Alibaba, and Microsoft’s reading comprehension achievements are impressive, but the scientists developing them can’t pinpoint what makes them succeed or fail, because machine-learning algorithms provide no transparency into their reasoning process.

Our approach rejects the statistical modeling approach, and instead focuses on language comprehension and reasoning as the foundation. We work off of meanings, instead of patterns. We are building an engine that acts like human reasoning, which means we can trace its processes and immediately spot errors in its logic and holes in its understanding.

Our early tests of the engine showed that it can immediately reason with new information. It does not need to process thousands of human sentences before it can understand one; it only needs to be taught the meanings of the words in a single sentence, and it can then interpret that sentence. It retains its knowledge of those words and can apply it to a second sentence that uses some of them.
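Mind AI has not published the internals of its engine, so the following is only a hypothetical toy sketch of the idea described above: meanings taught from one sentence are retained and reused to reason about a later one. All names here (`ToyReasoner`, `learn`, `entails`) and the "X is a Y" sentence pattern are invented purely for illustration.

```python
class ToyReasoner:
    """Toy illustration: learn word meanings once, reuse them for later sentences."""

    def __init__(self):
        self.is_a = {}  # maps a concept to its parent concept

    def learn(self, sentence):
        # Handles only simple "X is a Y." sentences, for the sake of the sketch.
        words = sentence.rstrip('.').lower().split()
        if len(words) == 4 and words[1:3] == ['is', 'a']:
            self.is_a[words[0]] = words[3]

    def entails(self, thing, category):
        # Follow the is-a chain built from earlier sentences.
        current = thing.lower()
        while current in self.is_a:
            current = self.is_a[current]
            if current == category.lower():
                return True
        return False

r = ToyReasoner()
r.learn("Rex is a dog.")     # one sentence is enough to learn about "Rex"
r.learn("Dog is a mammal.")  # knowledge of "dog" carries over to this sentence
```

After these two sentences, the reasoner can answer a question neither sentence stated directly (is Rex a mammal?), and every step of the chain it followed is inspectable, which is the transparency contrast with statistical models drawn above.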

We look forward to letting our engine flex its comprehension skills on the same tests that these notable predecessors have passed. We expect to achieve better results in these narrow applications, but we are most excited to test our engine’s abilities in broad applications, as its knowledge in one domain builds on its knowledge in another.

To stay up-to-date on our progress, sign up for our email list, talk to us on Telegram, or follow us here on Medium.
