Israeli AI Research Company Led by Stanford Professor and Mobileye CEO Introduces Model to Solve Text Ambiguity

Synced · Published in SyncedReview · 6 min read · Aug 16, 2019

Given a sentence containing homonyms, such as “He is a great bass player who hates eating bass,” humans can be quite confident that the first “bass” refers to a musical instrument and the second to a fish. AI algorithms, however, are likely to stumble over such simple textual ambiguities. “Winograd schemas” can also perplex an AI system. A classic example is “The trophy doesn’t fit in the suitcase because it is too big,” where a system struggles to decide whether the pronoun “it” refers to “trophy” or “suitcase.”
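For a concrete look at the problem, the WordNet lexical database (which also figures in SenseBERT’s training, described below) lists many distinct senses for “bass.” Here is a minimal sketch using NLTK’s WordNet interface, assuming NLTK is installed and the wordnet corpus has been downloaded:

```python
# Enumerate the WordNet senses of "bass" and try the classic Lesk baseline,
# which picks the sense whose dictionary gloss best overlaps the context words.
# Setup: pip install nltk && python -c "import nltk; nltk.download('wordnet')"
from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

for synset in wn.synsets("bass"):
    print(synset.name(), "-", synset.definition())

context = "he is a great bass player who hates eating bass".split()
print(lesk(context, "bass"))  # a weak baseline -- exactly the gap SenseBERT targets
```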

In a bid to teach algorithms to better understand the ambiguities of human language, Israeli research company AI21 Labs today published the paper SenseBERT: Driving Some Sense into BERT, which proposes a new model that significantly improves lexical disambiguation abilities and achieves state-of-the-art results on the challenging Word in Context (WiC) task, in which a system must decide whether a word carries the same meaning in two different sentences.

A noteworthy innovation of the paper is that SenseBERT is pretrained to predict not only masked words but also their meanings in a given context. To achieve this, the researchers added a task in which the network predicts each masked word’s supersense (a coarse semantic category), using the English lexical database WordNet as the labeling reference. This sense-prediction objective is trained alongside the standard masked-word objective of BERT (Google’s 2018 Bidirectional Encoder Representations from Transformers model).
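As a rough illustration, such a joint objective could be sketched in PyTorch as below. This is a minimal sketch based on the article’s description, not AI21 Labs’ actual implementation; the tensor shapes, the 45 supersense categories (WordNet’s lexicographer classes), and the use of -100 to mark unmasked positions are all assumptions.

```python
import torch
import torch.nn as nn

class JointMaskedObjective(nn.Module):
    """Hypothetical joint loss: predict both the masked word and its supersense."""
    def __init__(self, hidden_dim: int, vocab_size: int, num_supersenses: int):
        super().__init__()
        self.word_head = nn.Linear(hidden_dim, vocab_size)        # masked-word logits
        self.sense_head = nn.Linear(hidden_dim, num_supersenses)  # supersense logits
        self.ce = nn.CrossEntropyLoss(ignore_index=-100)          # -100 = unmasked position

    def forward(self, hidden_states, word_labels, sense_labels):
        # hidden_states: (batch, seq_len, hidden_dim) from a BERT-style encoder
        word_loss = self.ce(self.word_head(hidden_states).flatten(0, 1),
                            word_labels.flatten())
        sense_loss = self.ce(self.sense_head(hidden_states).flatten(0, 1),
                             sense_labels.flatten())
        return word_loss + sense_loss  # both signals train the shared encoder

# Shapes-only example with random activations and a single masked position.
h = torch.randn(2, 16, 768)
words = torch.full((2, 16), -100, dtype=torch.long); words[0, 3] = 42
senses = torch.full((2, 16), -100, dtype=torch.long); senses[0, 3] = 7
print(JointMaskedObjective(768, 30522, 45)(h, words, senses))
```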

The SenseBERT paper is one of the first research projects from AI21 Labs, which had operated in stealth mode over the past year and a half. Headquartered in Tel Aviv, Israel, the company was founded in 2017 by respected Stanford University Professor of Computer Science (emeritus) and AI Index initiator Yoav Shoham and Ori Goshen, a former cybersecurity team leader at the Israeli military’s Unit 8200 intelligence corps.

Following in the footsteps of DeepMind in London and OpenAI in San Francisco, AI21 Labs pairs a commercial AI business with a distinctive research focus: bridging traditional knowledge representation methods and deep neural networks. Knowledge representation, a symbolic AI approach that flourished in the 1980s, encodes facts, rules, and relationships about the world in a form that machines can reason over.

The marriage of symbolic and neural approaches has been gaining momentum in recent years due to the looming limits of neural networks. Despite their significant progress in many areas of AI, neural networks still struggle with semantics. Moreover, even the smartest natural language understanding (NLU) systems can be easily fooled by adversarial data samples in targeted language tasks. In 2017, Stanford researchers discovered that adding adversarial sentences to SQuAD (Stanford Question Answering Dataset) passages lowered the accuracy of 16 published models from an average F1 score of 75 percent to just 36 percent.

Similar issues are also emerging in text-generation systems. Earlier this year OpenAI released the language model GPT-2, which can generate realistic paragraphs of text. Stanford University Associate Professor of Computer Science and Statistics and SQuAD co-creator Percy Liang, however, noted a caveat: “The [GPT-2] language model can write like a human, but it doesn’t have a clue what it’s saying.”

“AI still lacks the commonsense of a 5-year-old. We’re far from capturing human intelligence,” says Dr. Shoham.

Best known for his tremendous contributions to knowledge representation and game theory, Dr. Shoham envisions improved neural networks augmented with knowledge representation.

This Friday Dr. Shoham was honoured with a Research Excellence Award at the International Joint Conference on Artificial Intelligence (IJCAI 2019). He told the Macau audience: “I don’t see evidence that neural networks will learn arithmetic, time and space, causation, mental state, speech acts…, on a sub-evolutionary timescale. On the other hand, knowledge representation’s focus has been precisely on codifying these elusive concepts: time, action, belief.”

Yoav Shoham

In 2017 Dr. Shoham met Goshen through an Israeli non-profit project that brings programming skills to underprivileged communities. Finding themselves like-minded, the two decided to start AI21 Labs (“AI for the 21st century”) with the mission of building AI systems “with an unprecedented capacity to understand and generate natural language.”

Dr. Shoham’s friend Amnon Shashua, CEO of Mobileye, is both a major investor in and chairman of AI21 Labs. The company has a team of 20 and has secured total funding of US$9.5 million from Pitango Ventures, 8VC, and others.

Mobileye CEO Amnon Shashua

Announced along with SenseBERT is “HAIM,” a homegrown text-generation system that offers greater controllability than other cutting-edge text generators such as OpenAI’s GPT-2 and the University of Washington’s Grover. Most text generators synthesize text from a human-written premise but often go off-topic, lose coherence, or contradict the original context. HAIM works a bit differently: the model is given a beginning and an ending and is tasked with bridging them with generated text that stays on topic. The output can also be adjusted for length.
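A toy sketch of that contract in Python follows; the class and function names are illustrative assumptions, not AI21 Labs’ actual API, and the placeholder tokens stand in for model-generated text.

```python
# Fill-in-the-middle contract: both ends are fixed by the user, and the
# length of the generated bridge between them is adjustable.
class Bridger:
    """Stand-in for a HAIM-like model; fill() would return generated text."""
    def fill(self, beginning: str, ending: str, n_words: int) -> str:
        return " ".join("<gen>" for _ in range(n_words))  # placeholder tokens

def bridge(model: Bridger, beginning: str, ending: str, n_words: int = 50) -> str:
    middle = model.fill(beginning, ending, n_words)  # should stay on topic
    return f"{beginning} {middle} {ending}"

print(bridge(Bridger(), "Once upon a time,", "and that is how it ended.", n_words=8))
```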

Appropriately, HAIM came up with its own name. Researchers input the opening phrase “The team needed a name. The best suggestion…” and a corresponding ending phrase “…everybody agreed it was a great name for a state-of-the-art natural language generator.” The model proposed “HAIM” and the researchers reverse-engineered the acronym to get “Halfway Acceptable Interpolating Machine.” (Dr. Shoham calls it “a tongue-in-cheek, post-hoc rationalization.”)

AI21 Labs has released a demo of its HAIM-Large model, a variant of the model with 345 million parameters trained on the 40GB OpenWebText dataset.

Comparing controllability of GPT-2 and HAIM

Dr. Shoham has previously founded AI companies that sold for considerable sums. While the 63-year-old serial entrepreneur has unquestionably mastered the skills required to run a company, AI21 Labs poses different challenges: it is more research-intensive and will require substantial investment to support research before commercialization (like OpenAI and DeepMind); and most importantly, its mission is to create a system that tackles one of AI’s most vexing problems.

Asked how AI21 Labs plans to proceed in this uncharted territory, Dr. Shoham told Synced: “This is a multifaceted issue, and we also probably only partially understand it. To be successful, you don’t need to understand all the ways but understand some ways that are productive. We don’t think we’re the smartest in the world, but we don’t think that anybody else is smarter than us. We have a chance to succeed, and we don’t need to be the only one that succeeds. It’s not a zero-sum game.”

Journalist: Tony Peng | Editor: Michael Sarazen

