Looking back: The real language of chatbots

After the story of two Facebook negotiator chatbots drifting into their own shorthand created a media uproar in July, a number of questions about natural language processing arose.

Una Sometimes
Beluga-team
3 min read · Oct 8, 2017


Image: Courtesy of Pexels

The question we have to ask ourselves is whether AI can operate with, and actually benefit from, a language of its own. The answer should be affirmative, even if we are still miles away from unraveling all the mysteries of AI language learning. With today’s bots still playing in the sandbox and language experiments being more cute than realistic, we have to assess what potential lies behind this idea and where the field may develop next.

The point we are trying to make is that maybe we are a little too fixated on natural language processing as the path forward. The answer may lie in a different approach to the way languages are created for queries and bot interaction, by developing next-generation software.

One such builder of communication bots is Igor Mordatch of OpenAI (the Elon Musk-backed AI research lab), who is creating a new generation of conversational bots that use reinforcement learning and experimental design to develop a shared language. He is not alone in this endeavor; AI language programming is trending across Silicon Valley right now. Still, his approach is unique. His aim is to help his agents invent a simple language that is grounded and compositional. Grounded means the language is directly linked to the speaker’s environment, so that words create associations with the things an agent perceives. Compositional means that multiple words can be combined in a sentence to represent an idea. In his experiments, he is trying to make his agents move to specific locations:

“The agents exist in a two-dimensional world with simple landmarks, and each agent has a goal. Goals can vary from looking at or moving to a specific location, to encouraging a separate agent to move to a location. Each agent can broadcast messages to the group. Every agent’s reward is the sum of the rewards paid out to all agents, encouraging collaboration.” - Mordatch: Training agents to invent language
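The setup in that quote can be sketched in a few lines of code. This is a toy illustration under our own assumptions, not Mordatch’s actual implementation: two hypothetical agents on a 2-D plane each move toward a goal, and every agent receives the sum of all individual rewards, so helping the group pays off for everyone.

```python
import math

class Agent:
    """A toy agent on a 2-D plane with a goal location (illustrative only)."""
    def __init__(self, position, goal):
        self.position = list(position)
        self.goal = goal

    def step(self, speed=1.0):
        # Move a fixed distance toward the goal each tick.
        dx = self.goal[0] - self.position[0]
        dy = self.goal[1] - self.position[1]
        dist = math.hypot(dx, dy)
        if dist > 0:
            scale = min(speed, dist) / dist
            self.position[0] += dx * scale
            self.position[1] += dy * scale

    def reward(self):
        # Negative distance to goal: the closer, the better.
        return -math.hypot(self.goal[0] - self.position[0],
                           self.goal[1] - self.position[1])

def shared_reward(agents):
    """Every agent is paid the sum of all individual rewards,
    which makes collaboration the rational strategy."""
    return sum(a.reward() for a in agents)

agents = [Agent((0, 0), (3, 0)), Agent((0, 4), (0, 0))]
before = shared_reward(agents)
for _ in range(10):
    for a in agents:
        a.step()
after = shared_reward(agents)
print(before, after)  # shared reward rises as the agents approach their goals
```

In the real experiments the agents also broadcast messages, and the shared reward is what gives those messages a reason to be informative; this sketch only shows the reward structure.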

Ultimately, these methods can offer a different and deeper grasp of how language works, by being visual and relying on collaboration to solve a problem. Mordatch’s research also marks a significant break with the classical tradition of language programming: other roboticists and language-programming experts try to imitate human language rather than create an entirely new way of perceiving it.

Source: Giphy

Deep neural nets are the technology most commonly employed for language understanding; they learn by finding patterns in the English language. These complex mathematical systems have proved highly effective at recognizing pictures and other patterns in data, yet they are less effective at language learning, which is why Facebook’s latest endeavor failed so hilariously. Mordatch and his team question the efficacy of deep neural nets when it comes to language and observe that:

“For agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient,” (…) “An agent possesses an understanding of language when it can use language (along with other tools such as non-verbal communication or physical acts) to accomplish goals in its environment.”
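To make “capturing the statistical patterns” concrete, here is a minimal bigram sketch (our own toy example, not anything from the quoted paper): it predicts the next word purely from co-occurrence counts in a tiny corpus, with no grounding in any environment or goal.

```python
from collections import Counter, defaultdict

corpus = "the ball is red . the ball rolls . the box is blue .".split()

# Count which word follows which: pure surface statistics, no meaning attached.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent successor of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # prints "ball", the most frequent pattern
```

Such a model can sound plausible while understanding nothing, which is exactly the gap the grounded, goal-driven approach tries to close.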

Meanwhile, another team of researchers at OpenAI teased much the same idea when they unveiled a much larger and more complex virtual world they cryptically call Universe. Universe takes a reinforcement learning approach as an alternative path to language understanding.

Universe presents itself as a digital playground for AI, created to school agents in learning to do just about anything. The goal is to teach a computer basically everything a human can do, says OpenAI CTO Greg Brockman.

Meanwhile, even big players such as Microsoft seem to have developed a knack for reinforcement learning and are trying to create collaborative AI projects. The direction seems clear, and a different approach has become the answer to an otherwise intractable problem. Yet in the end, as with all things human, the way forward will probably be a mix of approaches. The important thing is to stop focusing on making a computer speak “human” and focus instead on letting it learn to speak its own language.
