Is AI Advancing Too Quickly or Too Slowly?
Last month, it was widely reported that two Facebook chatbots had been shut down after communicating with each other in an unrecognizable language. Concerns flooded discussion boards until Facebook explained that the cryptic exchanges had merely resulted from a grammar coding oversight.
The story recalled sci-fi scenarios of out-of-control, superintelligent and self-aware robots scheming against humanity. Some suggested AI might be advancing too quickly. Meanwhile, others believe AI is advancing too slowly. So, which is it?
In a way, it’s both. Scientific progress has been impressive with regard to narrow AI (also known as weak AI), a non-sentient artificial intelligence focused on one narrow task. In certain areas, narrow AI has surpassed human ability much faster than expected. For example, researchers estimated it would be more than a decade before a machine could defeat a master at the game of Go — but Google’s AlphaGo overpowered Lee Sedol in 2016. Last month, a bot created by the San Francisco-based non-profit research institute OpenAI defeated the No. 1 solo human player in “Dota 2,” one of the most challenging and respected computer games.
“I think that when we define a clear task like chess or Go, it can take some number of years to solve it, but it is a well-defined computation problem,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, a Seattle-based non-profit AI research institute started by Microsoft co-founder Paul Allen. “We are good at building fast computers or building algorithms.”
Etzioni refers to a class of machine learning algorithms known as “deep learning,” which delivers outstanding results on narrow tasks such as language processing, computer vision, and 3D data analysis. Says Jianxiong Xiao, the former director of Princeton Computer Vision & Robotics and founder of self-driving startup AutoX, “Deep learning is similar to the programming language C++. It’s a basic tool that can be understood and employed by every programmer.”
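To make the “narrow task” idea concrete, here is a minimal, purely illustrative sketch (not from any system mentioned in this article): a single artificial neuron trained by gradient descent to learn one narrow task, the logical AND function. Real deep learning stacks many such units in layers, but the core recipe — fit parameters to examples of one well-defined task — is the same.

```python
import math
import random

# Illustrative "narrow AI" toy: one neuron learns the AND function
# from labeled examples. All names here are made up for this sketch.

random.seed(0)

# Training data for the single narrow task: inputs and AND labels.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Randomly initialized parameters (two weights and a bias).
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.5  # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# Repeated passes over the data: like deep learning at scale, even this
# tiny model needs many examples and iterations to fit its one task.
for _ in range(5000):
    for x, y in data:
        err = predict(x) - y  # gradient of log-loss w.r.t. pre-activation
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# After training, the neuron masters AND -- and nothing else.
for x, y in data:
    print(x, round(predict(x)))
```

The point of the sketch is the article’s point: the trained parameters encode only this one task. Ask the same neuron about Go, Dota 2, or the plot of a novel and it has nothing to say — that performance does not transfer.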
Why are narrow AI and deep learning evolving so quickly? It’s all about the bottom line. In the 1980s and 1990s, most AI research was government-funded and directed by research divisions at universities. Now, it’s tech companies like Google, Amazon and Baidu that are driving research and developing new AI technologies. Such companies tend to focus on narrow AI because they need better algorithms to improve their products.
But is narrow AI the ultimate intelligent machine we are aiming for? Clearly not. AlphaGo will prevail in any Go competition against humans, but its performance does not transfer to other tasks. The same is true of any narrow AI.
This contrasts with general AI (also known as artificial general intelligence or strong AI), defined as a machine that can successfully perform any intellectual task that a human being can — for example, reading a novel and then answering questions about it, like any high school student could. This is where scientific progress on AI is moving slowly.
Says Etzioni, “When the problem involves having a good dialogue, or representing the meaning of the information inside a book, it is very difficult to make a precise problem for computers. When the problem is vague, then we struggle.”
Etzioni does not believe deep learning is the pathway to general AI. Deep learning has a major handicap: it requires massive amounts of data and computation, and it can only solve a narrow task.
Czech-based GoodAI is a company trying to develop general AI without incorporating narrow AI. Founder and CEO Marek Rosa leads a team directing research on teaching AI gradual learning techniques, so that instead of being coded for a specific task, their AI would learn skills independently. Rosa believes narrow AI designs will not help in pursuit of the goal.
Building a general AI is a long-term and challenging process. Simon Andersson, Senior Research Scientist at GoodAI, lists 29 unsolved problems hindering the development of general AI. The challenges are grouped into AI-complete problems, closed-domain problems, and fundamental problems in commonsense reasoning, learning, and sensorimotor ability.
While humans have made astonishing advances with some types of AI, this does not mean we will achieve general AI any time soon. As Facebook’s Director of AI Research Yann LeCun says, “So far, we have seen only 5% of what AI can do.”
In the movie “Terminator Genisys,” the fictional general AI entity SkyNet becomes self-aware in 2017, creating an existential threat for humanity. Meanwhile, in the real 2017, the scariest thing we’ve created is a couple of chatbots with bad grammar.
Journalist: Tony Peng | Editor: Michael Sarazen