Is AI About to Outpace Human Intelligence?
Find out if you will witness human-level AI in your lifetime and what Elon Musk thinks about Artificial Super Intelligence
Considering the public awareness of artificial intelligence and the breathtaking speed at which new progress arrives, it seems to be just a matter of time before AI surpasses the human intelligence level. And yes, the headlines AI is writing are stunning! While the 1997 match victory of IBM's chess computer Deep Blue over the reigning world champion Garry Kasparov (after winning a single game of their 1996 match) was treated as something like the eighth wonder of the world, the victory of Google's AlphaGo over Lee Sedol in 2016 in the strategy board game "Go" was seen by many as predictable. AI development has accelerated vastly over the last decade. Assuming a stable growth rate of AI development: will AI surpass the human intelligence level within the next few years?
Among the reasons creating the impression that AI is close to human intelligence is the appearance of particular systems mimicking human behavior. Taking a factual view of the state of the art, no system today fulfills the expectations of human behavior. When Microsoft launched the chatbot Tay (which later became unintentionally famous) on Twitter in March 2016, it went from philanthropist to racist within a single day. To avoid any further damage, Microsoft decided to switch Tay off after only 16 hours. A year later, Facebook started two chatbots named Bob and Alice, who were supposed to learn how to conduct negotiations, but over time they drifted into a private language not necessarily understandable to humans. After even their developers were no longer able to decipher dialogues like "I I can I I I everything else." and "Balls have a ball to me to me to me to me to me to me to me", Facebook pulled the plug.
To ordinary people these incidents are the ultimate proof that AI resists control and tends to conspire against humanity. But the problem is not a lack of technical knowledge in society; it is rather the way AI systems are advertised. Everything promoted as AI today is in fact Artificial Narrow Intelligence (ANI): the ability to mimic human behavior within a narrow range of the many facets of human intelligence. AI at the human intelligence level is referred to as Artificial General Intelligence (AGI). The difference sounds like a nuance, but technically it is a major one!
The success of ANI systems can largely be traced back to the rising availability of big data and computational power as enablers for Deep Learning. What comes in the guise of AI is actually a mathematical surrogate model that pushes regression to perfection. The secret of a Deep Learning system is not understanding its environment and deciding what to do based on that understanding, but the ability to transform selected input features into selected output features. The secret is the handling of non-linearity.
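To make the "regression with non-linearity" point concrete, here is a minimal sketch (not from any production system, all values chosen for illustration): a one-hidden-layer network fitting a nonlinear function by plain gradient descent. The tanh layer is the nonlinear feature transform; the output layer is ordinary linear regression on those features.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X)                        # a nonlinear target to regress on

# One hidden layer of tanh units: the "handling of non-linearity".
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)         # nonlinear feature transform
    pred = h @ W2 + b2               # linear readout: regression on features
    err = pred - y
    # Backpropagated mean-squared-error gradients:
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(f"final MSE: {mse:.4f}")       # well below the ~0.52 of always predicting 0
```

Nothing in this loop "understands" sine waves; it only bends a flexible function until the input-output mapping matches the data, which is exactly the substitute-model character described above.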
Keeping that in mind, it is easy to understand Tay's unfortunate transformation into a racist bot as an adoption of human bias: knowing that this Twitter bot was a Microsoft experiment, people deliberately peppered Tay with stupidity and ignorance. And the Facebook bots? How and why did Bob and Alice develop their private language? In Natural Language Processing (NLP) it has turned out to be more effective to learn language and words as a representation rather than as a heavily overfitted dictionary. To translate a word from one language to another, most NLP systems learn an inner representation of an expression. In the case of Bob and Alice, the Facebook developers simply forgot to enforce English as the exclusive format for outputting these representations.
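The idea of words as inner representations can be sketched in a few lines. The vectors below are hand-picked hypothetical values, not learned embeddings; the point is that the system operates on vectors, and only speaks English if its outputs are explicitly decoded back onto a vocabulary.

```python
import numpy as np

# Hypothetical 3-d "embeddings" (hand-picked for illustration, not learned):
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.9, 0.2]),
    "ball":  np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_word(vec):
    # The decoding step Bob and Alice lacked: mapping an internal
    # vector back onto a fixed (here: English) vocabulary.
    return max(emb, key=lambda w: cosine(vec, emb[w]))

related = cosine(emb["king"], emb["queen"])   # semantically close pair
unrelated = cosine(emb["king"], emb["ball"])  # distant pair
print(related > unrelated)                    # prints True
print(nearest_word(0.5 * emb["queen"]))       # prints queen (cosine ignores scale)
```

Without a `nearest_word`-style constraint tying outputs to a fixed vocabulary, two models optimizing only for negotiation success are free to emit whatever token sequences work, which is one plausible reading of the "secret language" incident.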
Expert estimations diverge
Does that mean we are as far from AGI as we were 60 years ago, when scientists started developing artificial neural networks? This is where the opinions of scientists, entrepreneurs and other experts differ heavily! Recently the futurist and author Martin Ford published a book of interviews with 24 of the most prominent AI scientists and entrepreneurs. Most interviews included the question of when AI will achieve human-level intelligence. The result is a rather pessimistic overall estimate.
The pessimistic experts argue that, despite sweeping success in applying Deep Neural Networks, the paths towards AGI have rarely been walked. They emphasize the missing abilities to generalize from few examples, to transfer skills between domains, and to derive causal relationships from environmental observations, namely through unsupervised learning. The representatives of the optimistic camp, on the other hand, argue that AI progress is exponential: supposing a direct link between ANI and AGI, the progress towards AGI is already underway. Whereas the pessimists assume that new systems and architectures beyond supervised Deep Learning are required to stride ahead, the optimists hold that Deep Neural Networks are the key to achieving AGI; after all, human intelligence is based on neural networks, too.
“I think the rise of Deep Learning was unfortunately coupled with false hopes and dreams of a sure path to achieving AGI, and I think that resetting everyone’s expectations about that would be very helpful.”
— Andrew Ng, 2018
I personally think neural networks will play a major role on the path towards AGI, as a low-level component. But to branch a targeted AGI effort off from today's ANI development, serious work is required on architectures that aim to solve the problems mentioned above. The progress of ANI during the last 10 years focused mainly on gathering quick wins. To move forward to AGI, there will probably be many developments without any short- or mid-term return on investment, which raises the barrier to entry for investing in AGI development.
Dangers and risks of an intelligence beyond AGI
This is why it is extremely important to democratize the development of AI. One step in the right direction was the founding of OpenAI as a platform for developing a friendly AI that benefits humanity as a whole. One of OpenAI's co-founders is Elon Musk. His participation becomes understandable considering Musk's 2014 statement that AI is "the biggest existential threat". Whether a neutrally designed AI system is likely to harm mankind is almost a philosophical question. The greater risk of AI becoming an existential threat to humanity seems rather to be a human with evil intent setting up a hostile AI. That makes it all the more important to integrate as many parts and groups of society as possible into the progress of AI. The tolerance for failure is low, as MIT professor Max Tegmark emphasizes:
“When we got fire and messed up with it, we invented the fire extinguisher. When we got cars and messed up, we invented the seat belt, airbag, and traffic light. But with nuclear weapons and AI, we don’t want to learn from our mistakes. We want to plan ahead.”