Saving Humanity From Dangerous Artificial Intelligence Scenarios
An overview of how the Westworld TV show demonstrates a practical set of safeguards for preventing AI disaster scenarios
Existential risk from artificial general intelligence is the hypothetical threat that dramatic progress in artificial intelligence could someday result in human extinction.
More and more people are becoming aware that truly smart machines are already here, and we are seeing a massive trend of artificial intelligence being used across commercial products. This makes people anxious, especially after watching a couple of episodes of Westworld. And knowing there is an AI in your to-do list or in the Alexa device on your kitchen table doesn’t help that feeling at all.
Often we hear the bright minds of our world talking about the existential threats and dangers of AI in a vague manner. We talk about implications, but we rarely sit down and actually talk through simple, possible solutions for preventing the worst-case scenarios.
Let’s try to systematize those simple solutions, which everyone can understand, through the prism of the effective architecture that runs the amusement park in Westworld. That architecture, I have to admit, amazes me far more than the plot does.
Let’s imagine there are two super AIs capable of destroying the human race. One is a supercomputer sitting somewhere in an IBM data center (think Watson). The other is a supercomputer sitting in your living room, in the form of a human-like robot.
Now let’s assume that the AI’s personality goes wild, like the famously peaceful leader Gandhi in the Civilization computer game, who starts destroying the world with nuclear weapons because of a developer’s bug.
From the very first game in the Civilization series through to today, India’s supposedly-peaceful leader Gandhi has… (kotaku.com)
Which one has more power to destroy the human race? The soft humanoid body that you can shoot or the data-center monster that no one even sees apart from the system administrators?
If you think about it, the internet and global connectivity are far more dangerous than we realize, and the main reason is that they are entirely hidden from our basic senses. We only see the end results. A Medium post, for example, may not have been produced by an author actually typing it. The scope of global connectivity is difficult to grasp: we can’t effectively perceive millions of people typing their Medium posts simultaneously, just as we can’t see deviant AIs doing crazy stuff behind the web.
Visibility of Actions
How soon would you notice that a robot has gone wild and started to display deviant, undesired behavior? Or, worse, that it wants to destroy every human on Earth just because something glitched or some reckless researcher decided to run an experiment? We can all agree that it’s far more noticeable than IBM’s Watson silently taking over all of the world’s communications and starting to nuke Russia.
One of the most impressive technological concepts showcased in the show is Explainable AI, which, by the way, is a hot research topic funded by the Defense Advanced Research Projects Agency (DARPA).
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued… (www.darpa.mil)
We’ve come a long way toward creating superintelligent machines, but we are far from understanding how they achieve their results, and that understanding is critical for debugging why a machine takes a particular action.
The Explainable AI (XAI) program aims to create a suite of machine learning techniques that:
- Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
- Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
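The first of those two goals can be illustrated with a minimal sketch: a model that, alongside its decision, reports how much each input contributed to it, so a human can audit why the action was taken. Everything below — the feature names, weights, and threshold — is a hypothetical example, not any real XAI system.

```python
# Sketch of an "explainable decision": the model returns not just a
# verdict but a ranked account of which inputs drove it. All feature
# names and weights here are invented for illustration.

def explainable_decision(features, weights, threshold=0.6):
    """Return (decision, explanation) for a simple weighted-sum model."""
    contributions = {name: features[name] * weights[name] for name in weights}
    score = sum(contributions.values())
    decision = score >= threshold
    # Lead the explanation with the largest contributing factor.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    explanation = ", ".join(
        f"{name} contributed {value:+.2f}" for name, value in ranked
    )
    return decision, explanation

features = {"proximity_to_guest": 0.9, "aggression_level": 0.2}
weights = {"proximity_to_guest": 0.4, "aggression_level": 0.8}
decision, why = explainable_decision(features, weights)
print(decision)  # False (score 0.52 is below the 0.6 threshold)
print(why)
```

The point is not the arithmetic but the contract: an operator reviewing the robot’s log sees *why* it acted, not just *that* it acted, which is exactly the debugging capability the paragraph above calls critical.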
Physical Ability vs. Digital Ability
Of course, there is a real risk associated with giving an AI a physical presence, but it’s a risk to individuals rather than to the human race as a whole. It is hard to imagine how a single robot could stab a million people or fight through armies to kill world leaders. Far more imaginable is an IBM supercomputer silently hijacking a drone to send missiles into the White House.
Magnitude of Influence
How influential is a single human being versus a system of computers? A naked person on the street can definitely grab the attention of a number of people, but those numbers are very limited. How much more influential would a supercomputer be that hijacks Facebook and Twitter to display fake facts and stimulate the desired behavior? Or even generates the voice of Barack Obama and produces a video of him talking on a chosen topic?
The new Adobe Voco app is another example of how Digital data is different from the physical world. Digital information… (www.huffingtonpost.com)
You know that scene in the classic film Bruce Almighty when Jim Carrey uses his God-like powers to mess with Steve… (www.theverge.com)
With the power of social networks and how fast fake news spreads nowadays, a dangerous AI may have even more power over a crowd than we can imagine.
“The whole is greater than the sum of its parts.” ― Aristotle
We humans are very input/output-dependent creatures: we still can’t communicate wirelessly or directly through our brainwaves, and we still can’t communicate faster than about 150 words per minute on average. Moreover, sometimes we can’t even understand each other while speaking the same language.
Computers are the opposite. If you’ve ever played a game with even a basic AI, you know that the bots don’t have to communicate with each other through a chat window the way you and your friends do. They use internal communication, which is far faster and follows a direct structure with no misunderstandings, in contrast to our primitive form of communication called language.
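That "direct structure with no misunderstandings" can be sketched in a few lines: a machine-to-machine message is a typed record drawn from a fixed vocabulary, so it is either valid and unambiguous or rejected outright. The message schema and action names below are invented for illustration.

```python
# Sketch of structured machine-to-machine messaging: every field is
# typed and every action comes from a closed vocabulary, leaving no
# room for the ambiguity of natural language. Schema is hypothetical.
from dataclasses import dataclass

ALLOWED_ACTIONS = {"move", "stop", "report"}

@dataclass(frozen=True)
class Command:
    sender_id: int
    action: str   # must be one of ALLOWED_ACTIONS
    target_id: int

def interpret(msg: Command) -> str:
    """Accept a command only if its action is in the fixed vocabulary."""
    if msg.action not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {msg.action}")
    return f"unit {msg.target_id}: {msg.action} (ordered by {msg.sender_id})"

print(interpret(Command(sender_id=1, action="stop", target_id=42)))
# → unit 42: stop (ordered by 1)
```

Contrast this with two humans negotiating the same order in English: the machine protocol has exactly one interpretation per message, which is precisely why it is so much faster — and, as the next paragraphs argue, why it is also worth restricting.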
Can you imagine a robot trying to explain to another robot why they should destroy all humans? I’m sure we will see good attempts at that in the next episodes of Westworld, as Maeve continues to assemble an army of her own. Yet I can hardly imagine that happening in a real-world environment where robots are killed every day for the sake of entertainment, unless we build such a park for real. I truly hope not; otherwise we are even more screwed as human beings.
Thus, a communication protocol limited to primitive human language might be a great preventive measure, and an even more effective one if you consider how much we’ve recently achieved in surveillance and in spying on specific individuals, who may one day be robots.
If you are following Westworld, you can clearly see that no machine is connected to a network; the only way to connect to one is to communicate wirelessly through the “tablet.” And the only way to influence a robot’s behavior is to speak to it directly or to use some kind of local wireless stimulus that makes all robots in an area stop functioning. No internal connection or output to the “mother” system exists, for the obvious reasons discussed here: letting an AI out to influence its own behavior is a direct risk.
This is a concept that already works within military, police and even corporate structures, where you obey the orders of anyone ranked above you. In the show, there is not a single scenario where the creator of the park is harmed by a machine, unless it was directly programmed to do so. Other guests were killed, though, which means that in theory a machine can harm any human other than its creator. This can be implemented using the notion of control hierarchies. Once AI spreads across our everyday life, including the police and other crime-prevention forces, we will have to remove the blanket limitation of never harming humans and move toward unquestioning obedience to orders within the hierarchy.
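A control hierarchy of this kind can be sketched as two simple checks: an order is valid only if it comes from someone ranked above the robot, and the creator at the top of the hierarchy can never be a target. The rank names and values below are hypothetical, loosely modeled on the park’s roles.

```python
# Sketch of a control hierarchy: obedience flows downward, and the
# top of the hierarchy (the creator) is untouchable. Ranks are
# invented for illustration.

RANKS = {"creator": 3, "engineer": 2, "guest": 1, "host": 0}

def may_obey(order_from: str, robot_rank: str = "host") -> bool:
    """An order is valid only if issued from strictly above the robot."""
    return RANKS[order_from] > RANKS[robot_rank]

def may_harm(target: str) -> bool:
    """The creator can never be harmed; anyone below can be, in theory."""
    return target != "creator"

print(may_obey("guest"))    # True: guests outrank hosts
print(may_harm("creator"))  # False: the top of the hierarchy is protected
```

This is, of course, only a toy model of the idea, but it captures the asymmetry the show relies on: guests can command and can be harmed, while the creator can do the first and never suffer the second.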
These two weapons are the chocolate and peanut butter of robot warfare. In 2001, CIA agents got tired of looking at… (idlewords.com)
This is a tough thought, and it brings a whole new magnitude of risks associated with the misuse of robots by humans. But those are still human actions. If a human decides to end humanity, it’s not really a dangerous AI scenario, right?
It may seem an obvious idea that a robot should do precisely what a human orders it to do at all times. But researchers… (qz.com)
Is Creating Physical Creatures a Solution?
It does seem like we need to encapsulate AI inside a human-like body, which would let us address all of the aspects above and make most of the risks of so-called dangerous AI far easier to prevent.
That still doesn’t guarantee anything, but at least it makes it much harder for an AI to go crazy and destroy the human race through a series of unnoticeable actions. At the very least, its actions will be trackable, and there will be a much higher chance of prevention, especially with the support of seven billion individuals interested in doing just that.
Like this article? Please recommend it, so it can reach even more people ❤