When Bad Robots Kill Good Scientists
The recent viral allegation that four robots killed 29 Japanese scientists crystallizes our fascination with, and fear of, AI.
“This is how we are gonna die,” wrote one Twitter user, describing a viral video captured at the Los Angeles Conscious Life Expo. Shot on February 10th, 2018, the recording captured a speech by a popular journalist. Her presentation, titled “Is AI an Existential Threat to Human Civilization?”, aimed to raise awareness of the dangers AI poses to humanity. What made the speech go viral was its central claim: she declared that four robots had killed 29 scientists in a Japanese laboratory. While the clip garnered international attention, it was not supported by evidence. What the incident did bring to light, however, is the hysteria surrounding the topic of Artificial Intelligence.
Distrust of super-advanced technology is nothing new. It’s common to hear that AI will outsmart us, enslave us, or even kill us. Numerous science fiction stories, from Asimov’s Robot series to the recent “Westworld” series, have tapped into the public’s general fear of AI; these stories reflect the masses’ polarized view of the field. Accordingly, reports that take this angle tend to be more ludicrous than stories that cover its more realistic or benevolent side. But is AI capable of being evil? Researcher Richard Loosemore argues that AI doomsday scenarios are incoherent: they assume that Artificial Intelligence-based machines can acknowledge and discern the concept of harm, a concept embedded in human consciousness.
There was a lot wrapped up in the video that made headlines. For instance, it asserted that Tesla CEO Elon Musk knew about the robotic murder spree and that, like him, everybody should be made aware of the nefarious potential of super-intelligent machines. “This is a big deal,” the speaker declared with a rigid demeanor. Her account implied that the government was involved in covering up news of Japanese scientists who fell victim to super-intelligent AI. The journalist and conspiracy theorist went on to state that “lab workers deactivated two robots and took apart the third, but the fourth robot began restoring itself and somehow connected to an orbiting satellite to download information to rebuild itself even more strongly than before.” While she claimed to possess documents proving her information, she has yet to show any clear evidence to substantiate her account.
Although viral news pairing terror with AI might seem trivial, it in fact reveals much about today’s attitudes toward the field of Artificial Intelligence. As the fact-checking site Snopes puts it: “At best, the claim that 29 scientists were killed by AI robots in Japan is based on third-hand information unsupported by any actual evidence. At worst, this rumor was made up out of whole cloth as an attention-grabbing anecdote for a speech about how human beings are merely the artificially intelligent creations of an alien race.”
Proponents of AI doomsday scenarios tend to share the assumption that “intelligent” machines will be conscious, that is, that they will actually “think” the way humans do. Consider philosopher John Searle’s proposal, known as the Chinese Room argument, which maintains that no computer can have a true mind. What’s more, critics like Microsoft co-founder Paul Allen believe we have yet to achieve artificial general intelligence, in other words, intelligence capable of performing any intellectual task that a human can, because we lack a scientific theory of consciousness.
News about AI killing humans goes viral because it encapsulates our biggest concerns; we are inevitably drawn to fear. What most of us can agree on is that we don’t know whether AI will usher in the best era of human existence or bring negative consequences. By learning about the technology with a critical and open attitude, we can help AI remain a beneficial tool. As we pour our essence into code, it becomes ever more apparent that the field of AI needs to be perceived as objectively as possible. After all, we must achieve a far deeper understanding of the human mind in order to build truly intelligent machines.