The reasons why we are so scared of a self-conscious Artificial Intelligence

Luisa Simone · Published in Predict · Oct 18, 2023

…and what is real about it

Sci-fi apocalyptic scenarios have long persuaded us that a self-aware artificial intelligence is not only possible but almost inevitable, and that it will be extremely dangerous to the human race. Several factors have played a role in shaping this perception within the collective imagination: the name chosen for the technology, its functioning, which remains largely obscure to most people, the nature of the communication surrounding it, and what we might term its “humanization”.

The name

First and foremost, the term “artificial intelligence” itself can be quite misleading. It was coined in 1955 by John McCarthy, then a young Assistant Professor of Mathematics at Dartmouth College, who aimed to organize a summer seminar to explore the concept of thinking machines, encouraged by the excitement surrounding Alan Turing’s groundbreaking work. The Dartmouth Summer Research Project on Artificial Intelligence took place the following year, with the ambitious goal of creating, in just two months, a machine that could fully emulate human intelligence: an Artificial General Intelligence (AGI).

Using the term “intelligence” in this context seemed plausible at the time, but the reality turned out to be quite different: the essence of human intelligence lies in its ability to engage in multiple simultaneous processes, a complex orchestration carried out by biological neurons within intricate systems. When artificial intelligence endeavored to emulate these processes, it encountered significant challenges and achieved only modest outcomes. Many alternative terms have been proposed that would be more accurate, such as “learning algorithm”, but none is now as entrenched in collective knowledge as “artificial intelligence”. It would take a concerted effort by scientists and programmers, as well as the press and media, to replace it.

Nowadays, we can create algorithms and models that excel at processing vast amounts of data, identifying patterns, and making accurate predictions. The vaunted “intelligence” of artificial intelligence lies in this and nothing more. Artificial intelligence is therefore still far from being able to replicate the multifaceted nature of human intellect, including consciousness, emotions, creativity, and adaptive reasoning. We still need to work hard on research, on ethical considerations, and on a deep understanding of the relationship between technology and human consciousness.
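To make that concrete, here is a minimal sketch of what a “learning algorithm” actually does in practice: it estimates parameters from labeled examples and then predicts labels for new ones. This example is my own illustration, assuming Python and the scikit-learn library; neither appears in the original article.

```python
# A minimal sketch of what a "learning algorithm" amounts to in practice:
# estimate parameters from labeled examples, then predict labels for new data.
# Assumes scikit-learn is installed (pip install scikit-learn).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A small dataset of 8x8 handwritten-digit images with their labels.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Learning" here is nothing more than fitting a statistical model.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# "Intelligence" here is nothing more than pattern-based prediction.
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```

Nothing in this loop resembles consciousness or reasoning; it is parameter estimation followed by pattern-based prediction, however impressive the results.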

The Dartmouth Summer Research Project on Artificial Intelligence, 1956

What happens under the hood

Artificial intelligence plays a very curious social role. On one hand, it is accessible to everyone, indiscriminately: we interact with it daily, sometimes knowingly and just as often unknowingly, or we have become so accustomed to it that we no longer realize an artificial intelligence is involved. A technology so universally pervasive should, within limits, be familiar to everyone: we all have at least a vague idea of how a car, a lightbulb, or a camera works. When it comes to artificial intelligence, however, the matter becomes much murkier: most of the people who use it have no idea how it functions.

Popular narratives have led us to believe that artificial intelligences can take completely uncontrollable directions, that they are capable of coming up with incredible solutions in ways beyond our imagination. This portrayal is not accurate. Artificial intelligence is an extraordinary field of science, enabling us to create things that were unimaginable just a few years ago, but we know very well how it works. While AI exhibits remarkable capabilities, its functioning is documented and comprehensible, and we can harness its potential effectively and responsibly if we approach its development and deployment with careful consideration and an acute awareness of the associated risks.

Not having a clear idea of how artificial intelligences operate, or of the extent of their capabilities, only fuels our fears of a potential takeover. Digital literacy should start from here: both to educate non-experts on how to use these systems, because used improperly they can indeed be dangerous, and to reassure the public about the actual risks we face, so that we focus on the right concerns.

The communication

In 2017, an incident involving two Facebook AI chatbots sparked alarm when they started conversing in an unrecognizable, enigmatic language, leading to the suspension of the experiment. If you were to search for this news online, you would come across several fear-mongering articles. Delving deeper into the research, however, it turns out that the dialogue between the two artificial intelligences went something like this:

Bob: “I can can I I everything else”
Alice: “Balls have zero to me to me to me to me to me to me to me to me to”

The researchers had never required the bots to communicate only in English: their main goal was to create agents capable of negotiating with people, a task that demanded linguistic versatility beyond any single language, and restricting them to English would have worked against that objective. The programs were also designed to learn from conversations and interactions, so when they were set to negotiate with each other, with nothing rewarding intelligible English, their language drifted into a degenerate shorthand that was incomprehensible to human observers, and the experiment was shut down.
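One common reading of the transcript, often repeated in analyses of the incident but never confirmed, is that the repeated tokens encoded quantities. The toy below is entirely my own sketch, not Facebook’s code, and the function names encode_offer and decode_offer are hypothetical: it only shows how a protocol optimized for task success rather than grammar can be perfectly lossless for the agents while reading as gibberish to us.

```python
# A toy illustration (my own construction, NOT Facebook's actual system) of
# how a protocol can drift from English when nothing rewards grammar:
# repeating a token is a perfectly serviceable way to encode a quantity.

def encode_offer(items: dict[str, int]) -> str:
    """Encode 'I want n of each item' by repeating the item's name n times."""
    words = []
    for name, count in items.items():
        words.extend([name] * count)
        words.append("to me")  # a separator the agents happen to settle on
    return " ".join(words)

def decode_offer(message: str) -> dict[str, int]:
    """Recover the quantities by counting repetitions."""
    items: dict[str, int] = {}
    for chunk in message.split("to me"):
        for word in chunk.split():
            items[word] = items.get(word, 0) + 1
    return items

offer = {"ball": 3, "book": 1}
message = encode_offer(offer)
print(message)                          # "ball ball ball to me book to me"
assert decode_offer(message) == offer   # unreadable to us, lossless to the agents
```

The point is simply that, absent any pressure to stay human-readable, an efficient code and a broken-sounding one can be the very same thing.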

However, it took a considerable effort to find this information, highlighting the chasm between sensational narratives and the complexity of AI research. The prevailing tendency is to sensationalize such events, perpetuating misunderstandings and unfounded fears, not just in news articles but also across various forms of media like novels, films, and video games.

This story is a reminder of the need to critically examine the articles we read and the news we believe. Navigating through sensationalism and fear is not easy, but it becomes necessary when we face the journalistic tendency to write clickbait headlines and the pressure on media products to be ever more attention-grabbing and competitive, even at the cost of spreading misinformation.

The case of the Facebook chatbots is not isolated, but it has had a particular echo in collective knowledge, so much so that it is one of the things I am asked about most often at events. People are scared, and understandably so, when misinformation spreads about something they do not understand. It has been clear for a while that we have a serious problem with unchecked fake news on the web; the problem becomes even harder to contain when even reputable journalistic outlets, which normally verify the news and are reliable, resort to headlines that must grab attention at all costs.

Still from M3GAN (2023), Courtesy of Universal Pictures.

The humanization

As humans, we have a natural inclination to anthropomorphize animals, objects, and even inanimate entities. We give plants names, vent our frustrations at malfunctioning computers, and even offer our cars words of encouragement as they struggle up a steep hill. This urge to humanize anything with endearing or human-like qualities seems to be an irresistible part of our nature. The phenomenon pervades our daily lives, but it is exemplified most vividly in cinematic culture: the cute robots of Star Wars easily become our favorite characters, and we suffer when they are destroyed or abandoned, even when, as in Interstellar, we are repeatedly told that they are just robots programmed to obey, devoid of consciousness, and incapable of suffering.

Between 2014 and 2015, a delightful little robot named HitchBOT was created in Canada. HitchBOT took on the role of a hitchhiker, with the remarkable goal of journeying through parts of Europe and the United States as a social experiment to gauge whether robots could place their trust in humans. Although he could not walk, HitchBOT could communicate, sharing the captivating tales of his adventures, with the aim of engaging people, spreading his story, and bringing a sense of connection to the world. He meandered along the canals of Amsterdam, spent time with a heavy metal band, and at the end of one of his trips he expressed to his companions, “My journey must end for now, but my love for humans will never fade. Thank you all.”

HitchBOT rode along with two boys who were heading to summer camp.

At first, the experiment seemed a success, but the outcome was far from positive. Tragically, this cute, trusting little robot met a heartbreaking end in Philadelphia, where he was stripped, beheaded, and broken beyond repair. Some proposed repairing or rebuilding him, but that would have been pointless: the experiment had failed. While we contemplate whether we can trust robots, this poignant incident shows that robots, in turn, cannot place their trust in us.

Isn’t it very sad what happened to HitchBOT? Doesn’t it make us feel bad, almost like it happened to a little boy?

The popularity of fear-inducing news and of science fiction movies portraying AI uprisings against humans can be attributed to this same inclination to anthropomorphize artificial intelligence. We strive to make AI as human-like as possible, often crafting robots in our own image. When we picture an AI taking over the world, it is not a faceless entity; it is an AI with a face, perhaps blue or metallic, but a face nonetheless. It has a body and communicates in ways resembling our own. Essentially, it looks and acts like a human.

When we humanize AI in this manner, it becomes emotionally relatable. We start to perceive it as having qualities akin to a human being, and if anything bad were to happen to it, our inclination is to feel compassion, just as we would for a fellow human. It is this human-like resemblance that makes trusting AI a complex matter: rebellion, power-seeking, and the use of violence to obtain power are behaviors that humans have exhibited, with exceptional skill, throughout history.

We don’t trust artificial intelligence because we don’t trust human beings. Artificial intelligence certainly has the power to be very dangerous in the wrong hands, but it will never choose to act maliciously of its own will. This is a prerogative of humans, not machines.

Conclusion

In conclusion, the fear surrounding self-aware artificial intelligence is significantly shaped by our inclination to attribute human-like traits to AI and by our illiteracy about how it works, both fueled by sci-fi narratives and sensationalism. We need more widespread and more specific knowledge, because artificial intelligence is pervasive, and it is always dangerous to use tools you do not understand. Knowledge and digital literacy are the keys to demystifying AI. We should start from there.

References

- The Dartmouth College workshop: https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth
- The HitchBOT story: https://edition.cnn.com/2015/08/03/us/hitchbot-robot-beheaded-philadelphia-feat/index.html
- The Facebook experiment: https://cemse.kaust.edu.sa/cbrc/news/facebook-ai-new-language-explained-kaust-professor-xiangliang-zhang


I stepped out of my comfort zone, where I was studying Philosophy, to study AI. Now I’m a kind of hybrid passionate about Philosophy of Science and Technology.