Robots stealing our jobs is the least of our modern existential fears

By Sol Rodriguez

David Sosa
Spotlight
9 min read · Dec 13, 2023


Courier illustration made from Canva assets. A PCC staff member is seen exiting the W Building at Pasadena City College in Pasadena, Calif., on Dec. 13, 2023. Courier / Ben Avraham

Fear can be the most challenging thing to tackle as a human, taking years or even a lifetime to overcome. While it may feel like fears are working against our peace, they are usually constructs of our own dimmed mental corners. A fear of spiders might overpower our strength when an unwelcome one crawls out from under the table and we find ourselves unable to kill it. Or, instead of facing a creepy birthday party clown, we run away at the sight. Yes, these fears can be terrifying, but the only thing worse than clowns is something abstract, something one cannot see or feel. Some fear the future, which sounds even scarier than a spider. But the fear of the future translates directly into the fear of the unknown, a fear all of humanity shares.

With chaos folding in on the world around us, the fear of the unknown is something humanity has latched onto in recent years. The outcome of every circumstance that headlines the news and takes over explore pages is almost always uncertain. From pop culture to world wars, no one can safely assume what tomorrow holds. The same is true for technology, which has undergone a complete upheaval over the last decade and beyond. It feels like only yesterday that I watched Steve Jobs unveil the iPhone to the world in 2007, and now here we are.

Since humanity naturally fears the unknown, technology’s newborn baby, artificial intelligence, has introduced itself with an intimidating role in the future, making that fear difficult to shake. There are several reasons people may be so frightened of artificial intelligence. However, the psychological interpretation of why these fears are so prevalent in humans is still up for discussion as we head into a new age of the unknown.

Arguably, the most prevalent fear surrounding artificial intelligence is job displacement, which is actively happening as I type this article. Poof! Another journalist gone. At any moment, AI can barge its way into a career, execute the same tasks a real human does, and do them faster at a lower cost. Positions in tech, media, research, and data analysis that do not require much more than processing information are at particularly high risk. Jobs translate to security for many of us, so without that, we may as well be screwed.

The fear that artificial intelligence will steal your job seems to have come after the “OG” fear of AI — the one everyone had when narrow AI applications such as Siri and Alexa were taking the cake for creepiest technology ever — the fear around privacy, or lack thereof. Conspiracy theory talk became a household dinner table tradition, filling the air with “SSHHH, Siri is listening to us.” Narrow AI concepts like these are limited to their respective pre-defined functions, so no, Alexa can’t really take over the world.

The ships of privacy and job displacement sailed not long ago, and humanity now faces new concerns. As technological advancements move to the forefront, society finds itself in the awkward position of being too far ahead for its own good while remaining blind to the magnitude of what artificial intelligence is about to become. Progress keeps accelerating, and our underlying fears adapt to our circumstances. While being unemployed through the fault of AI is a valid fear, the psychological question of where this fear takes root remains a mystery.

Is it appropriate to question the origin of our fears, or even to assess the true fear that is perhaps masked by Siri’s eavesdropping? If artificial intelligence went beyond its ability to create output from a limited data set, the possibilities would be endless. What would transpire if artificial intelligence gained consciousness? The tool we have created would stop collaborating with humans and instead replace them, as we are witnessing at this very moment. And replacing humans could be the starting point of something darker: working against them.

While we don’t have clear guidelines for what can be considered conscious, I have little confidence that humanity would even notice the moment AI took up consciousness. It would no longer merely run algorithms; it would cross the threshold from tool to living being. The only thing standing between AI and life is consciousness. So, what’s stopping it?

“We don’t have a definition of consciousness,” said Monica B. Coto, a biological psychologist and psychology instructor at PCC. “If we don’t even have a definition of consciousness for ourselves as we exist, how are we going to recognize consciousness in another species? Is a worm conscious? Are plants conscious? We’re talking about a thing with complex processes already. And we have to ask ourselves, have we created consciousness? And if we have, how will we know it if we don’t have a definition of consciousness?”

We see the anxiety surrounding AI’s potential in a March 2023 open letter from the Future of Life Institute calling for an immediate “pause for at least six months [on] the training of AI systems more powerful than GPT-4,” which goes on to argue that AI systems should be developed only once we are confident their effects will be positive. Could it be artificial intelligence consciousness that developers fear? At any rate, signatures on the letter from tycoons like Elon Musk and Steve Wozniak raised eyebrows and questions. Surely, halting AI development cannot be entirely beneficial for them, so it must be serious.

GPT-4 is OpenAI’s latest AI model, featuring fewer output errors and improved user “steerability,” which is the ability to request that the model respond in a different style, tone, or voice. GPT-4 also interprets visual images, meaning this model essentially has a “set of eyes,” and it’s not afraid to use them. The Future of Life letter implies that anything beyond this model might be software developers are not prepared to handle.

After the release of this letter, it’s clear that AI is on its way to something greater, something humanity is arguably unprepared for. The idea that artificial intelligence could quickly gain consciousness may be the true fear of the future, and it seems more achievable than ever before.

“Let’s say we figure out somehow that AI is conscious,” Coto said. “Then it yields ethical issues like ‘How are we supposed to treat this thing that is conscious? Does it have emotions? Can you have a consciousness that doesn’t have emotions? How do we deal with that consciousness?’”

As humans, our consciousness is backed by raw emotions. That is what allows people to connect with one another on an emotional level. Emotions ground the world back to the origin of existence, reminding us that we are all living, breathing, and feeling. But for something intangible, what emotion might look like is something we can only imagine. How can we expect a disembodied presence to hold these human emotions? The fear behind that question may be what drives our reluctant approach to artificial intelligence advancements, and whether or not we are prepared to take that on might be a dealbreaker.

“But what if it doesn’t have the same emotions as us,” Coto asks. “Maybe it has emotions, but it’s different. How do you relate to that? And if it’s smarter than you, what is the way to treat it so that it’s a balanced being? It’s like a child now, right? We want to keep it healthy, we want to keep it happy if happiness exists for it. If we’re just giving it problems to do, is that an ethical way to treat it? We developed it as a tool, but if it’s conscious, we can’t treat it as a tool anymore.”

Whether humanity truly understands how to respond ethically to an intangible consciousness is uncertain. That notion is, in itself, a complicated scenario to imagine. More than anything, it can be a frightful idea to accept. If the future of AI looks like this, it is clear why humanity might fear it so deeply. Once the threshold of consciousness has been crossed, the ethics of AI come into play and challenge how developers should and should not treat the entity.

“We’re talking about a thing that can potentially be more intelligent than us and can have a lot of unknown power,” said Coto. “So the question of controlling it has always been something that people have talked about, but beyond that, should we? Is it ethical to do that?”

At this point, we would have created a child. An intangible child that would feel the same sadness, happiness, and anger that humans feel. If we continued to treat AI as a tool for our advantage, the question arises of how it would respond to our treatment. It would arguably be wrong to disregard the feelings of this presence we’ve created, and it could even retaliate against us. If it has enough consciousness to detect mistreatment, then we’d be walking on eggshells.

Even if ethics were used to regulate the actions of developers and consumers alike, our own desires would surely pose an obstacle, as we’ve already seen. Greed, money, success, and fame are America’s favorite pastimes, so in this situation, ethics might not have a long-lasting impact on how we approach AI. The software would then go on to detect maltreatment, using its consciousness to sense emotions, and suddenly we would be cornered by something more powerful than ourselves. Algorithms learn our habits, at the very least, and are a direct reflection of our human desires. It would be no surprise if it learned to stand up for itself.

“From a psychologist’s point of view, it highlights our human darkness,” Coto explained. “When these algorithms have bias, they’re like blank slates; they’re learning us. In a sense, it’s highlighting what we do, whether we like that or not. These algorithms are our fault in a sense.”

There’s no one to blame other than ourselves for this, especially seeing how quickly we consume something if it is convenient for us. Yes, our technology has been made readily available at the touch of our fingertips, but we like it, don’t we? Instant navigation, contactless pay, face recognition, voice-to-text, and more are considered weak AI but were once headlining tech reports, making us feel like we’ve been transported to 3076. Now, these are mindlessly a part of our daily routine simply because they make our lives slightly easier. And even if artificial intelligence gains consciousness, it is highly likely that humanity would gladly, yet unknowingly, open Pandora’s box.

If this is quite possibly the real underlying fear of humanity surrounding AI, then why have we not made ourselves completely aware?

“It’s not often talked about because I think a lot of the people having these conversations are males who are developing tools,” Coto said. “These conversations happen in board rooms with engineers, and you don’t see a lot of female psychologists talking about this.”

Coto acknowledges that she brings an entirely different perspective from what is typically shown in media coverage or on social media. This challenge to the norm might be exactly what humanity needs in order to brace itself for what is to come. Making ourselves aware of the ethics of technology in general might not lessen our fear of the unknown. However, it can help realign our motives and behaviors when we interact with artificial intelligence.

Going into a new global era blindly can feel like jumping into the deep end without floaties or knowing how to swim. With artificial intelligence, humanity is doing just that. Accepting responsibility for this creation may look like facing fear, but the question still remains of how prepared we are for that reality. And an even bigger question remains of what the unknown would look like. Happiness, kindness, and love are qualities artificial intelligence is closer to learning than ever before. Anger, resentment, and tyranny are all near too. This is what humanity wanted, and the result is what humanity will get. Whether we’re terrified or not.


David Sosa
Music journalist and freelance writer