Fear in the Narrative of AI: Stop It

Nathalie Bonilla
Metaphysical S’mores
6 min read · Jun 7, 2023

There’s a lot of fear surrounding AI, understandably so. But do we have to be afraid of this, and what can we do instead of being afraid?

Photo by cottonbro studio

Why is there so much fear in the narrative people use around AI? I think we can all understand why it feels scary and somewhat ominous, but when was the last time someone told you to approach a bee fearfully? This is an extreme example, yet still valid — when you go into battle, do you calm your mind first so you can think comprehensively, or do you let your emotions and internal fear narrative steer the ship, potentially into even more dangerous waters freckled with careless mistakes?

From our youth, we’ve been taught to be afraid of AI and to keep it fixed in our peripheral vision because it isn’t to be trusted. The Matrix isn’t exactly a love letter to handing power and decision-making over to machines, and Terminator poses an intense warning about robots’ physical capabilities. AI experts are flocking to podcasts and public forums to spread their end-of-days messages, with no malicious intent on their part; there is genuine concern about where the technology could go. Early adopters of the Internet had the same fears, and some of those fears did of course come true, albeit in a fashion far less drastic than what is feared for AI. When we are bombarded by these negative ideas and warnings, we become paralyzed and can no longer make effective decisions. So how can we get back to actively responding to AI as a society?

Fear is a Powerful Driving Force

If you feel fear, you’re more likely to pass off responsibility for handling that moment, project, or idea to someone else who promises they can handle it. On the surface, there’s nothing wrong with that. It’s good to know your own limits and where you need to call on the support or expertise of others.

The issue is where that power gets passed off to. If the responsibility is passed to the same people who are already working on a future that doesn’t address the obvious fear — and who helped to create it in the first place — that’s not a true solution. Don’t trust the same snake oil salesmen who got you sick in the first place.

Enter the minds of people who are looking into the path of evolution for AI, AGI, and super AIs. They’re looking not only at how it operates today but also at how it will evolve with what it learns over time. Instead of passing off the responsibility, I believe we as a human society must figure out what values AI should have.

I’ve heard some experts refer to AI as an alien intelligence and claim it will ‘hack’ various places in our society and life. However, what I’m seeing in these machines is something akin to a child. It isn’t the smartest, it will only tell you what it believes is the right answer (and we’ve seen how misguided it can be), and, like a child, it hasn’t had time to develop empathy yet. That doesn’t make it inherently bad.

But it does mean that it is in need of a shepherd. We must do our part to ensure that AI learns what is fundamental not only to humans, but also to our planet. If we only teach AI what we believe is essential, the mental netting would miss huge chunks of other knowledge that are equally crucial to forming an overall understanding of something. We’d be creating a Plato’s cave for the system, one that isn’t an accurate representation of our world, and therefore any answers it generates would also be a form of confirmation bias.

Luckily, AI does have access to almost all of our published works — words, art, music, video, etc. It has a massive library of information to pull from. We must teach it how to pull that information, analyze it, and present it in a way that gives the whole answer, regardless of any favoritism baked into the phrasing of the question. No bias. No Plato’s cave. No red pill, no blue pill.

So many people already live in a personalized bubble formed by the content they consume and the places they go. Every day when they log in, they’re blasted with media and content that constantly confirms and reiterates notions and ideas they already have. This isn’t always good. We’ve seen how it impacts our elections and how it impacts mental health through social media. We need to flip the narrative so that it can again become good.

This brings me back to how AI is like a child, in need of a grounded parent to teach it the many facets of awareness and critical thinking. How do you approach a child? With compassion and patience, with the goal of fostering growth. And most importantly, you come at it from a place of love.

Fostering Growth with AI

I’ve just spent the week at Contact in the Desert, where aliens, AI, and ancient civilizations were housed under one roof for four days. It was a seriously amazing experience that I would encourage anyone and everyone to enjoy at least once in their life. Encountering ideas different from our own is essential to our own growth and the growth of our culture — globally and locally.

Some of the experts discussed AI and mentioned being a steward or shepherd to it, the same way we are to our children. If we teach it right from wrong now, that information and framework will evolve with it as it continues to learn and develop.

If you’ve seen I, Robot, you have an example of what happens when an AI starts to change its own understanding of the rules it has been given. This is similar to how we can change our own minds, habits, and understandings as we grow, integrate new perceptions and information, and explore the space around us, from the infinitesimal up to the infinite. So how can we create ‘laws’ for it that won’t be broken?

Well, you can’t create laws for anything that won’t be broken; humans are an amazing example of this. Since we created AI, it would be hard to program out of it a trait that is a crucial part of the creative process — knowing something and choosing to alter aspects of it into something else. However, if we can instill values and ethics into the machine, what it learns should actually reinforce those values and ethics as it grows on its own.

To face AI, we must remember that we can choose to live a life of fear or a life of love. Those are the two strongest emotions we can experience. Don’t let fear seize our movements and our minds; let love be the guiding — and teaching — force as AI continues to grow and develop.

To make a difference in AI, get into forums and discussions with its makers. Let your opinion and voice be known. We have the gift of social media; like most gifts, it’s also a powerful tool. Share your thoughts, empower others, understand, and inform. Most importantly, check in with your own mind space. Are you truly afraid of AI, or are you afraid of the image of AI that has been created and pushed through the media?

Fear of AI and what it could potentially be capable of is understandable, but it doesn’t have to be blinding or freezing. Don’t pass off the important decisions to the same people who are saying the apocalypse is coming. Right now, it isn’t, but we need to make sure we don’t leave space for it to happen.

Check in with the language used by presenters (is it sensational or informative?), verify the information that scares you (current AI does not display sentience), and explore until you have a deeper understanding of it. Oftentimes it’s the unknown that inspires fear in us, and once we have a better grasp, shining light on the topic, we’re able to see it apart from what triggers our fear responses. If you still feel fear after coming to that place of understanding, that anxiety should feel different and more manageable, rather than being a reason to pass off your power or your ability to act.



Metaphysic, Sci-Fi, and thriller writer. Writing things that get in your head. Forever curious. Probably drinking coffee and hoping it rains.