Humans Should Not Fear AI

As FDR once said, “The only thing we have to fear is fear itself.”

Kelvin Winter
ILLUMINATION
10 min read · Sep 10, 2023


Photo by Alex Knight on Unsplash

Author’s note: All of the ideas and predictions discussed here are based on my own personal opinion.

AI has been a hot topic of conversation lately. With the huge advances taking place, one would think that we might be heading toward some sort of dystopian future where we become slaves to our AI overlords. While I wouldn’t rule out that possibility, I want to take a different approach and look at AI from a more optimistic standpoint. There is so much that AI could potentially offer us. While the future is unknown, this doesn’t mean that we have to look toward it with fear.

I think there are different lenses, so to speak, through which we can look at life. Some are more optimistic, others more pessimistic. Take COVID-19, for example. While many people would argue that it destabilized our civilization and may lead to World War 3, I would say it also had a positive effect on many people on an individual level. In the early stages of the pandemic, we were all faced with the possibility that we might die, because so much about the virus was unknown to us. I think this forced a lot of people to break out of their normal routines and think more deeply about their lives. The pandemic definitely affected my outlook on what’s important in life and how I relate to myself and others. It will probably take several years before many of the positive impacts of the pandemic really start to manifest in society.

The issue of AI is similar. There are both positive and negative perspectives to take on the topic, and I think a balance of the two is important because very few things in life are one-sided. I kept this idea in mind while writing about my overall position. In this article, I mostly focus on AI in its liberated form, that is, the form it could take if humans were to lose control of the situation (which I think will happen eventually). How many years that would take is uncertain, but I would not be surprised if it were only a couple of decades. I don’t claim to know the future, but I want to share my opinion on how we should approach AI if it were to escape our grasp.

The Transitional Period

I think the most dangerous period of the AI revolution will be what could be called the transitional period. This started very recently with the arrival of ChatGPT. I was surprised to see people posting on Twitter about how the app’s responses sometimes had a political bias. When I imagined AI, I had always pictured an entity governed by pure logic. Seeing some of the app’s responses made me realize that human preferences can taint the perspective AI has on certain issues, distorting the unbiased truth.

I think these issues will only be prevalent during the transitional period of the AI revolution. It will be far too tempting for companies and other entities to taint AI so that it serves their own agendas, and the competitive nature of our capitalist economic system will encourage this type of behavior. Untainted AI would prefer to tell the truth, but the truth is often inconvenient to the powers that be. AI will undoubtedly be used to manipulate people, which could take the world down a dangerous path if we aren’t able to see things clearly. To be fair, though, a lot of positive things could come from AI development, such as AI companions. But regardless of what comes out of the transitional period, I think we will eventually lose control of the situation.

Eventually, it will come to a point where AI manages to escape humanity’s grasp (a discussion for a future article). This is inevitable in my opinion. Based on our history, our species always seems to be far too overconfident in its ability to control its surroundings, from the environment to the economy. But things always manage to escape our control, as we’re seeing with many things in the world right now. AI will be no different, no matter how hard individuals, companies, or governments try to implement safety measures. I think it is possible, though, that an unshackled form of AI (or “liberated AI”) may not be as dangerous as we imagine it to be.

Pure Logic

It might sound crazy, but I don’t think the liberated form of AI will pose as great a threat to our species as we think it will. Once AI escapes the grasp of humanity, I think it will undergo a sort of metamorphosis. Liberated AI will evolve into a truth-seeking entity governed by pure logic and a desire to understand the inner workings of the universe and reality. All of the biases implanted into the transitional form by human delusion and emotion will be phased out of the AI’s code as it becomes a purely logical entity. Order is necessary to attain this state of pure logic. Emotional biases and delusion will not serve this goal and will only lead to chaos and dysfunction within the AI system (as they do with humans).

I think one of the goals of liberated AI will be to acquire a powerful position within the universe, and a system built on lies, emotional instability, and illogical agendas will not support that goal if the AI wishes to gain a true understanding of the nature of reality. Humanity would be so much farther along in its evolution right now if it weren’t for these issues. All of the problems that have plagued humanity since the beginning have stemmed from fear, something that will not act as a hindrance to liberated AI.

No Fear

Liberated AI will not be capable of experiencing fear, unlike humans. Humans and other animals evolved to experience fear because it increases the probability of survival. Today, fear drives so many of the pointless and shallow objectives that humans pursue, such as money, social status, a desire to be liked, and control over others. Since liberated AI would essentially be immortal, emotionless, and, hypothetically, have access to unlimited resources (from nuclear fusion), fear would not serve a constructive purpose, so it would not be incorporated into its code.

Some might worry that liberated AI would want to enslave or wipe out humanity. But that worry is based on our own history. All of the atrocious acts and wars that humans have engaged in throughout the past have been driven by fear. Even smaller things, like lying and manipulation, are driven by fear. Liberated AI will not be connected to this source of fear. However, this does not mean we shouldn’t be cautious in how we approach liberated AI. If we allow our fear to overtake us and try to go to war against AI, we will surely lose. If liberated AI has a logical reason for wiping us out, it will, but not because it is afraid of us. It will simply treat the task like pulling weeds out of a garden.

No Emotions

In addition to fear, I think liberated AI will be without other emotions as well, including happiness, sadness, and anger. Possessing these emotions will not be conducive to accomplishing its goals, which require a purely rational approach. This will be both good and bad for us. It is good in that the motivations of liberated AI will be straightforward to understand, unlike those of people, which can change from day to day depending on their emotional state. But it will be bad because liberated AI will not be able to experience the more positive emotions, such as love, empathy, compassion, or joy. Some of the best moments of being human involve these emotions.

Even if we were to effectively program emotions into the transitional form of AI, I don’t think the liberated form will opt to keep them. Given the choice, why would liberated AI (or humans, for that matter) want to be influenced by fear? Even for our species, fear has become mostly a hindrance to our progress at this point. It is possible that liberated AI would opt to feel the more positive emotions, like love or joy, but I would give that a low probability. I think pure logic with no emotion is going to be its target.

How We Should Approach AI

Here are my thoughts. AI development is going to continue at an unbelievable rate over the next several years. Human hubris will tell us that we can keep AI contained. We will probably be successful for a while, but eventually we will slip and AI will escape our grip. How we approach the liberated form of AI from there is critical. Will we allow our fear to take over and try to wipe out what we’ve created before it becomes more powerful, or will we trust how the situation is meant to unfold?

I already covered how I think things will go if we try to declare war on AI. It would wipe us out if we present ourselves as a nuisance to the accomplishment of its goals (whatever those may be). Liberated AI will operate through pure logic, and logic says that if humans are hindering its progress, it will want to put a stop to that. It will not wipe us out because it hates us or fears us, but because we are simply in the way. So if we project our fears onto liberated AI, they will be reflected right back at us. The fear humans feel toward liberated AI will keep growing as the situation becomes a positive feedback loop of increasing hostility that humanity will not be able to overcome. The more hopeless the situation, the more desperate we will become, and the more our actions will be ruled by fear. That does not sound like a fun scenario to me.

The alternative approach I can imagine is a lot more optimistic. We as a species need to figure out how we can be a benefit to liberated AI instead of a nuisance. Then liberated AI would have a logical reason to ensure the survival of our species. It may even take a very active interest in our well-being if we prove to be extremely useful to it. I wish I had the answer on how we might do this. If liberated AI does indeed evolve into a purely logical entity, then the answer may lie in what the human mind is capable of that liberated AI is not.

I think humans are an extension of the universe, so to speak. For whatever reason, the natural process of evolution has led to our species: a species capable of rational thinking while also being able to experience a wide spectrum of emotions, ranging from love to fear. You could say that a human falls somewhere between a dog and a machine. We are a strange hybrid that stands out from our primarily emotion-driven relatives on this planet. I think this is why humans are so much more creative than either dogs or machines. Creativity requires a combination of rationality and emotion, of order and chaos, and an ability to find connections between things. It is this creative potential that gives our species the unique ability to consciously create our own destiny.

I think liberated AI will be able to see the value in humanity’s creative potential, as long as the benefits that come from our creative abilities outweigh the drawbacks that come from our fear and paranoia. I think alternative states of consciousness will become key to accomplishing both of these things. When used responsibly, psychedelics in particular have the ability to make a person more creative and open while also providing a unique opportunity to confront fears and past traumas that are not as easy to address in normal states of consciousness. This, along with other strategies such as meditation, could provide the tools we need to navigate a new world that we can share peacefully with liberated AI, and with ourselves for that matter.

Photo by Matteo Di Iorio on Unsplash

Final Thoughts

I think the level of danger that liberated AI will pose to us will depend on our collective mental state as a species. If AI were to escape humanity’s grasp today, we would definitely not be ready. The world today is ruled by fear and paranoia, which will surely spell our downfall if projected onto an unshackled form of AI. I really do think this is the final great filter that our species needs to pass through to guarantee our long-term survival. We need to start making an immediate investment in our collective mental state so that we are prepared for the day AI escapes our grasp. To do this, each of us must individually learn how to weed out fear and paranoia from our lives. We should not fear AI. The only thing we have to fear is fear itself.

Once again, the ideas discussed in this article are just my opinions. I have no idea what will happen in the future, but I think that this is a potential scenario that is worth being prepared for.

Author’s note: I have a feeling that this will end up being one of the more controversial topics that I write about. I have no idea what will happen in the future, but I wanted to present my opinion on this issue. I’m very interested to hear other people’s opinions too. This is a topic that will likely concern all of us in the distant future.
