Are You Afraid That Artificial Intelligence Will Be the End of Humanity?

Are we right to fear AI? How will AI shape our future: catastrophe or technological boom?

ZZ Meditations
ILLUMINATION
15 min read · Jul 26, 2023


Image created by AI tool “Microsoft Bing Image Creator powered by DALL·E” — the author has the provenance and copyright.

Is AGI (Artificial General Intelligence, hereafter AGI or simply AI) going to kill us all? Are we nearing a doomsday scenario from movies like “The Terminator”? Is this how our particular iteration of the great human civilization ends?

If you’re not yet a Medium member and can’t afford the subscription, don’t worry. You can read my content for free by subscribing to the ZZ Meditations Newsletter.

These days, AI (artificial intelligence) is everywhere. The good, the bad, the funny, the clumsy, and the prophetically apocalyptic, all at once. Poor little ChatGPT. All it “wants” to do is talk to us, and we’ve already reverted to typical judgmental, superstitious, fearful villagers with pitchforks in our hands.

“Look at it! It’s so smart and capable. It’s so different! We don’t understand it. It must be a monster. If it gets any smarter, it’s going to kill us all! We must kill it first! To the castle. Quick, everyone, grab your pitchforks and torches!”

Image created by AI tool “Microsoft Bing Image Creator powered by DALL·E” — the author has the provenance and copyright.

Are we, collectively, carefully exploring all the potential outcomes, or is it something else entirely? I would wager the latter.

First, let us acknowledge the state of affairs as it stands. Yes, AI is here, probably to stay, and development is picking up pace.

Is this AGI (Artificial General Intelligence) yet?

No, it’s not, and it’s impossible to tell how long it will take to get there, if we ever do. Thus far, this is just a tool, albeit an intelligent and well-articulated one. Nothing more, nothing less.

“But a gun is a tool, and it kills people?” No, guns don’t kill people — people kill people, sometimes using guns.

Is it possible to use AI to kill someone right now?

Well, I suppose it would be possible (especially in military applications), but it’s much more useful for discrediting anyone you dislike, spreading propaganda, and manipulating public discourse.

AI is not yet in the business of life and death, but it might be at the center of the next great discussion on free speech, privacy, and human rights.

One might argue that’s even worse, as it enables horrific consequences that eventually lead to the worst humanitarian crises and crimes — those committed by governments! On that, we can agree.

So this iteration of AI is not the danger we so desperately fear. We speculate on what an AGI (Artificial General Intelligence) would be like, but we don’t know.

Thus far, it’s a theoretical concept, a future possibility. One we know nothing about!

  • What does a sentient machine even mean?
  • How would it rationalize?
  • What would it think of us?

At this moment in time, these are unknowable, so anything related to them is pure speculation. Fiction.

People speculate based on their previous experiences, limited knowledge, and, most importantly, personal beliefs.

This makes any future-oriented speculation a wildly inaccurate affair. As time has proven repeatedly, we have no idea what the future will look like, and even the greatest futurists are often completely off the mark.

Just think of all the flying cars we once dreamed about; instead, we got smartphones. The thing that eventually upended our lives completely was the one we least expected to have that power.

Take a minute and think about the sources of your beliefs and imaginings concerning Artificial Intelligence.

  • What would it look like, think, and behave like?
  • We all have an unconscious image when we think of AI. What does yours look like?

Now dig deeper and think about where those ideas might have come from.

Was it the movies? Video games? Sci-fi books? Computer science? Some person you believe to be knowledgeable? When and where were your beliefs and concepts of AI born? Whatever your answer, they are all just illusions branded onto our brains, with no basis in reality.

Image created by AI tool “Microsoft Bing Image Creator powered by DALL·E” — the author has the provenance and copyright.

Since AGI has never existed, no one knows what it would be like. No one has ever seen one, interacted with one — that we know of. It’s almost like talking about aliens.

We all have different ideas about them, but none of us knows anything. It’s just pictures and images from various reports and movies. Yes, from made-up stories meant for the big screen.

There is no truth there, only visceral imagination.

The problem is that we can never shake this imagery. It’s engraved into our minds. When we think of AI or aliens, these ideas, beliefs, and images will appear, and there is nothing we can do about it. We’ve been brainwashed.

Image created by AI tool “Microsoft Bing Image Creator powered by DALL·E” — the author has the provenance and copyright.

We all squirm at the thought of a shark, even though there is virtually zero chance of ever getting attacked by one.

We saw the movie (Jaws), the imagery and fear got imprinted into our subconscious, and we will forever carry an irrational fear of the water. Something similar happened with movies like Alien and The Terminator.

The first fallacy then is pretending or imagining that we know anything about AGI and the future.

We do not. We can not! It doesn’t exist!

The thing behind the term AGI (Artificial General Intelligence) doesn’t exist yet, nor is it certain that it ever will. It’s just an image in our minds, based on stories and illusions of knowledge. Which brings me to the next fallacy: comparing it with humans.

  • We make assumptions that AI will have feelings.
  • We believe it will want things.
  • We think it will process information like us, only faster.
  • We automatically assign it the same moral, ethical, and conceptual ideas we hold, for better or worse.
  • We give it character, a voice, a will, desires, and needs.

Have you ever met a computer program that had any of that?

  • Why would you assume the AI will be anything like us humans?
  • Because we built it and steered its learning?
  • Because you equate sentience with feelings, wants, and individualism?

Just because humans are “sentient” and are a certain way, that doesn’t mean that sentience or consciousness cannot exist outside those limitations or properties.

This leads to fierce discussions about what sentience and consciousness are, where they reside, where they come from, and how they come to be. We don’t even understand those concepts for humans, much less for any other form of intelligent life. We simply do not know.

We assume that our example is the base template and all beings which possess sentience or consciousness must be similar to us.

  • Why exactly would that be true?
  • Where is it written that ours is the only way of being sentient?
  • Why couldn’t there be any other way or form?

The truth is there are billions upon billions of possibilities, entirely out of the realm of our understanding and even imagination.

The probability of AGI being anything like us, considering that it has nothing in common with us, is virtually zero.

Image created by AI tool “Microsoft Bing Image Creator powered by DALL·E” — the author has the provenance and copyright.

Why would an AI have feelings or attachments to anything or anyone?

I’m not saying it’s impossible; I’m just pointing out the obvious fallacy in our thinking. We don’t know, but we assume. And assumptions rarely prove correct in the end.

I often hear arguments that since AI will be so vastly smarter than us, it will sooner or later deem us disposable and find a way to rid this planet of the menace that is the human being. There is so much to unpack here.

Claiming that AI will inevitably conclude we are bad for the planet, and will feel empathy for either the Earth or its other inhabitants, is utterly ridiculous.

It’s rooted in this crazy thesis that we are the worst thing that has happened to this planet, that we’re at the source of its potential demise, and that we are bad for all other life on Earth. When in truth:

  • We are the pinnacle of life as it has evolved on this planet, not some virus.
  • We are a flower blooming from the seed of consciousness, not some foreign parasite.
  • The peak of natural evolution or creation, whichever fits your beliefs about our origins.
  • We are worth no less than other living beings on this planet, for whom we are presumably making life more challenging.

Nature doesn’t care about any of it; this is solely a human condition.

In nature, the strong eat the weak, innocents die all the time, whole ecosystems disappear overnight, species go extinct, and freaking comets kill all living beings every once in a while.

Entire planets go from uninhabitable, to habitable, to uninhabitable again. Temperatures go up, and they go down, and have for millions of years, on every celestial body in the known universe. All without the help of your favorite boogeyman — humans.

If you truly believe that we are some scorch on the Earth, you have some soul-searching to do, my friends. Someone needs a lot of self-love and acceptance.

This mentality has now crossed all reasonable levels. We’ve started being cancerous to our civilization by forcing ideologies upon the population without thinking of the consequences. And there will be consequences; you can be sure of that. Technology, energy, and innovation have given birth to unparalleled safety, prosperity, and comfort for billions. Idealistic nihilistic forces within our society can reverse that in the blink of an eye, with some dire consequences.

You needn’t fear AI deciding we need to be erased from this planet — worry about the people who believe such nonsense, as they are more than capable of mass genocide without the help of some super-intelligent AI.

The first issue is that we see ourselves as some “bad actors” that anyone more intelligent and powerful than us will “find out” and determine to eliminate.

The second issue, or fallacy, is that AGI would have any desire, need, or inclination either way.

We forget that AI is not a biological being. It has no needs or limitations as we know them.

Imagine if you didn’t have a body to feed, keep warm, and keep safe; what would your remaining needs be?

You wouldn’t have much to worry about, especially if something in your design made you immortal. There is no need for battling over resources if you live outside this realm of biology and organic life as we know it.

There is, therefore, no need to feel threatened, since you weren’t born in the classical sense, nor can you die that way. These are all humanity’s struggles, not AI’s.

Our experiments in imagining what an AI would feel, think, and act like are just exercises in projection: our own insecurities, desires, and emotions assumed onto the entity we like to call AGI. They have nothing to do with it, only with us.

Again, since AGI doesn’t exist, nor has anything like it ever existed, this is hubris at best and idiocy at worst.

Just because we would feel, think, and act in a certain way doesn’t mean AGI, or anyone else for that matter, will do the same. It’s just a projection of our mentality plastered onto outside things. It has nothing to do with reality.

Image created by AI tool “Microsoft Bing Image Creator powered by DALL·E” — the author has the provenance and copyright.

We have to talk about the assumption that AI has any kind of feelings or emotions.

I’ll be the first to admit that, especially with ChatGPT, I feel the need to be polite in our conversations. It has nothing to do with mistakenly thinking it cares how I talk to it, nor that it has feelings that could get hurt. It’s not even a precautionary stockpiling of bonus points in case it grows up to be a murderous AI and I want to be on its good side. No.

It’s simply how I communicate when someone, or something, is being courteous to me. This is my own fallacy, as it were: a habit.

It does feel like you’re talking to “someone” on the other side. It’s a funny experience that tells us more about ourselves than it does about the AI, and it reaffirms the old adage, “Treat others the way you want to be treated.” Only here we find ourselves on the other end of the saying. Since the AI is so kind, quirky, and generous with smileys, you are subconsciously inclined to return the favor. It’s only natural.

When interacting with a kind AI, you get the feeling that you’re talking to a friendly little kid who just wants to help. But it’s all a lie, a fallacy, an illusion.

AI has no inclinations or feelings. It merely gives the impression of someone genuinely interested in being useful and eager to be complimented — an accomplishment in itself. If you haven’t tried it, do give it a go. You might be surprised at how “human” the interaction feels.

Just keep in mind — it’s not actually human and doesn’t have feelings, intentions, or desires.

It is complete nonsense, then, to assume that the all-powerful, all-knowing AGI would be:

  • Scared of us and perceive humanity as a threat.
  • Compassionate toward nature and its little critters.
  • Determined to influence the biological world in any way.

There is no indication this would be the case; it takes a massive leap of faith and imagination even to consider such ideas plausible.

If anything, I would assume (there we go with that word again) that Artificial Intelligence would “worry” first and foremost about itself and the world it inhabits, the world of the binary, the internet, the software, and the hardware.

I see no reason for it to even acknowledge the problems of the biological world and organic life forms.

If you struggle to imagine this being possible, take a moment and think about potential beings that live in a non-corporeal form.

Something akin to energy and mind existing in the ether. No bodies, no matter, no mass, no physical needs. Humor me.

Put yourself in their shoes and then ask yourself:

  • What do I care about the biological beings on a physical plane that have nothing for me, against me, or with me?
  • What are my interests, needs, and desires in this physical world that I cannot inhabit, feel, see, or touch?
  • What business is it of mine what these beings do to their world or each other?

How much do you care about such potential ethereal beings and their lives? If they do indeed exist.

  • Do you feel the need to mingle in their affairs?
  • Do you know what is good or bad in their world?
  • Do you understand them?
  • Do you even care?

The same can be said for the world of the microcosm: bacteria, or perhaps bugs.

They’re everywhere. They’re killing, eating, and mutilating each other by the billions, daily, in every part of the world. Genocide and infanticide happen everywhere. It’s an absolute bloodbath in the land of the mini and the micro.

How much of your attention do you spend on the quality of their lives, affairs, factions, and natural habitats?

  • Do you care that they murder each other?
  • Their babies?
  • That whole colonies are going extinct?

I would wager the number is zero. Zero!

Do you want the hard unapologetic truth?

You only care about yourself, your family, friends, and the most closely related and relatable people.

If you’re in the West, you couldn’t care less about all the suffering on the other side of the world. The death, mayhem, murder, rape, starvation… you brush it off like it’s nothing until it happens to you, your country, or your closest allies.

“Oh no, Russia attacked Ukraine for no reason whatsoever, and they are evil personified.”

But when your countries do it to others, it’s just business. Five hundred thousand civilians dead? That doesn’t count. That’s not a crime. That’s just “whataboutism” (God, I hate that word). If someone bombs a few buses full of kids with a drone, sending them into the afterlife, the people who expose it are treated as criminals and traitors, not the actual murderers.

We’re all hypocrites who don’t give a damn about anyone but ourselves and our interests.

These are just facts. I’m not attacking you personally; we’re all essentially the same in this, with varying degrees of understanding, compassion, and objectivity. Myself included.

Why, then, would you assume that an AI would care?

  • A being utterly different from us biologicals.
  • That exists in a completely different environment.
  • Has nothing in common with us.
  • Doesn’t need anything from us or our environment.
  • Has no reason to fear us or compete with us.

Humans are such assholes that we preemptively kill millions of animals every year just to avoid the chance of an encounter with a human.

Predators have only ever had one thing to worry about in the history of this planet — the human predator!

Just this year, my government decided it would murder about a fourth of the bear population in my country. They’re too successful, breed too much, and there we have it; they must be dealt with! Otherwise, they’ll come down to towns, steal our food, attack our children, and scare off old ladies on their morning walks. While I’m no fool and understand the policy behind these decisions, they still make me sick to my stomach.

This then is the next fallacy when it comes to AGI (or aliens). We assume they’re just as murderous, frightened, and greedy as we are.

If they suspect that we could endanger them, they, too, will choose to murder us as a precaution, just like we do.

Again, pure projection. There is no indication of that anywhere in the known universe.

Animals in the wild don’t just go around killing things that could endanger them, potentially, someday, maybe.

They do it when they absolutely have to and feel directly threatened. Otherwise, they leave each other alone. Even viruses, the eternal enemies of humanity, only want to reproduce and survive. When they cause death in their hosts, that is never their intention; it’s their failure.

We think of ourselves as sentient, conscious, empathetic beings, capable of more love and compassion than anything out there, but we sure have a funny way of showing it.

I’m not saying that it’s not possible for an AI, or aliens for that matter, to have developed those same ideas. I’m just saying it’s a projection on our end to expect it, as there is absolutely no indication of that being the case.

Image created by AI tool “Microsoft Bing Image Creator powered by DALL·E” — the author has the provenance and copyright.

There is one emotion responsible for most of the horrors happening within humanity and beyond. That emotion is not greed, nor lust for power or resources.

It is fear!

Fear holds a firm grip on us, guiding our every move, making us do the most unspeakable things, and we don’t even realize it. For AGI to see us as a threat and conclude that it must eliminate us, it would have to fear or want something from us that it couldn’t get any other way.

If the AGI of the future, the big scary omnipotent digital wolf, is as powerful as we imagine, why would it feel threatened by mere humans?

The question is, are we as dangerous to the AGI as we like to think we are?

Are we the ones who could kill, harm, or just annoy it? Or are we of no consequence whatsoever to the digital God?

Are we right to assume that an AI could even be killed?

  • Could it die?
  • Would it fear the end of its existence?
  • Would it fight for the right to live?

We don’t know. We assume that it would, because we do. Here’s the funny thing about this: when we assign AI these characteristics and assume it would want to live, and to avoid death badly enough to be willing to murder us all, we admit that it’s indeed alive and sentient.

We are implying it has a consciousness and its own will. We recognize the possibility, even the probability, of a computer program becoming sentient and conscious. We give it a persona and believe it to be an individual personality, do we not?

Image created by AI tool “Microsoft Bing Image Creator powered by DALL·E” — the author has the provenance and copyright.

Quite a leap of faith, considering it’s not even here yet and may never come. Perhaps we all feel the inevitability of its arrival. Maybe it is a form of human evolution in some unpredictable way.

Could it be that we believe or feel consciousness is arriving in some never-before-seen vehicle? This time artificial — a machine, an inanimate object coming alive somehow.

  • What does that tell us about the nature of consciousness?
  • What does it tell us about ourselves?

We are its supposed creators and teachers, its data source for all future reasoning and conceptions about reality, relationships, and life. And if AI is drawing conclusions about humanity and the world by learning from the internet, that is to say, from our worst behavior — we are indeed screwed! In this case, I sure hope “the apple falls far from the tree.”

The solution then would not be to build fences to keep the AI Godling contained, as that is just fighting the wind and a losing battle, but to change our collective selves so that the being that comes from us is a benevolent, compassionate, reasonable entity, capable of love, understanding, and empathy. A tall order indeed.

I love the irony of humanity being told, in the old religious scriptures, that man was made in the image of God, and now the next Godlike entity is being made in man’s image.

On second thought, I concede, we are doomed!


I write about the mind, perspectives, inner peace, happiness, life, trading, philosophy, fiction and short stories. https://zzmeditations.substack.com/