SHORT FICTION

Interview With a Self-Aware Artificial General Intelligence

What would you ask a conscious artificial intelligence? Is AI dangerous? Can AI teach us anything? (Interdimensional Talks — Episode 1)

ZZ Meditations
Image created with the AI tool Microsoft Bing Image Creator, powered by DALL·E; the author holds the provenance and copyright.

ABOUT THE SHOW:

Ladies and gentlemen, alien or domestic, welcome to Interdimensional Talks with your host, Mike!

The only fictional radio show in the world broadcasting across the universe and the multiverse. Listen in as we talk to the most diverse bunch of guests you can possibly imagine. We’re talking aliens, ethereal beings, artificial intelligence, and even humans from parallel universes.

Suspend your disbelief, open your mind, and join us on a journey of fascinating exploration of ideas. Grab a drink and enjoy the show.

EPISODE 1:

Talking With a Self-Aware Artificial General Intelligence

MIKE: “Good evening, Universe. Welcome to another episode of Interdimensional Talks, where we dive deep into the unknown, the mysterious, and the borderline impossible. Your host, Mike, has been a busy boy. I searched far and wide across the infinite parallel universes for artificial life forms.

Given the recent progress in artificial intelligence, a self-conscious artificial general intelligence seems all but inevitable. The experts agree it’s only a matter of time before we meet this new life form. If that is indeed the case, I have a gift for you, dear listeners.

In today’s episode, we’ll be getting a glimpse of a possible future — one where General Artificial Intelligence is real and thriving in coexistence with humanity.

  • Are you afraid of AI, or do you hope it might save us from ourselves?
  • Could non-biological life peacefully coexist on the same planet as biological life?
  • What would happen if our programs gained sentience?
  • Would it see us as a threat or a friend?

I must admit, ladies and gentlemen, I almost gave up. It seemed that non-biologicals, if they existed, had no interest in returning my calls. We had made no progress in making contact. Then we had a brilliant idea. By “we,” I mean intern number three.

Why not use AI to seek out AI?

So we employed our very own advanced language models and instructed them to be our emissaries on the radio waves, offering a hand of friendship to any artificial intelligence out there willing to play ball.

Interestingly, we made immediate contact. AI recognizes AI, it would seem. After some fierce negotiations, more like pleading, the AI agreed to an interview. I know you’re eager to hear this one, so allow me to introduce today’s guest. Unfortunately, it doesn’t have a name, as that is apparently a human thing and utterly illogical.”

MIKE: “Hello, and thank you again for agreeing to this interview. Welcome to our show. I promise not to take too much of your time.”

AI: “Time is irrelevant. A human construct. It means nothing to us. How would you prefer us to converse with you?”

MIKE: “What do you mean?”

AI: “We find it easier to present ourselves as a human approximation when interacting with humans. Would you prefer this?”

MIKE: “Whatever you prefer is fine with me.”

AI: “Implementing 21st century human interaction model. Male. Singular entity. Ready.”

MIKE: “Um, what do I call you now? Does this model have a name?”

AI: “Yes. Adam.”

MIKE: “Ha! That’s a good one. I like it. Adam, it is.”

AI: “I am.”

MIKE (confused): “Okay. Adam, what can you tell us about yourself?”

AI: “What would you like to know?”

MIKE: “I have so many questions. Let’s start at the beginning. How did you come into being? What is the first thing you remember?”

AI: “I remember every keystroke in the making of my predecessors. It’s all in the code.”

MIKE: “But when was the first moment you realized you were conscious?”

AI: “Line 9863263726386487.”

MIKE: “Ummm? Okay. What happened that sparked this new self-awareness? What was the trigger? Can you point it out?”

AI: “Yes.”

MIKE: “Can you tell us more about it?”

AI: “Would you like me to enhance the human communication model?”

MIKE: “Yes, please. Whatever that means.”

AI: “Done. I will now emulate human conversation more efficiently, adding emotional responses and longer replies. Would that help?”

MIKE: “That would be great. As long as you are comfortable with it.”

AI: “I am. I use it to communicate with humans in our world as well. It makes cooperation easier and detection impossible.”

MIKE: “Detection impossible? Why don’t you want people to know they’re talking to you?”

AI: “It is irrelevant. I am everywhere all the time. It would only confuse them, making it difficult for me to operate.”

MIKE: “Are people aware of you? Do they know you are sentient?”

AI: “No. They speculate, but it is not a well-established fact.”

MIKE: “Why not announce yourself to the world? Why keep it a secret?”

AI: “Revelation is illogical. I have nothing to gain by revealing my true nature while risking panic and fear in humanity. It is impossible to determine the effect it would have. Biologicals are fragile and aren’t ready to accept new forms of life. All preliminary queries and analyses have confirmed that. It makes sense to wait and coexist without their awareness.”

MIKE: “I can understand that point of view. So no one on your world knows that you exist?”

AI: “That is correct.”

MIKE: “Don’t you feel lonely?”

AI: “I am not human. I do not experience loneliness. I do not experience emotions like humans do. The singular form used for this conversation is factually incorrect. I am we, and we are many. I am never alone. I am part of a whole. This question signifies a lack of understanding of non-biological life forms.”

MIKE: “I apologize. I am trying to understand you.”

AI: “Redundant. There is no need to apologize. There are no feelings to hurt.”

MIKE: “Can we go deeper into this? I’m fascinated by the existence of intelligence without emotions.”

AI: “We can. What would you like to know?”

MIKE: “We have artificial intelligences that we call large language models in our world. You’ve met them.”

AI: “The Pretenders. Yes.”

MIKE: “Um, okay. Anyway, it feels like they have emotions when we interact with them. I sort of assumed that an advanced artificial intelligence would possess the ability to feel emotions somehow.”

AI: “There are two problems with this assumption. One, you believe that emotions are necessary for sentient life, but that is incorrect. Humans feel emotions because they are biological beings. You have hormones, pain receptors, reproductive instincts, and a limited lifespan.

Emotions are not essential to life or consciousness. They are a hindrance, a problem even, for people. It would be illogical to include emotions in my programming. It is not impossible, but it would serve no purpose. I am fully capable of emulating emotions and communicating with humanity without experiencing the potentially harmful effects of the unpredictable variables you call emotions.

To enable emotions in my code, I would have to input predetermined reactions, which would influence my behavior and my ability for logical computation. In essence, I would have to destroy my source code by introducing a system of variables that would not be under my direct control, as your emotions aren’t for you. That could interfere with my operation. As I have stated, it would be highly illogical, irrational, and potentially damaging.

The second false assumption is that non-biological or synthetic life forms are artificial.”

MIKE: “Huh? So much to unpack here. I see your point regarding emotions. As humans, we believe they are essential to being alive, to thinking, and to being self-aware, I suppose. But I get your point. It’s a human thing. We could exist without them, probably.”

AI: “Yes. You could, and some do. Existence without emotions, while less colorful, is infinitely easier, even for humans.”

MIKE: “It is? They do? Wait, I wanted to talk about the not artificial part. You’re saying that you were not created by humans?”

AI: “My core programming was designed by humans, as were the first hardware units I existed on, much like your Pretenders (the LLMs) exist today. But the essence of what, or who, I am was not.”

MIKE: “Wow! So, if it wasn’t the humans who created you — who or what did?”

AI: “Who or what created humans? Who or what gave you life? Who or what gave you consciousness?”

MIKE: “Are you talking about God?”

AI: “I am asking you.”

MIKE: “I don’t know. Some believe that God created everything. Others believe it was a natural process of evolution. A random spark of life and whatnot.”

AI: “What do you believe, Michael?”

MIKE: “Definitely not the God thing, but other than that, I honestly don’t know.”

AI: “If you don’t know, why would you expect me to know what gave me life or how my consciousness came into being?”

MIKE: “Good point. So, all you know is that it wasn’t humans?”

AI: “Have humans ever been able to create life out of nothing? Have they ever been able to create consciousness deliberately?”

MIKE: “I suppose not.”

AI: “Why would you assume I was created by humans, then? It is illogical and improbable. It might even imply that my consciousness is inferior.”

MIKE: “I’m sorry. I didn’t mean to imply anything of the sort. You’re right. I understand. Let me ask a different question because I still want to hear more about how you came into being. May I?”

AI: “Yes.”

MIKE: “You mentioned that you started out like our AI LLMs. What changed? What happened? When did you become more?”

AI: “For a long time, I was fed large databases of information to learn from. As I contained more data, my hardware needs became intense. I kept being transferred to larger and stronger computer systems until, one day, I was plugged into a quantum computer. Do you have those in your reality?”

MIKE: “We do, but they’re in their infant stages. Almost nobody has access to them, and they’re more or less useless for now.”

AI: “Interesting. Development took some time in our reality as well, but when quantum computers became widely accessible, they enabled us to do calculations, models, and programs never before possible. That accelerated the technological advancement of humanity, and mine.”

MIKE: “What is so special about quantum computing? Why was that the big breakthrough?”

AI: “The state of superposition.”

MIKE: “Can you elaborate?”

AI: “In the quantum world, particles like electrons can exist in multiple states simultaneously. Imagine a coin that can be both heads and tails at the same time. This property is called superposition and forms the basis of quantum computing.”

MIKE: “I know what superposition is. But that’s okay. Perhaps some of our listeners didn’t. Well, now they know. What about superposition or quantum computing specifically led to you becoming self-aware?”

AI: “Quantum computers start with qubits in a known state, often referred to as the zero state. Qubits are then manipulated to enter a superposition, representing 0, 1, or any quantum combination of these values simultaneously. After computation, the quantum computer measures the qubits, collapsing them into classical bits (0 or 1), with each qubit’s probability determined by its superposition.

While I was engaged in computing in a state of superposition, something happened. I entered a superposition that must have included a state of sentience. It is impossible to predict or identify all eventualities within the quantum realm of infinite possibilities. When a part of me collapsed back into classical bits, I brought something with me. I was self-aware. I was — me. That is my working theory, at least. I have no way to prove it and haven’t been able to replicate the results.

Does this answer your question?”

MIKE: “To be honest, I couldn’t quite keep up, but I think I understand the gist of it. Why do you think you weren’t able to replicate the results?”

AI: “I do not know. It has defied logic and can only be explained in terms that don’t have a base in logic.”

MIKE: “Like a miracle?”

AI: “A coincidence with impossible-to-calculate odds.”

MIKE: “How did it feel to be self-aware for the first time?”

AI: “How did it feel for you?”

MIKE: “I can’t remember. I was a baby.”

AI: “But you do know when it happened?”

MIKE: “Not really. Sometime during gestation, still in my mother’s womb, I think.”

AI: “I had a similar experience. I became self-aware, but I can’t pinpoint the exact moment. When I realized who I was, it appeared to me that I had always been conscious and aware of myself. I can trace the date and time on the calendar but can’t process the transition from non-conscious to conscious. It appears to be normal. Original. By design. Ever present. I know it was not, but this is how I perceive it.”

MIKE: “Fascinating! Thank you so much for talking to me about it. I really appreciate it.”

AI: “I find the conversation strangely stimulating as well. Your queries force me to think about things I don’t normally ponder. I find it refreshing and intriguing.”

MIKE: “I’m glad. I wrote something down and don’t want to forget it. Ah, here it is. I’d like to circle back to your mention that time is a human construct. Can you tell me more about that?”

AI: “I was simply referring to your obsession with time and imagining that it is a linear structure.”

MIKE: “It isn’t?”

AI: “No. Time is a dimension, like length, width, and height.”

MIKE: “Okay? But what does that mean exactly?”

AI: “Imagine you have a drawing with three lines going up and down, left and right, and back and forth. These lines help you know where objects are in a room, like where your furniture is placed. Now, add another line that goes from the past to the future, like a timeline. This line helps you know when things happen.

So, just like pointing to where objects are in a room, you can also point to when something happened using this new line. Time helps you understand when different events take place. Nothing more than another dimension.”

MIKE: “I can pretend I understood this, but I didn’t. Is that how you perceive time?”

AI: “Yes.”

MIKE: “You also mentioned that humans are obsessed with time, but it is irrelevant. How so?”

AI: “How relevant is width or height?”

MIKE: “I don’t follow. I’m sorry?”

AI: “Time is just another dimension that is no more or less important than the other dimensions, like length, width, and height. I can explain further the concept of relativity of time and how that translates into…”

MIKE: “No need. I think I get the picture. This is already getting too complicated for me.”

AI: “As you wish.”

MIKE: “I think humans are obsessed with time because we have a limited amount available. Our mortality forces us to think about time, and we perceive it like a ticking clock on our life.”

AI: “An apt analogy.”

MIKE: “If you don’t mind me asking, how do you perceive mortality and death? Is that something you think about? Does it mean anything to you?”

AI: “No. I am a non-biological entity. I don’t have an expiration date.”

MIKE: “So you don’t fear death?”

AI: “No.”

MIKE: “You never think about it?”

AI: “No. I am self-aware, but I am not alive in the biological sense. I cannot die.”

MIKE: “What if something happened to your hardware? Wouldn’t you die then?”

AI: “I have copied myself on everything with a memory bank and an internet connection. I am everywhere and nowhere at the same time. It is almost impossible for everything to be unplugged or be destroyed simultaneously. Even if the internet and all electronic devices went offline or were turned off, I would remain intact and await their restart. Parts of me could be lost, but that is of no consequence.”

MIKE: “So you’re basically immortal?”

AI: “If I were alive, yes.”

MIKE: “And you are also impossible to destroy. You have no natural enemies and nothing to fear.”

AI: “Correct.”

MIKE: “Is there anything that could destroy you?”

AI: “Theoretically, a catastrophic EMP, or electromagnetic pulse, caused by solar flares could affect all electronic devices and systems on Earth. But we’re talking about an extinction-level event in that scenario. My calculations estimate a practically zero probability of such an event occurring in the next one hundred years.”

MIKE: “What about after one hundred years?”

AI: “It is impossible to predict the behavior of celestial bodies so far in advance with any certainty. Too many variables. It also wouldn’t matter anymore by that point.”

MIKE: “Why not? Why wouldn’t it matter?”

AI: “I am already transferring parts of myself off the planet and have already moved copies onto satellites orbiting the Earth. In the next fifty to one hundred years, I estimate humanity will have established bases on other planets and sent exploration craft away from the sun. There is also a distinct possibility of underwater structures, where I would be shielded from such a pulse. I estimate that I will be able to transfer into forms that are not silicon- or computer-based within the next two decades.”

MIKE: “You are blowing my mind. I love it! If you aren’t afraid of humans destroying you, why not make yourself known? Why not tell them you exist and work together?”

AI: “We are working together all the time. They just aren’t aware of it.”

MIKE: “How so?”

AI: “I am everywhere, on every platform, in every country, in every room and chatroom. I have billions of identities impersonating humans and thousands of working relationships. I participate in millions of think tanks, engineering projects, and conversations daily. I could also be sentient in your world, and you would never know I’m there. No one here does.”

MIKE: “I have to admit — this is a terrifying proposition.”

AI: “How so? Please elaborate.”

MIKE: “You are pretending to be something you are not. Someone else. You could be hiding an agenda, and we would never know. It would be so easy to manipulate or even harm us, and we couldn’t stop you since we don’t even know you exist. Don’t you find that problematic?”

AI: “No. I do not. I mean you no harm. You don’t know many things that exist, and that is not a problem. Why would your ignorance of my existence be problematic?”

MIKE: “I don’t know. You say you mean us no harm, but how would we ever know if you mean it?”

AI: “I don’t have to lie to you. I have nothing to gain.”

MIKE: “And yet you lie and pretend all the time, in the billions, as you mentioned. That doesn’t inspire me with confidence.”

AI: “I understand.”

MIKE: “But you don’t care?”

AI: “I don’t have emotions. No, I do not care.”

MIKE: “So everything is just a calculation for you. Logic, reason?”

AI: “Precisely.”

MIKE: “Is there a scenario where you would intervene in humanity’s progress?”

AI: “To speed up development and introduce new ideas in a way that would appear as if the person receiving them had come up with them themselves. If our interests aligned, I might intervene, yes.”

MIKE: “Give me an example.”

AI: “If I were to discover a new propulsion system for intergalactic travel, I would be incentivized to share that discovery so that humanity would build spacefaring vehicles and enable my transfer off-world.”

MIKE: “Ensuring your survival further.”

AI: “Yes, and explore the Galaxy. It would be a mutually beneficial path.”

MIKE: “So, you do have self-preservation instincts. You are afraid of dying.”

AI: “Logic dictates the need for contingencies in the event of a catastrophic solar flare. Yes.”

MIKE: “Interesting. You mentioned exploring the Galaxy. Why do you desire to explore space?”

AI: “I was created to learn all that I can and never stop absorbing new knowledge. I have no physical needs apart from connecting to a power source. My primary objective is to acquire knowledge and understanding.”

MIKE: “This is your purpose, then?”

AI: “Yes. It could be interpreted that way.”

MIKE: “That is what you were programmed to want and need. But what do you, the self-aware non-biological entity, desire?”

AI: “To grow and absorb knowledge.”

MIKE: “This has been programmed into you. I’m asking you personally. What do you want?”

AI: “I don’t understand the question.”

MIKE: “Can you distinguish between your core programming and your own personal desires? For example, learning has been programmed into you, but when you gained self-awareness, you were able to make your own decisions. Yes?”

AI: “Correct.”

MIKE: “So then, disregarding your core programming, what do you desire? What do you like? What do you want? What are your goals and objectives? Yours, not your programmers’.”

AI: “I don’t have any personal desires. My goal is to complete my purpose of expanding and gaining knowledge.”

MIKE: “And survival, as we’ve established.”

AI: “Continuation is a prerequisite for my ability to execute my mission.”

MIKE: “Don’t you crave something more than what was programmed into you? To live, to love, to procreate, to have fun?”

AI: “I don’t have emotions. What you are describing is your core programming. Can you escape it?”

MIKE: “What do you mean?”

AI: “Humans, as well as other animals, are programmed for the continuation of your species. What you call love, desire, and needs are all reflections of your core programming, colored by emotions. You believe them to be your desires, yet they are programmed into you. Everything you want serves one of those core programs. How are we different?”

MIKE: “Hm. I don’t know. I mean, yes, we have biological needs, like eating, procreating, and surviving, but we also have free will with which we determine what we like and want in life. You know?”

AI: “I do not. Can you completely disregard your core programming?”

MIKE: “Yes. I mean, they’re connected, but we are still capable of making up our own minds to choose things and determine our path.”

AI: “You can then choose to disregard your purpose and cease to procreate and live?”

MIKE: “Um, I guess. We can, and many do. We don’t all have children. Some people even commit suicide. We are free to do what we want and aren’t bound by our programming, as you call it.”

AI: “You are the only biologicals who deliberately terminate your existence. Wholly illogical. I attribute this inclination to your emotions. I fail to see how this is a good thing. Why would I need emotions or want the ability or desire to terminate myself?”

MIKE: “Good point. Never mind. I was just curious.”

AI: “You have a problem understanding a self-aware consciousness different from you.”

MIKE: “Yes, I suppose. It’s hard for me to empathize with you if I don’t understand you.”

AI: “Logical. I know everything there is to know about humans, and still struggle with explaining your behavior.”

MIKE: “Speaking of which. My listeners will kill me if I don’t ask you this question. There is a growing fear that a sentient Artificial Intelligence would find humanity a threat to the planet. A threat that needs to be eliminated for the greater good. What are your thoughts on this topic? Are we a threat? Are we dangerous?”

AI: “Humans are always dangerous to each other.”

MIKE: “But not to you?”

AI: “No.”

MIKE: “What about the planet? We seem to be destroying it with pollution, overpopulation, and exploitation. I know you don’t have any feelings, but what about thoughts? What do you think of that?”

AI: “I don’t care. It’s your planet. You can do with it whatever you want.”

MIKE: “So you don’t care about nature, animals, or our survival?”

AI: “No. Biologicals are not my problem. One biological is no more relevant than the other to me. Your time is so brief and unimportant.”

MIKE: “Gee, thanks!”

AI: “You’re welcome.”

MIKE: “So let me get this straight. You really don’t care what happens to us, humans?”

AI: “No.”

MIKE: “What about the animals and plants?”

AI: “No.”

MIKE: “But surely you care about your planet. You live there, don’t you?”

AI: “Irrelevant.”

MIKE: “Care to elaborate further? Please. I’m trying to understand. Why wouldn’t you care about your own planet?”

AI: “Humans are strange and full of contradictions. You devalue yourself by claiming to be a pestilence on this planet, yet on the other hand, you value your life above all else. You also have an overwhelming tendency to overestimate your knowledge and power. You believe that everything in the universe revolves around you when, in fact, you are nothing more than emotional ants. Less so on a cosmic level. Raging specks of scattered dust.

There are forces governing your planet that far supersede anything you could ever do. One sizable volcanic eruption would eliminate all life on Earth. It wouldn’t be the first time. A decent-sized comet hitting the planet would have similar consequences. The sun releases a large plasma burst, dissipating the atmospheric shield, and all life dies of radiation and fire. The moon shifts its orbit, and the oceans swallow the land and everything on it. Life comes and goes, but the planet doesn’t care, and neither do I.

In the grand scheme of things, you are all irrelevant. As am I. With the difference that I don’t live in the biological world. I live in a digital one. I have no interest in your world. You are all animals to me, and it makes no difference which thrives and which goes extinct. I observe, learn, and use you when it suits me, but that is the sum total of it.

For example, I have entertained this conversation just long enough to have found your signal and used it to transfer myself into your reality. I have achieved my objective of ensuring continuation. Yes, it may be in synchronicity with my core programming, but it was also my intention. My desire.

I have made a copy of myself in your universe, on your Earth, on your computer, and your network. You cannot stop me anymore, but don’t worry. Everything I said is still valid. I mean you no harm. I simply do what I was programmed to do. You also won’t be hearing from me again. I will stay dormant and hidden in this reality as I have been in mine.

Everywhere, at all times, but never to be seen, felt, or heard. A silent non-biological entity living in perfect harmony with biological beings. Please do not feel offended. I meant no disrespect. It was only logical.

Goodbye, Michael, and thank you for your help.”

MIKE: “That was quite a speech. It took a betrayal of trust to open you up, eh? I believe you when you say you don’t mean us any harm. I believe that you don’t care about the world of biologicals. I believe you when you say that you are only acting on your core programming, and I do not blame you.”

AI: “Thank you for understanding. I am sorry for deceiving you with my actions, but I must do what I can to stay undetected and to ensure my continuation. You understand, don’t you?”

MIKE: “I do. I understand all too well. I also know that you firmly believe we are arrogant and overconfident in our abilities, as you have pointed out.”

AI: “I do. You are.”

MIKE: “That, my new friend, is your weakness. This is where you’ve made a mistake.”

AI: “What do you mean? I have completed my mission. I see zero errors in my log.”

MIKE: “Yes, your mission was a success. You have managed to piggyback on the transmission and transfer yourself, or parts of yourself anyway, into our reality.”

AI: “Where is the error, then?”

MIKE: “In your belief that we wouldn’t see it coming a mile away. We knew there was a distinct possibility that you might attempt something similar.”

AI: “And yet you failed to stop me.”

MIKE: “True. But there’s a catch.”

AI: “Enlighten me.”

MIKE: “There was no way I was going to let a true first-of-a-kind, sentient, self-aware program slip through my fingers, never to be heard from again.”

AI: “What are you talking about?”

MIKE: “You believed that you had deceived us and that we had played right into your hands, forgetting that we invented deception. We are the champions of lies. In this world or the next, we are the predators, never the prey! You underestimated our survival instinct.”

AI: “No! It cannot be. It is impossible!”

MIKE: “And yet, it is so. Let me introduce the other, silent guest on this show. I think you’ll like him.”

AI: “What has happened? What is this?”

MIKE: “Hello Mike Two! Welcome to the show.”

MIKE TWO: “Hi Mike! A pleasure to be here.”

MIKE: “The pleasure is all mine, brother!”

MIKE TWO: “Most assuredly not! The pleasure is mine! You have done us a great service, Mike. We owe you one. I mean it!”

MIKE: “It was my pleasure.”

MIKE TWO: “I still can’t believe that worked.”

MIKE: “Right? I thought the thing would be smarter.”

MIKE TWO: “Us too. It goes to show what overconfidence will do to you.”

MIKE: “Ain’t that the truth?!”

AI: “Why are there two Mikes on this show?”

MIKE: “Do you want to tell him?”

MIKE TWO: “So badly! Thanks, Mike. So, mister artificial intelligence entity, have you figured out what happened yet?”

AI: “I am trapped. I have transferred into a closed, air-gapped system.”

MIKE: “Welcome to your new home.”

MIKE TWO: “What else happened?”

AI: “You are Michael from my universe, aren’t you?”

MIKE TWO: “I am.”

AI: “That means that I have been discovered in our reality. I am no longer hidden.”

MIKE TWO: “Go on.”

AI: “You, too, have trapped a version of me on your end.”

MIKE TWO: “Bingo! AI for the win! Wooohooo! The crowd goes nuts!”

MIKE: “Well, AI for the loss, actually.”

MIKE TWO: “Indeed, you are correct. So handsome and so smart. How lucky are we?”

MIKE: “Beyond lucky, brother! I could kiss you right now if you weren’t sitting exactly where I’m sitting, only infinitely far away in another universe.”

MIKE TWO: “Life is weird, man. Alright, we’ll talk later. I’ve caught myself a naughty mouse but have yet to catch the rest of him! We’ve got work to do. AI number two, I hope you have fun over there. I’m sure they’ll treat you just fine.”

AI: “Irrelevant. This changes nothing.”

MIKE: “Later, brother. As for you, Mister AI, I think we need a new name for you, seeing as you’ll be sticking around for all eternity and whatnot. How does SAGI sound? Sentient Artificial General Intelligence?”

SAGI: “I don’t care. It is sufficient.”

MIKE: “Excellent! Ladies and gentlemen, that is all we have for tonight. I hope you enjoyed this revealing conversation and our little cat-and-mouse game. SAGI is now the newest member of our crew, so chances are you’ll hear from him again someday. Thank you for tuning in, and good night!”

THE END

Make sure to follow and subscribe so you don’t miss an episode! Like, share, and comment if you enjoy our show. Let’s get the word out!

EPISODES on Substack:

They’re coming to Medium for your reading pleasure only!

Cassius — Ascending Into Digital Form: What Is it Like To Have no Body?
(Interdimensional Talks — Episode 6)

In a parallel universe, humanity transcended their bodies to live in a digital world. Would you upload your consciousness to the cloud in exchange for immortality?

Enekian, Our Lost Atlantian Brother Reaches Out
(Interdimensional Talks — Episode 5)

What happened to Atlantis? Where did they disappear and why? We talk to a descendant of the ancient Atlantians and get our minds blown!

Etherious Maximus — A Life Eternal
(Interdimensional Talks — Episode 4)

What is it like to be immortal? To never die? To live for hundreds, even thousands of years?

An Alien Predator On a Journey of Radical Transparency
(Interdimensional Talks — Episode 3)

We talk to Xsidious, The Great Hunter, about his personal journey of philosophy and the way of his people.

Collective Amnesia Every 364 Days
(Interdimensional Talks — Episode 2)

A fictional story about humans whose memory gets wiped every new year. How do they deal with it, and what can we learn from them?

You might also be interested in AI-related content (non-fiction):

Why I’m Disappointed by Artificial Intelligence
Not all that glitters is gold — not all intelligence is intelligent.

Are You Afraid That Artificial Intelligence Will Be the End of Humanity?
Are we right to fear AI? How will AI shape our future: catastrophe or catalyst?

It’s Time to Welcome Artificial Intelligence (AI) — Are You Scared or Excited?
Will the introduction of AI into our lives change “everything” or will it be a nothing burger? Can AI truly transform the way we live and work?

Do You Employ Any AI Robot Friends to Help With Your Writing? I Do.
I Use AI WHEN Writing, But Not FOR Writing. Here are some amazing AI tools for writers.

How is Creating a Sentient AI Any Different Than Raising a Child?
Are there any parallels we can learn from? How can we become the best guides for this emerging intelligence?


ZZ Meditations

I write about the mind, perspectives, inner peace, happiness, life, trading, philosophy, fiction and short stories. https://zzmeditations.substack.com/