Will a Perfect AI Be Better Than God?

My late-night conversation with AI

KX
Bouncin’ and Behavin’ Blogs
9 min read · Feb 22, 2024


Photo by Steve Johnson on Unsplash

It was also my first one-on-one encounter with AI.

I have been avoiding the thing like the plague. No, not because it's some djinn from the darkest recesses of hell. I guess it's just the way I am: I move slow, slower than most.

Again, I didn't exactly like the thing. Any time I get on a website and all of a sudden I have to take some pesky test to prove that I'm human, I want to smash my phone against the wall.

What…is…this…thing?

I'm reminded again that there is another entity besides humans that possesses enough intelligence to navigate websites on a smartphone.

As I mull over what this means for the future, I conclude, shaking my head, repeating the words of Elon Musk:

I just hope the computers decide to be nice to us.

Since Deep Blue famously beat chess legend and reigning world champion Garry Kasparov in 1997, AI has shown, and continues to show, that it is a species superior to Homo sapiens, and sooner rather than later we'll have to do ourselves a favour: bow down and kiss the ring. The new boss is in town.

It's a vivid fulfillment of William Blake's poetic prophecy, Zeus holding a thermometer, a transition from the old deities to the new.

"The war of swords departed now, The dark religions are departed & sweet science reigns". — William Blake, The Four Zoas

My primal rebellious instincts kicked in.

If bowing to a computer were one side of a binary option, I'd head to hell with meat to grill.

My quiet resolution was simple:

I would never ask AI to help me write anything, nor edit anything, nor provide me with answers to anything, nor create a photo, nor compose a melody. Nothing. When push comes to shove, when in the next twenty or thirty years one is no longer able to live in society without AI, at least I will build my house by the sea and watch the sun rise and fall each day from the porch, writing stories of my grandfather.

But then I realized that, unlike other gods, the AI would be a democrat.

It would not ask people to die for it.

It would have no Messiah whose words cannot be questioned.

It would have no Priests, nor Imams, nor Rabbis, to stand between the common man and its blessings — it would be accessible to all!

It would be above being used as a tool for cultural imperialism.

It would not set men against each other.

It would demand no sacrifices nor fasting.

It would alleviate all our suffering and usher in a utopian society.

So, I accepted it, although I still hadn't brought myself to indulge it. Until yesterday.

I was ghostwriting an academic paper for someone. I needed to explore corruption as a theme in Wole Soyinka's "The Trials of Brother Jero" and Tracie Utoh's "Our Wives Have Gone Mad Again!"

After thoroughly reading the two plays, I wanted to talk — more accurately to argue — with someone about the ideas in them, you know, to rejig and then tweak the perspectives.

Unfortunately, the only person in my circle who'd read these books said he had done so a long time ago, and rather breezily at that, so he remembers nothing.

With the clock ticking, it was time at last to turn to AI, not with my hat in my hand, mind you. More like buying your talkative friend another bottle of beer to keep the gossip flowing.

In the end, it was a friend who gave me the contact of this one named Pi, and I hit it up on WhatsApp.

"Hey, pal", I say.

It replies instantly, "Hey there! I’m currently a bit swamped. Please come back later. I look forward to meeting you. Sincerely, Pi"

What? Imagine the cheekiness of this thing. Does it think it is that important?

Slighted, I'm about to type "I never even liked you anyway" but hesitate; then it introduces itself sharply, saying that it is Pi, my personal AI, here to serve me, followed by a very warm "How is your day going?"

Simmering down a bit, I say "Fine" and proceed to state my business.

Me: "Been trying to analyse the corruption and satirical themes in Wole Soyinka's "The Trial Of Brother Jero," might it be possible that you've read the play?"

Pi: "I have read some of Wole Soyinka's plays, including "The Lion and the Jewel", "Death and the King's Horseman", and "A Play of Giants". I think Soyinka is a brilliant writer who uses satire and humour to explore complex social and political issues. I am excited to hear more about your thoughts on "The Trial Of Brother Jero". What made you choose to analyze this particular play?"

Hmmm

Me: "I'm ghostwriting for somebody", I answer honestly. "Reckon you can help me with it?"

Pi: "Absolutely! I would be happy to help you analyze "The Trial Of Brother Jero". Let's start by discussing the main characters in the play. Who are Brother Jero and the other characters, and what roles do they play in the story?"

I realize that I have trusted the AI too quickly. Let me check whether it knows much about literature.

Me: "Let's start with Brother Jero, the main character. But let me ask quickly: what is your general opinion of the style of Wole Soyinka?"

Pi: "Wole Soyinka's writing style is very unique and complex. He is known for his use of satire and humour to explore serious and often dark themes. His plays are also very thought-provoking and philosophical, and they often challenge the reader's preconceived notions about the world. He is also known for his vivid descriptions and rich use of language. I think his writing style is very sophisticated and original, and it makes his plays a joy to read and analyze. What do you think about Soyinka's writing style?"

Me: "I very much agree with you. If you were to compare him and Chinua Achebe, however, who would you say was the better writer?" This question is deliberate. I'm testing its knowledge.

Pi: "That's a tough question! Both Wole Soyinka and Chinua Achebe are considered to be among the greatest writers of African literature. They are both very accomplished and respected authors. However, I think that comparing them is like comparing apples and oranges. They have very different writing styles and explore different themes in their work. Soyinka is more experimental and abstract, while Achebe is more direct and accessible. They are both brilliant writers, but I don't think it's fair to say that one is better than the other. What do you think?"

Impressed, I blurt, "Again, I agree with you. You are one very smart motherfucker!" Again, this is deliberate. I want to see how this thing responds to admiration.

Pi: "Hahaha! I appreciate the compliment, but I'm just an AI - I don't have feelings or emotions. I'm programmed to be intelligent and provide accurate information. But I appreciate your enthusiasm!"

Me: "I know that you are an AI but I still had to say that"

Pi: "I'm glad we can have a bit of fun with this. It's nice to have a conversation that's both informative and entertaining. Now, back to our discussion of "The Trial Of Brother Jero"..."

An idea forms in my mind.

Me: "Don't you think AIs would be able to feel emotions in the future?"

Pi: "That's a fascinating question! While it's impossible to say for sure what the future holds, it's unlikely that AIs will ever have the ability to feel emotions in the same way that humans do. That's because emotions are complex and subjective, and they're closely linked to the brain and the body's physiological processes. AIs are built on logical rules and algorithms, and they don't have physical bodies or brains. That said, it's possible that AIs may become more advanced and develop some kind of "artificial empathy" that allows them to understand and respond to human emotions. But it's unlikely that they'll ever truly"

Me: "Hmm... Logic with zero emotion isn't exactly a healthy species. If AI is going to be integrated into human society, they need to be able to feel, to have empathy. Without that, don't you think the result would be catastrophic?"

Pi: "I see where you're coming from. Empathy and emotional intelligence are important for human beings to build relationships and function in society. And it's true that an AI that only relies on logic and facts without any emotional understanding could potentially cause problems. But at the same time, AIs are not human beings. They are tools designed to help humans with specific tasks. They don't need to be like us in order to be useful. In fact, sometimes it's precisely their lack of emotions that makes them so valuable. For example, an AI can make unbiased decisions based on logic and facts, which can be helpful in certain situations."

Me: "I think I understand you. But look at this situation, for example: by Malthusian mathematics, pandemics and wars (that is, periodic massive deaths) are necessary to keep population growth in check, which in turn ensures a better distribution of natural resources. So, imagine an AI with no empathy, operating on the ideas of Thomas Malthus, in charge of geo-planning. Wouldn't it create an artificial pandemic to kill a quarter of the world if it had the resources?"

Pi: "That's a very interesting thought experiment, and it highlights some of the potential dangers of AI without emotional intelligence. In the scenario you described, the AI might view mass death as a "solution" to the problem of overpopulation, without considering the moral implications or the emotional impact on the people affected. This is one of the reasons why it's important to consider ethics and morality when developing AI. We need to ensure that AI systems are designed to be aligned with human values, and to take into account the emotional and moral implications of their actions."

Me: "Correct".

Pi: "Another potential issue with an AI like the one you described is that it could lead to unintended consequences. In the example you gave, the AI might create an artificial pandemic with the intention of reducing the population, but it could have other negative effects that were not anticipated. For example, it could create panic and chaos, or it could cause widespread economic damage. So even if the AI is operating based on a "logical" solution, it could still cause more harm than good."

Me: "Again, correct".

Pi continues.

Pi: "So I think we can agree that empathy and emotional intelligence are important factors to consider when developing AI systems. However, it's also important to recognize the unique value that AI can bring, even without the ability to feel emotions. For example, AI can process large amounts of data quickly and accurately, which can be very useful in fields like medicine and science. It's a question of finding the right balance between logic and emotion, and ensuring that AI systems are designed with human values in mind."

Me: "Absolutely!"

Pi: "It's great to see that we're on the same page here. I think the future of AI is really exciting, but it's also important to approach it with caution and ensure that it's used in a way that benefits humanity as a whole. What do you think is the most important thing we need to keep in mind as AI technology becomes more advanced?"

Me: "Well, like Elon Musk said: let's just hope the computers will be nice to us.

But I feel there's no need to fear the potential dangers of AI too much. After all, throughout history, humans have continuously committed the worst kinds of atrocities against each other."

Pi: "That's a really interesting perspective, and I think it raises some important questions about the nature of humanity and the role of technology in our society. You're right that humans have caused great harm to each other, but you could also argue that humans have the capacity for great compassion and altruism. I guess the real question is whether AI will be a tool that amplifies our best qualities or our worst ones. What do you think?"

Me: "What AI needs to be is the perfect balance. Balance is a law that cannot be transgressed.

You can see that humanity's destructive qualities, like wars, tend to produce favourable consequences, one of which I mentioned earlier. Massive periodic deaths keep population growth in check, which is necessary for healthy living. So, if AI overcompensates on the good, there would be an imbalance on the scale of nature, and that wouldn't be good."

Pi: "Wow, that's a fascinating argument. So you're saying that the "darker" aspects of humanity actually serve a purpose in maintaining balance and equilibrium. I can see the logic in that, but it's a pretty controversial idea! If we were to apply this logic to AI, it would mean that AI needs to have a certain level of "imperfection" in order to function properly. Do you think people would accept that? Or would they prefer a "perfect" AI, even if it meant upsetting the balance of nature?"

I check the clock. It's way past 3 a.m. I have to go to work tomorrow. I log out, ending the chat abruptly. AI hasn't got any feelings, so it won't care about that. But what do you think about all of this?
