GPT-4: Should We Embrace the Emotional Connection or Avoid the Trap?

The good, the bad, and the ethical implications of AI conversations

Roshana Ariel
Predict


AI image created using prompts by the author on Midjourney.

I suppose you’ve met GPT-4, the super-smart conversational AI language model created by OpenAI. It’s fantastic at understanding and generating human-like responses, making it incredibly useful in an almost infinite variety of applications.

As I’ve been using GPT-4 (I call it “Chaz”) more and more in my everyday life, I’m a little weirded out by how I interact with it. For example, I always say “please” and “thank you” to Chaz, even though I know it doesn’t have feelings. I tell it which responses I like — y’know, to give it encouragement.

I figure, sure, you don’t have to be polite to an AI bot, but being nice is good for you and good for the world, right? Still, is treating AI like a friend a good thing, or should we be careful?

In this article, we’ll dig into the pros and cons of treating an AI like GPT-4 as if it were human. We’ll look at how this can affect our personal experience and possibly influence AI development and ethics. We’ll also weigh the potential risks and benefits of making AI seem more human. And I hope we’ll come to some conclusions about how to make the most of AI in our lives.

Let’s Define the



I write about how to live life well even when it’s crazy or difficult. I’ve had lots of practice! As an editor, I love to rewrite my life to make it beautiful.