Empathetic AI? Can We Teach EQ to AI?

Anne Beaulieu
Published in The Curious Leader
6 min read · Aug 9, 2023

The three of them sat around the table in the coffee shop. Jane hadn’t known Emma for a long time, but she could see Emma was in pain. In her most loving tone, she tried to give Emma some guidance. Emma seemed to calm a little, and Jane felt good about what she had said and how empathetic she had been. Emma left. The minute Emma was out of sight, Jane’s other friend turned to her and said, “How could you be so mean?”

The purpose of this article is two-fold:

  • Explore the steps that can lead us to feel empathy.
  • Address the uphill battle to teach AI and humans to “Do no harm.”

What Steps Can Lead Us to Feel Empathy?

A definition of “empathy” that resonates with people is “putting ourselves in someone else’s shoes.” But what are the steps that can lead us to do that?

To help us answer that question, here is a feedback-loop model I found in Dov Baron's book, DON'T READ THIS… Your Ego Won't Like It!

Here’s how a feedback loop works:

  • Every day, we have about 12,000 to 60,000 thoughts.
  • When a thought keeps running through our minds, it triggers an emotion.
  • If we keep thinking that thought with that emotion, a feeling comes up (a feeling runs much deeper than an emotion).
  • That feeling dips into our pool of beliefs and memories and retrieves what we have connected to that feeling, which triggers a thought.
  • If we keep thinking that thought, the loop repeats.

There’s a reason why we hear that we reap what we sow. That’s because life is a feedback loop, and we get back what we put in.
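The steps above can be sketched as a toy simulation. To be clear, this is my own illustrative construction, not anything from Baron's book: the dictionary entries and the cycle count are invented stand-ins for our pool of beliefs and memories.

```python
# Toy model of the thought -> emotion -> feeling -> memory -> thought loop.
# All names and values here are illustrative assumptions.

beliefs_and_memories = {
    "gratitude": "a friend who helped me once",
    "worry": "a deadline I missed",
}

def run_feedback_loop(thought, emotion, cycles=3):
    """Each pass deepens the emotion into a feeling, which retrieves
    a memory tied to it, which in turn seeds the next thought."""
    for _ in range(cycles):
        feeling = emotion                       # a repeated emotion settles into a feeling
        memory = beliefs_and_memories[feeling]  # the feeling retrieves what we link to it
        thought = f"thinking about {memory}"    # the retrieved memory triggers the next thought
        print(f"{feeling!r} -> {thought}")
    return thought

run_feedback_loop("I missed my deadline", "worry")
```

Notice that every cycle returns the same kind of thought: the loop validates itself, which is the "we get back what we put in" point in code form.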

Family Matters

Our feedback loop shows us what matters to us and why. It also reflects how much healing we have done, which shapes whether we feel good about ourselves and are good to others.

Running Scenarios

Remember the scenario at the start of this article? Let us run a feedback loop of empathy through it from two perspectives: Jane's (A) and that of Jane's other friend (B).

A

Jane could see Emma was in pain. The thought of seeing her friend in pain triggered her desire to help. That desire to help converted itself into a feeling of empathy. Referring to her own past (dipping into her pool of beliefs and memories), Jane offered Emma some guidance, which she thought was well received by Emma.

Let me ask you.

Can we show empathy when we can relate to someone’s pain? Think about it. What makes you say that?

B

Jane’s other friend did not acknowledge Emma’s pain. Perhaps she was uncomfortable with it because it triggered something uneasy in her. That uneasiness generated a feeling of judgment (not wanting to relate) that made her lash out at Jane after Emma left the coffee shop.

Let me ask you.

Can we show empathy when we cannot relate to someone’s pain? Think about it. What makes you say that?

By the way,

Whatever conclusion you have reached, you are right!

WARNING:

Your feedback loop will always validate what you choose because your feelings decide what matters to you. It’s a feedback loop!

How Can We Teach AI and Humans to “Do No Harm”?

What Is Harm?

Neuroscientist Leonardo Moore went on the Science podcast to discuss how AI might need empathy to do no harm.

Intrigued, the podcast host asked him how he planned to program a robot with empathy. His answer was, “by proxy.”

“By proxy” means having someone or something act on your behalf. For example, some organizations let you vote by proxy, such as via email, when you cannot vote in person.

Empathy by Proxy

Mr. Moore and his team proposed that AI could, perhaps, develop empathy by proxy through sensors attached to it. The AI would learn to “make decisions” and “feel” the consequences of its output. To know what counts as a good or bad decision, the AI would follow a code of conduct that includes “Do no harm.” The sensors would act as moral guides.
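As a rough sketch of how a sensors-plus-code-of-conduct setup might gate an agent's actions: this is entirely my own toy construction, not Mr. Moore's actual system, and the harm scores and threshold are invented for illustration.

```python
# Toy "empathy by proxy": a stand-in sensor reports predicted consequences,
# and a code-of-conduct rule vetoes actions that violate "Do no harm".
# The actions, scores, and threshold below are invented assumptions.

def sensor_predicted_harm(action):
    """Stand-in for a hardware sensor: returns a predicted harm score in [0, 1]."""
    harm_scores = {"hand over medicine": 0.0, "push past bystander": 0.8}
    return harm_scores.get(action, 0.5)  # unknown actions are treated as risky

def do_no_harm_filter(actions, threshold=0.3):
    """Keep only the actions whose predicted harm stays under the threshold."""
    return [a for a in actions if sensor_predicted_harm(a) < threshold]

print(do_no_harm_filter(["hand over medicine", "push past bystander"]))
# -> ['hand over medicine']
```

The point of the sketch is that the "empathy" lives entirely in the rule and the sensor reading, not in the machine; the agent filters actions without feeling anything about them.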

Let us put the concept of “empathy by proxy” through a feedback loop.

AI

The AI robot gets a prompt (AI cannot have a thought by itself) that tells it, “Do no harm.” That prompt triggers no emotion and generates no feeling in the AI (AI does not feel). The prompt dips into the AI’s programmed data (it has no personal experiences) and retrieves what would answer the query. The AI does not ponder where the prompt came from or what led a human to make it do what it does. AI follows orders.

Let me be bold.

  • AI does not feel because it is not sentient. It’s a machine.
  • AI cannot relate to the feelings and emotions of others. It does not know what feeling is.
  • AI cannot form emotional bonds. It does not care who you are.
  • AI has no personal understanding of someone’s pain and suffering.
  • AI has no emotional depth. It has no wisdom because it has not lived.

When We Become Aware of What Our Feelings Validate: Genuine Empathy
  • We stop blaming others for our misery. We get that we create our life.
  • We take responsibility for what happens in our life. We say bye-bye to drama.
  • We pay attention to our thoughts, words, feelings, and decisions. We choose wisely.
  • We question our beliefs. We learn to think for ourselves.
  • We heal our traumas to feel good about ourselves and be good to others. We care about something greater than ourselves.

Empathetic AI? Can We Teach EQ to AI?

One of the reasons Mr. Moore and his team work so hard at programming AI with ‘empathy’ is that they are afraid of what humans could program AI to do to us. In a way, they want AI to protect us from our own lack of empathy. To me, that’s ironic.

Let me ask you.

Is it AI’s fault if we fail to feel empathy for our peers and use it to do harm for personal gain?

I believe that’s the real conversation we all need to have. That’s the starting point.

Thank you for reading.

Hi! I’m Anne Beaulieu.

I trust you found value in this Emotional Tech© article in The Curious Leader. I would love to get your feedback about it. And please subscribe to The Curious Leader channel.

What questions might you have about the application of EQ to technology? Let me know in the comments.

Anne Beaulieu

Emotional Tech© Engineer

Mega-Prompt Engineering | Generative AI | Responsible AI

#emotionaltech #EQ #ai #aitechnology #chatgpt

#emotionalintelligence #technology

#promptengineering #promptengineer #prompt #prompts #megaprompts

#ethics #aiethics #responsibleai

#machinelearning #LLM
