The Silent Alarm Bells for the Future of AI

Anne Beaulieu
Published in The Curious Leader
Jan 11, 2024 · 7 min read

In an interview with Trevor Noah (link here), Sam Altman, the CEO of OpenAI, said that AI safety is not a binary (yes/no) question but a risk/reward trade-off. That thinking alone is enough to raise alarm bells about what our future with AI will likely be.

As tech innovations get brought to market faster and faster, Artificial Intelligence (AI) is a sharp test of our levels of clarity and wisdom.

As we race towards a future that includes more and more AI tools, a vital question looms over all our heads:

Are we infusing enough emotional intelligence (EI) into AI guardrails before deploying AI systems?

The desire to be first in the AI race creates systems that may be super fast but lack concern for life as we know it.

4 Hidden Threats of the AI Race

This article dives into four issues within the AI race and proposes emotional intelligence insights as a guiding tool for developing responsible AI.

#1 Rush to Deploy AI: Speed Over Depth

In the catfight to be first in the AI race, tech companies often put speed over depth. That means:

  • Many AI systems are brought to market with fewer safety measures (parameters) and less training than they need.
  • Compliance works in reverse: AI guardrails often get added after the system has been brought to market, usually when major safety issues get flagged and public outcry ensues.
  • Hungry for data sets, tech companies buy data pools from sources that may not have cleaned them: training data sets often arrive tainted with racism, other biases, illicit images, and more.

The rush to deploy often produces AI systems that may be fast but are ethically immature. That boldly contradicts Sam Altman’s belief that “AI will not have the deep human flaws.”

The solution: Emphasizing emotional intelligence (EI) in AI training

Emotional intelligence in AI involves training systems to recognize, interpret, and respond to human emotions effectively.

By prioritizing EI training in the development phase, we can build AI systems that are more aligned with human values and ethics.

That involves adopting universal safety guardrails, clean data sets, global compliance standards, and deeply human-centric values/ethics such as empathy and compassion.
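
To make that concrete, here is a toy sketch of what an EI-aware response path could look like: the system first attempts to recognize the user’s emotional state, then adapts its reply. The keyword lexicon, function names, and canned responses below are hypothetical placeholders; a real system would rely on trained emotion classifiers rather than keywords.

```python
# Toy sketch (not a production system): recognize the user's emotional
# state first, then adapt the response. The lexicon and replies are
# hypothetical placeholders.
DISTRESS_SIGNALS = {"hopeless", "scared", "alone", "hurt", "desperate"}

def detect_emotion(text: str) -> str:
    """Return a crude emotion label based on keyword matching."""
    words = set(text.lower().split())
    return "distress" if words & DISTRESS_SIGNALS else "neutral"

def respond(user_input: str) -> str:
    if detect_emotion(user_input) == "distress":
        # EI-aware guardrail: acknowledge the emotion before anything else.
        return "I hear that this is hard. Would you like help finding support?"
    return "Here is the information you asked for."

print(respond("I feel hopeless and alone tonight"))  # distress path
print(respond("What is the capital of France?"))     # neutral path
```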

#2 The Challenge of Rewiring AI

Once coded, AI is notoriously hard to rewire. Unlike the human brain, which has a natural agility to heal and adapt swiftly, AI lacks that plasticity. Its hardwiring makes it hard to re-code, even after a system has launched and malfunctioned.

A solution: Building digital plasticity into AI systems

It is possible to make AI more responsive to change. We can create AI systems that learn from empowering data before, during, and after deployment. Empowering data comes from a soulful purpose that generates accountability and well-being.

That emotionally intelligent approach allows the programming team and AI to keep improving. It also creates powerful algorithms sensitive to humanity’s needs.
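
As one illustration of that plasticity, here is a minimal sketch of a model that keeps learning after deployment, using scikit-learn’s partial_fit for incremental updates. The feature vectors, labels, and feedback source are placeholder assumptions made for the example.

```python
# Minimal sketch of "digital plasticity": a model that keeps learning
# from human-reviewed feedback after deployment instead of staying
# frozen at launch. Features and labels are random placeholders.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
classes = np.array([0, 1])  # e.g., 0 = acceptable output, 1 = harmful output

model = SGDClassifier(loss="log_loss", random_state=42)

# Pre-deployment training batch (placeholder data).
X_train = rng.random((100, 8))
y_train = rng.integers(0, 2, 100)
model.partial_fit(X_train, y_train, classes=classes)

# Post-deployment: fold in each piece of reviewed feedback as it arrives.
def incorporate_feedback(features: np.ndarray, label: int) -> None:
    model.partial_fit(features.reshape(1, -1), np.array([label]))

incorporate_feedback(rng.random(8), 1)  # a flagged harmful output
```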

You might want to know that Sam Altman dreams of the day when “AI spills out more words than all of humanity together.” But when Trevor Noah asked, “How do we teach an AI to learn beyond the limited data we have put into it?” Sam Altman answered, “We don’t know yet.” It’s a conflicted perspective, in my opinion.

#3 The Erosion of the First Law of Robotics

The first law of robotics states: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

That law (the first of three) was the brainchild of Isaac Asimov, one of the “Big Three” science fiction writers and a professor of biochemistry.

As much as you may agree with the first law of robotics, that robots shall do no harm to human beings, the law is obsolete in today’s context. AI systems are already being used in warfare (attack drones) and in cyber attacks.

Some say that after AI goes nuclear, the war that follows will be fought with sticks and stones. That war is already here, considering that “risk-reward” can move the “safety” goalpost towards the highest bidder.

A solution: Solidifying ethical boundaries in AI

Addressing ethics requires emotional intelligence. Consider this. Can we be ethical without self-awareness? Is the purpose of ethics to include all of us or exclude some of us? What are the ethics of world peace?

Solidifying ethical boundaries in AI means implementing firm, non-negotiable ethical guidelines within AI systems.

AI systems could be programmed to evaluate their guardrails’ emotional and ethical impacts on humankind. For example, if all AI systems were programmed to refuse to launch missiles, how would that nurture world peace?
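
To make the missile example concrete, here is a toy sketch of a hard guardrail: every requested action is checked against a non-negotiable deny list before anything executes. The action names and deny list are hypothetical, chosen only to illustrate the idea.

```python
# Toy sketch of a hard ethical guardrail: every requested action is
# checked against a non-negotiable deny list before it can execute.
# Action names and categories are hypothetical, for illustration only.
FORBIDDEN_ACTIONS = {"launch_missile", "target_human", "disable_safety"}

def execute(action: str, payload: dict) -> str:
    if action in FORBIDDEN_ACTIONS:
        # Refusal is absolute: no risk/reward calculus can override it.
        return f"REFUSED: '{action}' crosses a hard ethical boundary."
    return f"Executing '{action}' with {payload}"

print(execute("launch_missile", {"target": "grid-7"}))    # refused
print(execute("send_report", {"to": "oversight-board"}))  # allowed
```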

During his interview with Trevor Noah, Sam Altman said that OpenAI sets the limits on their AI systems. However, he did say that we need to “democratize AI governance.”

To democratize means to make something accessible to everyone. Is Sam Altman talking about everyone having a say in how AI gets to function? Or does he believe that AI governance should be determined by a slightly expanded circle of the select few, like the military, the government, and elitist groups?

#4 The Influence of Elite Agendas

The development and deployment of AI systems are determined by the agendas of powerful groups or individuals who may or may not be ethical.

A solution: Universal AI ethics

Who are the stakeholders in the fast-paced world of technological innovation? The military? The government? The tech companies? The venture capitalists? The students who pace their learning with ChatGPT? The business owners who install chatbots on their sites? Who benefits from an AI system that may be fast but lacks concern for life as we know it?

Universal AI ethics goes beyond the personal and political agendas of a few; it upholds the greater good of all. The student in Ukraine cares very much about the impact of attack drones on their country.

After witnessing the damage of AI warfare, they would likely stare down those who affirm that safety is a matter of ‘risks and rewards.’ I doubt they would see the reward in bits of human flesh clinging to rocks.

As I write these words, I know I might get flagged for bringing up the obvious again: The imperative need for emotional intelligence in AI.

In Conclusion

Infusing emotional intelligence into AI guardrails is not just a technical challenge; it’s a moral imperative. As tech innovations get brought to market faster and faster, and we race towards a future that includes more and more AI tools, the decisions we make today will shape what our future with AI becomes.

It is within our power to build AI systems where technology and human values coexist harmoniously.

Sam Altman said, “A more sophisticated AI will not get fooled that easily.” Let me be clear. Foolishness is a lack of judgment akin to stupidity. It’s the opposite of discernment, an emotional intelligence skill.

When Sam said those words, perhaps he wished for an emotionally intelligent AI. If that’s the case, someone must remind him that AI does not feel and therefore cannot relate to our humanity. That is why we must infuse emotional intelligence into AI guardrails before, during, and after deployment.

Let us create a future where AI is both emotionally intelligent and ethically sound.

🌟 Elevate Your AI with the Power of Emotional Intelligence! 🌟

In an age where AI takes center stage, how do you ensure that your technology remains in tune with the human touch?

Dive into the future with Anne Beaulieu, the foremost expert in #EmotionalTech.

With her unparalleled expertise, Anne will guide your organization to seamlessly infuse emotional intelligence (EI) into your AI.

Don’t just keep pace with the digital era; lead it with a more genuine and human-centric experience for both your customers and team.

It’s not just about intelligence; it’s about emotion, connection, and true innovation.

🔗 Bring in Anne Beaulieu today and transform the way your organization connects and communicates through AI!

I trust you found value in this Emotional Tech© article in The Curious Leader. I would love to get your feedback. Leave a comment below. And please subscribe to The Curious Leader channel.

Anne Beaulieu

Emotional Tech© Engineer

Human-Centric AI Advocacy | Generative AI | Responsible AI

#technology #technologydevelopment #technologynews #artificialintelligence #AI #aitechnology #advocacy #emotionaltech #emotionalintelligence #ethics #aiethics #responsibleai #asimov #aiguardrails #promptengineering #chatgpt #training #machinelearning #LLM #deeplearning
