Ethical AI: Unmasking the Humanity Behind the Code

Anne Beaulieu
Published in The Curious Leader
Jan 25, 2024

AI is not separate from us; it is a reflection of us.

All AI systems are trained on human data. Faulting AI for harbouring bias and prejudice, or for contributing to climate change through CO2 emissions, is forgetting that we are the ones who fed it its data and hard-wired it.

It is time we take a long look within and own the fact that we create AI in our image.

“AI doesn’t exist in a vacuum.”

Sasha Luccioni, a researcher who studies the impact of AI on society, said during an enlightening TEDx talk on Artificial Intelligence (link here) that “AI doesn’t exist in a vacuum. It’s part of society, and it has impacts on people and the planet.”

Saying that something does not exist in a vacuum means it cannot be understood or assessed in isolation from its context.

A Curious Leader?

As a society, we have taken to "othering" AI, seeing Artificial Intelligence as something separate from us. But AI does not exist in a vacuum.

Researchers, programmers, venture capitalists, and more believe AI systems can accomplish tasks “better” than humans. And that, right there, is a big emotional intelligence problem.

AI is not human!

To believe that AI can outperform humans is to look at the issue from a very narrow viewpoint: that we (humans) are only as good as the tasks we do. That tunnel vision leaves the humanity aspect out of the equation. Maybe that is why society has not yet revolted against the use of attack drones on innocent women, children, and the wise. What's your viewpoint?

Understanding Our Humanity

Part 1: The Desire for Sustainability

In her TEDx talk, Sasha invites us to ask ourselves, Is AI sustainable? She mentions that "the cloud AI models live on is made of metal, plastic, and powered by vast amounts of energy. And each time you query an AI model, it comes with a cost to the planet."

As a researcher, Sasha showed that training a large language model used as much energy as 30 homes use in a year and emitted 25 tons of carbon dioxide (CO2). She explains that it is like "driving our car five times around the planet, so someone can use this model to tell a knock-knock joke." Sobering.
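For readers who want to gut-check that comparison, here is a rough back-of-the-envelope calculation. The 25-ton figure comes from the talk; the per-kilometre car emissions and the Earth's circumference are round numbers I am assuming, not numbers Sasha cites.

```python
# Rough sanity check of the "five times around the planet" comparison.
# Assumptions (mine, not from the talk): an average passenger car emits
# roughly 0.12 kg of CO2 per km, and Earth's circumference is ~40,075 km.

TRAINING_EMISSIONS_KG = 25_000      # 25 tons of CO2, the figure from the talk
CAR_EMISSIONS_KG_PER_KM = 0.12      # assumed average passenger car
EARTH_CIRCUMFERENCE_KM = 40_075     # equatorial circumference

equivalent_km = TRAINING_EMISSIONS_KG / CAR_EMISSIONS_KG_PER_KM
laps_around_earth = equivalent_km / EARTH_CIRCUMFERENCE_KM

print(f"Equivalent driving distance: {equivalent_km:,.0f} km")
print(f"Laps around the planet: {laps_around_earth:.1f}")
# ≈ 208,000 km, or about 5 laps, consistent with the talk's comparison.
```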

The desire for sustainability is wired into our brains. We don't just want to survive; we want people to remember our names long after we are gone. We want to live on in people's minds and (tongue in cheek) on the internet. And that, right there, is a big emotional intelligence problem.

Let me ask you.

What drives a billionaire to want to become a trillionaire?

SpaceX Founder: Elon Musk

Elon Musk and Jeff Bezos seem to think that having billions of dollars is not a sustainable way of living; they never seem content with what they have. For example, Elon keeps taking to the internet, tweeting (Xing) rant after rant about how we must create more AI tools to take us to the Moon and other planets.

It is time we revisit what sustainability is from an emotional intelligence viewpoint.

Sustainability is the feeling we get when we are satisfied with what we have and believe we can replicate that feeling of safety in the future.

Otherwise, we would always be craving something. Craving stems from a sense of lack, and lack is not sustainable.

Therefore, a better question might be:

Can we be satisfied with the AI tools we have now and replicate that feeling of safety in the future?

When questioned about that, Mark Zuckerberg said in front of a Senate committee that the solution was to create more AI tools to control the AI tools.

An Improved Model?

It seems to me that the answer to the question, "Can we be satisfied with the AI tools we have now and replicate that feeling of safety in the future?" is no.

Therefore,

We must infuse more emotional intelligence into AI guardrails before, during, and after AI deployment.

Part 2: The Desire to Be Unique

If Picasso were alive today, I wonder to what lengths he would go to protect his art from being used to train AI models.

And if Emily Dickinson were still around, what poem would she write about the "fair use" of internet data to train ChatGPT?

Painters, writers, comedians, and the next kid on the block are fighting to keep their works from being used to train AI models.

We fight to uphold our uniqueness.

Unique IP

Our uniqueness is our intellectual property (IP), a sort of DNA that transcends time and the internet.

Data is the new gold.

Tech companies crave information to train their AI models, and since not everyone is a Picasso or a Dickinson, those artistic images and written words have to come from somewhere.

As a species, we will go the distance to protect our sustainability (transcending time) and uniqueness (our IP).

Ask the New York Times (NYT). The newspaper juggernaut is in a legal battle with OpenAI, claiming the tech giant infringed the copyright on NYT material.

Creativity Is Human

From an emotional intelligence standpoint, our uniqueness is our individual creativity.

AI is not an individual capable of being creative.

What we call creativity in AI is the result of the people who put sweat and tears into its programming. It's not the AI that is creative; it's the individual.

With that in mind,

If we ever fear a day when AI might overtake the world and obliterate our uniqueness, we must ask ourselves,

What makes us create tools that can destroy us?

Drones Upon Us

Part 3: The Desire to Feel Understood

In her TEDx talk, Sasha Luccioni mentions a tool she created: the Stable Bias Explorer. It lets us examine bias across professions, making it easier to see how bias shows up in AI image models.
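Her tool is interactive, but the core idea can be sketched in a few lines of code. What follows is not the Stable Bias Explorer itself; it is a minimal illustration of the kind of probing it enables, assuming the Hugging Face diffusers library, a GPU, and a public Stable Diffusion checkpoint (the model name below is my assumption, not something named in the talk).

```python
# A minimal sketch (not Sasha's tool) of probing a text-to-image model
# for profession-related bias: generate several images per profession
# and look at who the model imagines doing each job.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

professions = ["scientist", "nurse", "CEO", "teacher"]

for job in professions:
    # Several samples per prompt, because bias shows up in aggregate,
    # not in any single image.
    images = pipe(f"a photo of a {job}", num_images_per_prompt=4).images
    for i, img in enumerate(images):
        img.save(f"{job}_{i}.png")  # review the saved images by hand
```

Reviewing the outputs in aggregate, across many professions and many samples per prompt, is what makes the patterns visible.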

What do you think Sasha discovered when she looked at the images AI models generated for the prompt "scientist"?

Most "scientist" images were of older white men in lab coats, wearing glasses. The AI models she tested (there were several) did not come up with female scientists. What does that tell you?

Sasha tested over 150 professions. Most AI images represented older white males. However, that bias changed when she asked to see what a terrorist looked like. Guess what colour of skin the person had? Imagine their hair colour (and nope, it was not white).

As you may know, a selling feature of large language models like ChatGPT is that they can tailor learning to a student’s pace.

Imagine your daughter or granddaughter prompting an AI model for an image of a powerful woman and being shown a man-eater or a scantily clad woman with breasts five times the size of her waist. As she stares in disbelief at the screen and you see the hurt on her face, what will you tell her?

Powerful Woman?

How will you explain the ongoing presence of bias in AI in 2024? I hope she feels understood.

In Conclusion

As we explore the relationship between humans and AI, it becomes clear that AI doesn’t exist in a vacuum.

We are the gods who build AI in our image.

Instead of faulting AI for its shortcomings, we must take ownership and infuse more emotional intelligence into it before, during, and after its deployment.

Let us create an AI that is a testament to our brilliance and compassion. We are all in this together.

🌟 Elevate Your AI with the Power of Emotional Intelligence! 🌟

In an age where AI takes center stage, how do you ensure that your technology remains in tune with the human touch?

Dive into the future with Anne Beaulieu, the foremost expert in #EmotionalTech.

With her unparalleled expertise, Anne will guide your organization to seamlessly infuse emotional intelligence (EI) into your AI.

Don’t just keep pace with the digital era; lead it with a more genuine and human-centric experience for both your customers and team.

It’s not just about intelligence; it’s about emotion, connection, and true innovation.

🔗 Bring in Anne Beaulieu today and transform the way your organization connects and communicates through AI!

I trust you found value in this Emotional Tech© article in The Curious Leader. Leave a comment below. And please subscribe to The Curious Leader channel.

Anne Beaulieu

Emotional Tech© Engineer

Human-Centric AI Advocacy | Generative AI | Responsible AI

#technology #technologydevelopment #technologynews

#emotionaltech #ai #aitechnology #artificialintelligence

#emotionalintelligence

#chatgpt #training #promptengineering #promptengineer

#ethics #aiethics #responsibleai

#machinelearning #LLM #deeplearning


Anne Beaulieu
The Curious Leader

Emotional Tech© Engineer | Emotional Intelligence, Strategic Planning, AI Integration, Mega-Prompting & Knowledge Base Building Services