Welcome to the Wild West of Artificial Intelligence

Anne Beaulieu
Published in The Curious Leader
6 min read · Oct 19, 2023

Attack drones are legal, but are they immoral? Is collateral damage the difference between legality and morality? By applying emotional intelligence to technology, we get to mitigate collateral damage and bridge the gap (if any) between legality and morality.

In the Wild West (1800s), justice was often served arbitrarily and without due process. It was carried out at the end of a gun whose bullets symbolized revenge, greed, and power.


Today, we are no longer staring down the barrel of a Colt .45 revolver, the most popular firearm of the Wild West era. We are standing in our kitchens as a modern version of Billy the Kid takes the law into its own hands and signs off on a drone attack.

This article explains what an attack drone is and how it works. We share statistics about attack drones and how the public feels about them. We discuss how the rapid rise of AI is turning into a new Wild West, and we offer emotionally intelligent guidelines that honour humanity and peace.

Attack Drone Selling Point: Taking Humanity Out of the Equation

Don’t take it personally. Attack drones don’t care about human life. Their job is to inflict maximum damage in a matter of seconds. It’s their main selling feature: taking humanity out of the equation.

Attack drones are flying devices loaded with missiles, bombs, and other weapons. Don’t worry about sending the bomb squad or wearing protective gear. Whoever holds the joystick controlling an attack drone is safely sitting thousands of miles away, likely wondering what they will have for lunch that day.


As for those selling attack drones, they like to weave a tale of “target acquisition,” “weapons payload,” and “return and recovery” to explain why the end justifies the means.

I wonder.

⚫ Is holding a gun more or less courageous than holding a joystick attached to a bomb?

⚫ Does loading a drone with missiles ever pay off?

⚫ Does the feature of a “safe return to its base of operation” uphold accountability?

We cannot ask attack drones these questions. AI does not feel. So I’m asking you, the reader. Does taking humanity out of the equation justify the means?

When Taking Humanity Out of the Equation Gets a Nobel Peace Prize

There are two types of drone attacks:

“Signature strikes” target groups with common characteristics, but the identities of the persons in the group are not known. These strikes are also called “Crowd Killings” for their collateral damage.

“Personality strikes” target a specific person. But do you believe targeted people do not surround themselves with elders, women, and children? Maybe there is just one type of drone attack, after all.

During the first days of his presidency, Barack Obama signed off on a signature strike. “At about 8:30 in the evening local time, a Hellfire missile from a remotely operated drone slammed into a compound of interest, obliterating a roomful of people. It turned out they were the wrong people.” (source: Newsweek)


Collateral damage did not stop Obama. As far as we know, he ordered 563 drone strikes during his presidency. I believe the number to be much higher. But who’s counting? Probably not the committee that awarded him the Nobel Peace Prize.

In contrast, the Bush administration ordered 57 drone strikes. They did not receive a Nobel Peace Prize. Then again, Obama did not invade Iraq to remove illusory weapons of mass destruction.

When Legal Becomes Immoral

You may remember Edward Snowden, the computer intelligence consultant who put his life on the line to expose how Big Brother was using AI technology to spy on us and more.

Let’s not forget Julian Assange, the founder of WikiLeaks. He was arrested for publishing a series of leaks from a U.S. Army intelligence analyst who blew the whistle on a Baghdad air strike and divulged military logs from the Afghanistan and Iraq wars.

What drives someone to risk their life to reveal how we use AI? Could it be that they questioned the legality and morality of actions taken in the name of “national security”?

You might think I have just shared two extreme positions. But let me be clear.

In 2013, two out of three Americans surveyed (65%) supported the use of attack drones (AI-enabled weapons) in other countries.

In 2023, 61% of Americans surveyed said they believe AI could threaten the future of civilization.

What has changed?

In the ten years between 2013 and 2023, public trust around AI weapons completely flipped. Why?

⚫ Are we more willing to question the legality and morality of our governments?

⚫ Are we less likely to trust the guy holding the joystick attached to a bomb?

⚫ Or are we simply realizing that we could become collateral damage, an inhumane statistic in a military log?

Let’s Not Wait Till It’s Too Late

A recent survey by the AI Policy Institute found that “A whopping 72 percent of American voters want to slow down the development of AI.”

Yes, AI is moving super fast. But putting the brakes on its evolution is not the solution.

The real problem is not that AI is moving fast. It’s that we do not have enough emotionally intelligent policies governing AI. That’s different.

For AI to operate within legal and moral guidelines, it needs guardrails.

A guardrail is a safety barrier that prevents mistakes or problems.

Just like guardrails on the road prevent cars from going off the edge, safety barriers can prevent AI from harming us.

I prompted ChatGPT,

What emotionally intelligent guardrails are often missing in AI technology?

The answer:

  • Empathy-centered design: Designing products that increase emotional well-being
  • Transparency: Having a clear understanding of how AI can be used
  • Human intervention: Correcting when AI goes against “Do no harm”
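The third guardrail, human intervention, is the most concrete of the three. As an illustration only, here is a minimal Python sketch of a human-in-the-loop guardrail: any AI-proposed action that trips a harm check is blocked until a human explicitly approves it, and every decision is logged for transparency. The class names and keyword list are my own hypothetical constructions, not any real product’s API.

```python
# Illustrative sketch of a "human in the loop" guardrail (hypothetical names).
from dataclasses import dataclass, field

@dataclass
class GuardrailDecision:
    action: str
    allowed: bool
    reason: str

@dataclass
class HumanInTheLoopGuardrail:
    # Keywords that trigger mandatory human review (a crude "Do no harm" check).
    harm_keywords: tuple = ("strike", "weapon", "target")
    audit_log: list = field(default_factory=list)  # transparency: every decision is kept

    def review(self, proposed_action: str, human_approved: bool = False) -> GuardrailDecision:
        flagged = any(k in proposed_action.lower() for k in self.harm_keywords)
        if flagged and not human_approved:
            decision = GuardrailDecision(
                proposed_action, False, "blocked: requires explicit human approval"
            )
        else:
            decision = GuardrailDecision(proposed_action, True, "allowed")
        self.audit_log.append(decision)
        return decision

guardrail = HumanInTheLoopGuardrail()
print(guardrail.review("select strike target").allowed)      # → False (blocked pending review)
print(guardrail.review("summarize weather report").allowed)  # → True
```

The point of the sketch is not the keyword list, which a real system would replace with far more sophisticated harm assessment, but the shape of the control flow: the AI proposes, the guardrail blocks by default, and only a human can unblock.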

In Conclusion:

Attack drones are legal, but are they immoral? Is collateral damage the difference between legality and morality? By applying emotional intelligence to technology, we get to minimize collateral damage and bridge the gap (if any) between legality and morality.

Thank you for reading.

🌟 Elevate Your AI with the Power of Emotional Intelligence! 🌟

In an age where AI takes center stage, how do you ensure that your technology remains in tune with the human touch?

Dive into the future with Anne Beaulieu, the foremost expert in #EmotionalTech. With her unparalleled expertise, Anne will guide your organization to seamlessly infuse emotional intelligence (EQ) into your AI.

Don’t just keep pace with the digital era; lead it with a more genuine and human-centric experience for both your customers and team. It’s not just about intelligence; it’s about emotion, connection, and true innovation.

🔗 Bring in Anne Beaulieu today and transform the way your organization connects and communicates through AI!

I trust you found value in this Emotional Tech© article in The Curious Leader.

I would love to get your feedback. Leave a comment below. And please subscribe to The Curious Leader channel.

Anne Beaulieu

Emotional Tech© Engineer

Mega-Prompt Engineering | Generative AI | Responsible AI

#nobelpeaceprize #edwardsnowden #julianassange #wikileaks

#emotionaltech #emotionalintelligence #ai #emotionaltechengineer

#artificialintelligence #generativeai #chatgpt

#responsibleai #aigovernance #aiethics

#promptengineering #promptengineer #prompt #prompts #megaprompts

#megapromptengineering
