The Mounting Perils of Artificial Intelligence

Denzel Damba
5 min read · Oct 23, 2023


Artificial intelligence has advanced exponentially in recent years. Systems like ChatGPT showcase abilities once found only in science fiction. But profound power inevitably carries profound risks. As AI progresses, how can we prevent these technologies from harming or controlling humanity?

In this piece, I will share my perspective on the mounting perils of artificial intelligence based on what I have witnessed so far. Powerful AI systems already exist today that threaten our privacy, security, and autonomy in alarming ways. We ignore these dangers at our own peril.

Surveillance and Control in China

Consider China’s dystopian “social credit system” — essentially a nationwide reputation score. A vast AI-enabled surveillance network monitors citizens’ behavior and assigns “trustworthiness” ratings. Jaywalking, late bill payments, even associating with low-score individuals can crater your score. High scores grant perks, while low ones limit your rights and access.

This system has already been used to ban millions from buying plane and train tickets. It enables unprecedented government control over people’s lives.

We cannot become complacent believing such centralized AI monitoring could never happen in democratic nations. Powerful corporations already track and profile us extensively. Without sufficient safeguards, AI-enabled mass surveillance threatens freedom and privacy worldwide.

Data Abuse by Big Tech

Consider the invasive data harvesting by companies like Facebook and Google. Through tracking our online activities, they can often infer incredibly sensitive information — religious beliefs, sexual orientation, illnesses, political leanings, and more. AI algorithms utilize this data to better target and manipulate users.

For example, Facebook’s advertising platform has allowed discrimination based on protected attributes like race and gender. Algorithms discriminate through proxies they identify in people’s data. One study found Google showed high-paying job ads far more frequently to men than to women with the same qualifications.
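To make the proxy mechanism concrete, here is a toy sketch (entirely synthetic data, not any company's actual system): a scoring rule that never sees a protected attribute can still produce sharply unequal outcomes when an innocuous-looking feature, like zip code, is correlated with group membership.

```python
import random

random.seed(0)

# Synthetic population: the protected attribute never enters the model,
# but zip code is correlated with it (a common real-world proxy).
people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Group membership skews which (hypothetical) zip code a person lives in.
    weights = [8, 2] if group == "A" else [2, 8]
    zip_code = random.choices(["10001", "10002"], weights=weights)[0]
    people.append({"group": group, "zip": zip_code})

# A "neutral" decision rule that only looks at zip code.
def approve(person):
    return person["zip"] == "10001"

def approval_rate(group):
    members = [p for p in people if p["group"] == group]
    return sum(approve(p) for p in members) / len(members)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
print(f"Group A approval rate: {rate_a:.0%}")
print(f"Group B approval rate: {rate_b:.0%}")
# The rule never sees the protected attribute, yet outcomes diverge sharply.
```

Real systems learn such proxies automatically from hundreds of features, which is precisely why "we don't use race or gender" is no guarantee of fairness.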

These practices amount to AI-enabled civil rights violations. But little recourse exists. Tech companies operate largely above the law regarding data use due to regulatory capture and lax oversight. This utter lack of accountability threatens core liberties.

Eroding Privacy and Anonymity

Today’s AI endangers not just freedom of information and belief, but freedom of association. Algorithms correlate data across platforms to de-anonymize internet activity not intended to be linked to one’s identity.

For instance, research shows anonymous website accounts can be tied to real identities by examining writing patterns, ideas expressed, and connections to non-anonymous accounts. This enables building extensive behavioral and psychological profiles without users’ knowledge or consent. No online interaction may truly remain private any longer.
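The core of such de-anonymization is simple pattern matching. As a minimal sketch with hypothetical snippets, comparing word-frequency vectors by cosine similarity already links texts by the same author; real stylometry uses far richer signals (function-word frequencies, character n-grams, punctuation habits), but the principle is the same.

```python
import math
from collections import Counter

# Hypothetical snippets: the first two share an author, the third does not.
known_author = "honestly I reckon the whole scheme is rather dubious, honestly"
anonymous_a  = "honestly, the proposal seems rather dubious to me"
anonymous_b  = "per the quarterly figures, revenue increased substantially"

def word_freqs(text):
    # Crude tokenization: lowercase, strip commas, split on whitespace.
    return Counter(text.lower().replace(",", "").split())

def cosine(c1, c2):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(c1[w] * c2[w] for w in set(c1) & set(c2))
    norm1 = math.sqrt(sum(v * v for v in c1.values()))
    norm2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (norm1 * norm2)

ref = word_freqs(known_author)
sim_a = cosine(ref, word_freqs(anonymous_a))
sim_b = cosine(ref, word_freqs(anonymous_b))
print(f"Similarity to anonymous A: {sim_a:.2f}")
print(f"Similarity to anonymous B: {sim_b:.2f}")
# The anonymous text by the same author scores markedly higher.
```

Scaled to millions of posts and cross-referenced with non-anonymous accounts, this is how "anonymous" writing gets attributed.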

Deepfakes Distorting Reality

Advances in generative AI like deepfakes cast doubt on the authenticity of any media. Sophisticated algorithms already generate doctored images and videos — like political figures saying words they never spoke — that most observers cannot distinguish from reality.

This ability to fabricate falsified evidence could have devastating societal impacts. Public discourse could devolve into chaos and mistrust. Surveillance video or audio used in court may require AI authentication to verify it isn’t manipulated. Journalism and whistleblowing could become fruitless when no evidence carries indisputable credibility.

Undermining Human Judgment and Oversight

In addition to threatening privacy and truth, AI challenges notions of human accountability and control. When opaque algorithms shape outcomes like credit eligibility, hiring, healthcare, and policing, how do we remedy errors or bias? How do we determine who — if anyone — bears responsibility?

Today’s systems lack adequate transparency, oversight, and redress. Ironically, while these algorithms are promoted as more objective than human judgment, the systems themselves defy scrutiny. Attempts to examine algorithms for bias regularly face resistance, including claims of proprietary secrecy. But handing decision-making power to black boxes devoid of accountability is reckless.
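Even a fully opaque model can be audited from the outside if regulators can observe its inputs and decisions. A minimal sketch, using hypothetical hiring data, of the "four-fifths rule" test that US employment regulators have long used as a disparate-impact screen:

```python
# Hypothetical decisions from an opaque model: we observe only each
# applicant's group and whether they were selected.
decisions = ([("men", True)] * 60 + [("men", False)] * 40
             + [("women", True)] * 30 + [("women", False)] * 70)

def selection_rate(group):
    outcomes = [selected for g, selected in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Four-fifths rule: the disadvantaged group's selection rate should be
# at least 80% of the advantaged group's rate.
ratio = selection_rate("women") / selection_rate("men")
print(f"Selection-rate ratio: {ratio:.2f}")
print("Potential disparate impact" if ratio < 0.8 else "Within the 4/5 threshold")
```

The point is that black-box audits like this are cheap and feasible; what blocks them is access to the data, not technical difficulty.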

Autonomous Weapons and Loss of Control

The most terrifying threat from AI comes in autonomous weapons systems. Removing human supervision over lethal force crosses a bright ethical line. Yet militaries are aggressively developing “killer robots” such as drones, tanks, and sentry guns capable of deadly action absent human direction.

Russia is already deploying such systems in Ukraine. The US, China, and others aren’t far behind. But autonomous targeting based on algorithms immediately raises moral hazards. How do we prevent civilian casualties or war crimes? If an autonomous system errs, who is responsible? Geopolitical conflicts could rapidly spiral out of human control.

Some argue autonomous weapons will reduce battlefield deaths by being more precise. But this assumption is unproven, and it risks a destabilizing arms race toward AI weapons deployed without appropriate safeguards. The consequences could be existential. Preventing a horrendous loss of human control is paramount.

Ongoing Overreach in Daily Life

While autonomous weapons represent an extreme, AI overreach increasingly creeps into everyday domains. “Smart city” initiatives use networked sensors, algorithms, and facial recognition to manage public services. But they enable granular monitoring of citizens’ movements and activities. Any such systems must prioritize personal freedom.

AI voice assistants like Alexa raise similar concerns inside the home. Amazon retains transcripts of user interactions by default. What you say in your own home becomes corporate property. Virtual assistants must protect privacy and restrict use of intimate personal data.

These issues will only grow thornier as AI interfaces become more conversational and emotionally responsive. We must intervene before our homes essentially become private surveillance states and our closest confidants algorithmic spies.

Protecting Humanity in an AI Age

In this piece, I aimed to highlight what I consider the most pressing perils of artificial intelligence based on current capabilities. Powerful AI systems already endanger core human rights and liberties without sufficient accountability. But this need not remain our reality.

With ethical technology design, wise policymaking, and a shared commitment to human dignity, we can prevent AI from dehumanizing society. Institutional reforms granting individuals more control over their data would be a critical starting point.

I do not argue against AI’s development, only for appropriate precautions. Handled carefully in the public interest, these technologies could empower humanity in unprecedented ways. But we must stay vigilant. For AI will shape the 21st century profoundly, for good or for ill, depending on the values we instill in its design.

I welcome a diversity of perspectives on this consequential issue. What potential perils most concern you? And what steps should we take to ensure AI promotes human flourishing? Please share your insights below.
