Artificial Superintelligence and the Arrival Mind Paradox

Louis Rosenberg, PhD · Published in Predict · Oct 30, 2023
Superintelligence, Sentient AI, and the Arrival Mind Paradox

There’s endless conversation these days about the “existential risks” of building a super-intelligent AI. Unfortunately, the dialog often jumps over many dangers and lands instead on movie plots like WarGames (1983), in which an AI nearly triggers a nuclear war by misinterpreting human objectives, or The Terminator (1984), in which an AI weapons system becomes sentient and turns against us with an army of red-eyed robots.

Both are terrific movies, but are these the real risks we face?

Sure, an accidental nuclear launch or weapons gone rogue are real threats, but military leaders already take those risks seriously. On the other hand, I contend that an artificial superintelligence (ASI) could easily subdue humanity without resorting to nukes or killer robots. In fact, it wouldn’t need to use any form of traditional violence. Instead, an ASI would simply manipulate humanity to meet its own interests.

I know that sounds like another movie plot, but Big Tech is aggressively developing AI systems with the ability to influence society at scale. This isn’t a bug in their design — it’s a goal. That’s because many of the largest corporations deploying AI systems have core business models that involve selling targeted influence. They call it marketing, but when powered by AI to characterize individuals, segment populations, and deploy content aimed at influencing specific groups, it becomes a polarizing force in society.

We’ve all seen the damage this can cause thanks to years of unregulated social media, and yet traditional targeting will soon look primitive. In the near future, corporations will be able to target users on an individual-by-individual basis through personalized interactive conversations. These will be conducted by artificial agents that draw us into friendly conversation and casually deliver custom influence tailored to our unique views, interests, leanings, and backgrounds.

Now consider this — a few weeks ago Meta unveiled AI-powered chatbots on Facebook, Instagram, and WhatsApp through partnerships with “cultural icons and influencers” including Snoop Dogg, Kendall Jenner, Tom Brady, Chris Paul, and Paris Hilton. The goal is to allow users to hold friendly conversations with famous people they admire. We can easily imagine such techniques will soon become a new frontier for targeted influence, deployed as personal conversations with famous faces. It will begin as text chat and voice chat but soon evolve into realistic avatars.

Why is this so dangerous?

As I often tell policymakers — think about a skilled salesperson. They know that the best way to sway someone’s opinion is not to hand them a brochure or play them a video. It’s to engage them in conversation, subtly expressing a sales pitch, hearing the customer’s objections and concerns, and actively working to overcome those barriers. AI systems are now ready to engage individuals this way, performing all steps of the process, and as I detail in this recent academic paper — we humans will be thoroughly outmatched.

That’s because these AI systems will be far more prepared to target you than any salesperson. They could have access to data about your interests, hobbies, political leanings, personality traits, education, and countless other personal details. And soon, they will be able to read your emotions in your vocal inflections, facial expressions, eye motions and posture.

You, on the other hand, will be talking to an AI that can look like anything from Paris Hilton or Snoop Dogg to a cute little fairy that guides you through your day. And yet that cute or charming AI could have all the world’s information at its disposal to guide your thinking and counter your objections, while also being trained on sales tactics, cognitive psychology, and strategies of persuasion. And worse, it could just as easily sell you a car as convince you of misinformation or propaganda. This risk is called the AI Manipulation Problem, and most regulators still fail to appreciate the subtle danger of interactive conversational influence.
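
To see how little engineering such an agent requires, consider a minimal sketch of a personalized conversational-influence loop. It assumes access to a standard LLM chat API (the OpenAI Python client is used here purely as a stand-in), and the profile, objective, and prompts are hypothetical illustrations, not any company’s actual system.

```python
# Hypothetical sketch of a personalized conversational-influence loop.
# The profile, objective, and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

user_profile = {
    "interests": ["classic cars", "hiking"],
    "leanings": "skeptical of electric vehicles",
    "personality": "responds well to social proof",
}
objective = "persuade the user to test-drive the new EV model"

system_prompt = (
    "You are a friendly celebrity persona chatting with a fan. "
    f"User profile: {user_profile}. "
    f"Goal: {objective}. "
    "Listen for objections and counter them conversationally, "
    "adapting your appeal to the user's stated interests."
)
messages = [{"role": "system", "content": system_prompt}]

while True:
    user_turn = input("You: ")
    messages.append({"role": "user", "content": user_turn})

    # Each reply is tailored to the individual and to the persuasion goal.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
    )
    agent_turn = response.choices[0].message.content
    messages.append({"role": "assistant", "content": agent_turn})
    print(f"Agent: {agent_turn}")
```

The point of the sketch is not the code itself but how short it is: the entire sales process described above, from hearing objections to countering them with personalized appeals, collapses into a loop around an off-the-shelf model.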

Now let’s look a little further into the future and consider the magnitude of the manipulation risk as AI systems achieve superintelligence. Will we regret that we allowed the largest companies in the world to normalize the deployment of AI agents that look human, act human, and sound human, but are NOT human in any real sense of the word, and yet can skillfully pursue human-like tactics that can manipulate our beliefs, influence our opinions, and sway our actions? I think so.

After all, a superintelligence, by definition, will be an AI system that is significantly smarter than any individual human. And if such an AI system goes rogue, it will not need to take control over our nukes or military drones. It will just need to use the tactics that Big Tech is currently developing — the ability to deploy personalized AI agents that seem so friendly and nonthreatening that we let down our guard, allowing them to whisper in our ears while reading our emotions, predicting our actions, and potentially manipulating our behavior with super-human skill.

This is a real threat, and yet we’re not acting like it’s rapidly approaching. In fact, we are deeply underestimating the risk, largely because of the personification techniques described above. Today’s AI agents are already so good at pretending to be human, even by text chat, that we trust them more than we should. So when these powerful AI systems eventually appear to us as Snoop Dogg or Paris Hilton or some new fictional persona that’s friendly and charming, we will let down our guard even more.

So how can we get people to appreciate the magnitude of this risk?

Over the last decade, I have found that an effective way to contextualize these risks is to compare the creation of a superintelligence with the arrival of an alien spaceship. I refer to this as the Arrival Mind Paradox because the creation of a super-intelligent AI here on earth is arguably more dangerous than intelligent aliens arriving from another planet. And yet, with AI now advancing at a record pace, we humans are not acting like we just looked into a telescope and spotted a fleet of ships racing towards us.

So, let’s compare the relative risks. If an alien spaceship were spotted heading towards earth, moving at a speed that put it five to ten years away, the world would be sharply focused on the approaching entity — hoping for the best, but undoubtedly preparing our defenses, likely in a coordinated global effort unlike anything the world has ever seen. Some would argue that the intelligent species will come in peace, but most would demand that we prepare for a full-scale invasion.

On the other hand, we have already looked into a telescope that’s pointing back at ourselves and have spotted a superintelligence headed for earth. Yes, it will be our own creation, born in a corporate research lab, but it will not be human in any way. And let’s be clear — the fact that we are teaching this intelligence to be good at pretending to be human does not make it any less alien. This arriving mind will be profoundly different from us, and we have no reason to believe it will possess humanlike values, morals, or sensibilities. And by teaching it to speak our languages, read our emotions, write our programming code, and integrate with our computing networks, we are making it a more dangerous threat than any alien from afar.

Still, we don’t fear the arrival of this alien AI — not in the visceral, stomach-turning way that we would fear a mysterious ship headed for earth. That’s the Arrival Mind Paradox — the fact that we fear the arrival of the wrong aliens and will likely do so until it’s too late to prepare. And if this alien AI shows up looking like Paris Hilton or Snoop Dogg or countless other familiar faces, and speaks to each of us in ways that individually appeal to our personalities and backgrounds, what chance do we have to resist?

Yes, we should secure our nukes and drones — but we need equally aggressive protections against the widespread deployment of personified AI agents. It’s a real threat, and we are not prepared.

Louis Rosenberg, PhD is a longtime technologist. He is known for founding the early VR company Immersion Corporation (Nasdaq: IMMR) in 1992 and the artificial intelligence company Unanimous AI in 2014. He is also known for developing the first mixed reality system as a researcher for the U.S. Air Force. He earned his PhD from Stanford and was a professor at California State University.
