THREAT CATALOG

Artificial Intelligence In The Age Of Trump

Will our immersive future be evil, euphoric or mundane?

Peter Feld
There Is Only R

--

Image: Pokémon GO and Getty via deathandtaxes

This month’s frog-march of tech execs to Trump Tower made me think back sadly to last summer, when “President Trump” was a dadaesque joke and my main worry concerning VR and AR was that the virally popular location-based AR game Pokémon GO could become a tool of the surveillance state.

It was an innocent time.

Trump’s tech-and-pony show was a sobering reminder of who will be in charge as the coming explosion in immersive technologies rivals or exceeds the impact of the ‘90s Web and ‘10s smartphone revolutions.

Now, these dazzling innovations will develop under an administration with authoritarian leanings, an opportunistic view toward the rule of law, and no external constraints. Likening the “grim” assembled Silicon Valley executives to Tom Wolfe’s ‘80s-finance “Masters of the Universe,” Fortune’s Andrew Nusca observed, “One glance at a photograph from the meeting reveals that today’s Masters have entered another dimension.”

Earlier this month, the 92nd Street Y hosted its Future Today summit. A dazzling video at the start of a session with IBM Watson’s CTO Rob High showcased the wonders that AI will bring in just the next few years. But as the video recounted the evolution of Alan Turing’s vision of a future when humans and machines are indistinguishable, I began to worry about the implications of these “deep learning systems” being developed under President Trump’s repressive control.

Artificial Intelligence video and session at the 92nd St. Y’s Future Today summit

“In the next five years,” promises the video, neural networks “will assist teachers in the classroom… AI will completely transform life on earth. For some people, that transformation won’t be a welcome change.” The video ends ambiguously: “We are creating the future in our own image… So what happens when, after 150 years, we finally get what we say we want? What happens in the near future, when we are surrounded by machines that can actually think?”

In the next five years… yes. And who’s running things during that time? An unrepentant sexual predator with authoritarian leanings and a cheerful disregard for agreed-upon facts. A grifter with Paranoid Personality Disorder who casually dismisses civil liberty protections and due process, an ally of technofuturist sociopath Peter Thiel — with a similar willingness to use the courts (and the services of Gawker-killing attorney Charles Harder) to shut people up.

“In the next five years AI will completely transform life on earth.” Yes. And who’s running things during that time?

Advances in artificial intelligence will power the VR and AR industries that Silicon Valley is racing to launch. Joshua Kopstein of The Intercept warns that “the very systems that enable immersive experiences are already establishing new forms of shockingly intimate surveillance.” As Kopstein foresees it, “the psychological aspects of digital embodiment — combined with the troves of data that consumer VR products can freely mine from our bodies, like head movements and facial expressions — will give corporations and governments unprecedented insight and power over our emotions and physical behavior.”

Kopstein describes a new “emotion detection” analytics industry that uses sensors and head/eye tracking to “unlock human emotion” (as Baton Rouge VR startup Yotta Technologies claims to do). The resulting inner portrait, prized by marketers as well as spooks, will become all the more rich and complete if VR outgrows head-mounted devices (HMDs) and is beamed “straight into the ocular nerve,” as Creative Control director Benjamin Dickinson told us may happen.

The combination of unchecked VR surveillance and a government headed by Donald Trump brings to mind an ominous 2014 Slate article by David Auerbach about “Roko’s Basilisk,” an “evil, godlike form of artificial intelligence, so dangerous that if you see it, or even think about it too hard, you will spend the rest of eternity screaming in its torture chamber.”

As Auerbach reported, Roko’s Basilisk was proposed in a controversial message posted to technofuturist site LessWrong:

What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way…for this AI to punish people today who are not helping it come into existence later? In that case, weren’t the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer?

According to “timeless decision theory,” you will be punished in the future for failing now to help bring about the all-powerful, malevolent AI — which may have created you just for the purpose of simulating the universe! In that case, warns Auerbach, “you’d best make sure you’re devoting your life to helping create Roko’s Basilisk! Because, should Roko’s Basilisk come to pass (or worse, if it’s already come to pass and is God of this particular instance of reality) and it sees that you chose not to help it out, you’re screwed.”

What if Roko’s Basilisk is Donald Trump?

Does anyone seriously doubt that Trump, a predator capitalist who looks aspirationally at Vladimir Putin and his $85 billion empire, would use the Singularity to punish those who resisted his rise — bigly? Putin’s vengeance, of course, knows no borders: just this week, a Kazakh man was sentenced to three years in a penal colony for insulting the Russian leader on Facebook.

What can “friendly AI” mean to technofuturists like Trump ally Peter Thiel — who clearly believes retribution for dissent should be anonymous and fatal?

As The New Republic’s Brian Beutler wrote earlier this month, Trump’s “witch hunts” — demanding the names of Energy Department officials working on climate issues (and, more lately, State Department officials involved with gender equality initiatives) — and his public campaigns against jurists and political journalists have already had a chilling effect. “Just as we can’t know how many civil servants will self-censor, we can’t know what effect Trump’s press intimidation is having and will have on the kind of coverage he receives, or whether judges will treat him more leniently going forward out of fear of retribution.”

In other words, many in the press, government and the judiciary are probably already treating Trump like he’s Roko’s Basilisk. Yes, only conspiracy theorists worry about governmental mind control — but is that fear still so ridiculous once the government can access the emotion detection industry’s rich data, perhaps in the name of fighting ISIS? Facebook and other tech companies are already opaque about whether they would share data with the Trump regime to help develop a potential registry of Muslims. What will happen when that data is turbocharged by artificial intelligence?

Silicon Valley futurists claim to be concerned with the shape AI could take. LessWrong founder Eliezer Yudkowsky’s Machine Intelligence Research Institute promotes “friendly AI” — though one might question what “friendly AI” could mean to his funders like Thiel, who clearly believes retribution for dissent should be anonymous and fatal.

The tech industry is already opaque about its willingness to share data to develop a potential Muslim registry. What happens when that data is turbocharged by AI?

A more uplifting vision comes from another Yudkowsky backer, Ray Kurzweil, who forecasts “nanobots that can go into a brain non-invasively through the capillaries,” giving us an “additional neocortex” that we’ll use to “add additional levels of abstraction.” When this happens, Kurzweil predicts, “We’ll create more profound forms of communication than we’re familiar with today, more profound music and funnier jokes. We’ll be funnier. We’ll be sexier. We’ll be more adept at expressing loving sentiments.” Author Kevin Kelly offered mild skepticism of euphoric “thinkism,” which he described as “this idea that thinking about things can solve problems — that if you had an AI that was smart enough, you could solve cancer because you could think about it.”

Magic Leap, the Florida-based company Kelly profiled last spring for Wired, has been the focus of some of the loftiest expectations for AR. Those hopes took a hit recently when it emerged that demo videos the company uses to show off its technology were really special effects produced by its partner Weta Workshop, not real AR.

Magic Leap/Weta Workshop video

If you had an AI that was smart enough… perhaps Magic Leap could solve its demo problem. But as I watched, it struck me that AI and AR may be neither euphoric nor evil, just mundane. In the demo, up popped a Gmail window in a flat representation of immersive 3-D, cycling through subject lines like “Confirm your appointment w/ Dr. Kaplan” and “Stacy Gerald tagged you in a photo.” All the unwanted, annoying notifications of today, except immersive.

Maybe the future will be more like today than we realize.

Or maybe Donald Trump will turn out to be the “brain police” Frank Zappa warned us about in a very prescient song from 50 years ago.

“Who Are The Brain Police?” Frank Zappa and the Mothers of Invention (1966)

--


Director of Research, The Insurrection (@Insurrectionco)