Is Elon Musk right about Steven Pinker and AI?

Michael Barnard
The Future is Electric
6 min read · Mar 11, 2018

Musk appears to be right, though his comments have been misconstrued. Pinker was on a book tour and may have been understating or misstating his own understanding to build awareness for his book.

What did Pinker say?

“Some threats strike me as the 21st-century version of the Y2K bug,” he says, referring to the mistaken panic that a date-handling flaw would cause computers around the world to go haywire when the year 2000 arrived.

“This includes the possibility that we will be annihilated by artificial intelligence, whether as direct targets of their will to power or as collateral damage of their single-mindedly pursuing some goal we give them,” writes Pinker in The Globe and Mail.

Is that it? No, Pinker went on in an interview with Wired.

Let’s take these statements apart a bit, but start with Pinker’s very relevant background. His actual area of deep expertise is cognitive psychology. He undoubtedly has a much richer understanding of how humans think than the vast majority of people.

Humans explicitly exhibit general intelligence, if you limit the choices to general or functional. However, general AI doesn’t have to follow human patterns of thought, ideation and memory. Modeling machines on human cognition was certainly attempted early and often, until researchers really understood how complex the human mind is.

The potential for catastrophic danger from AI comes from general AI, not functional AI. Functional AI is merely immensely disruptive to society, displacing human jobs just as automation has always done since the invention of the automated loom. Functional AI will simply do it faster and across a much broader range of industries.

So Pinker is undoubtedly deeply knowledgeable about cognition under one model. His publicly accessible writing on the amazing progress of humanity over the past decades and centuries makes it clear that he has a broad set of interests and an ability to research beyond the average person’s as well. Given the intersection of cognitive psychology and AI research over the past decades, it wouldn’t surprise me if Pinker had much deeper knowledge of this space.

But Pinker’s public intellectual persona is highly optimistic. He sees lower risks and easier solutions than others do, an outlook I share. I’m a pragmatic, data-centric techno-optimist concerned with the greatest good for the greatest number, which puts me pretty clearly in Pinker’s camp in terms of basic mindset.

On the vast majority of fronts, Musk is a techno-optimist as well. He wouldn’t think humans could live on Mars if he was easily dissuaded by technical challenges. He wouldn’t have built the most disruptive car company in the world if he wasn’t optimistic about electrification as a wedge against global warming. He wouldn’t have tackled landing first stages on automated barges named after fictional AIs far out in the Atlantic if he wasn’t optimistic about technology.

But Musk is very concerned about general AI. He founded a functional AI company in Tesla. He co-founded OpenAI, an AI-specific company focused on safe general AI. He co-founded an AI safety consortium.

Part of this reasoning comes from a subset of Silicon Valley thinking that does purely mathematical risk assessment against potential threats. Its adherents are rational until their brains fall out at the extremes, which doesn’t describe Musk.

His premise is not that a general AI wiping out humanity is likely, but that the impact of a general AI doing that is very high. In risk management you multiply the likelihood by the size of impact to get a magnitude of risk, and you quantify and qualify likelihood and impact carefully.
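As a minimal sketch of that arithmetic (the threat names and numbers below are invented purely for illustration, not real estimates), a few lines of Python show why a deeply unlikely but enormous-impact threat can outrank an everyday one:

```python
# Minimal sketch of likelihood-times-impact risk scoring.
# All numbers are invented for illustration, not real estimates.

def risk_magnitude(likelihood: float, impact: float) -> float:
    """Magnitude of a risk: likelihood multiplied by size of impact."""
    return likelihood * impact

# A deeply unlikely event with an enormous impact can still outrank
# a common event with a modest impact.
threats = {
    "rogue general AI": risk_magnitude(likelihood=0.0001, impact=10_000_000),
    "routine data breach": risk_magnitude(likelihood=0.5, impact=1_000),
}

for name, magnitude in sorted(threats.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {magnitude:,.0f}")  # the rogue general AI ranks first
```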

This leads to interesting perspectives. Stuff that is deeply unlikely, or only likely in the distant future, but which has a very high impact gets inordinate attention from this set of thinkers. There are people in that camp who literally say “screw climate change, asteroid impact is the only thing to pay attention to”.

But that doesn’t describe Musk. He’s managing down climate change, getting our eggs out of the basket of Earth alone and building lots of solar.

But he’s also deep into functional and general AI. And Musk doesn’t do things by half. I guarantee he knows more about the reality of both functional and general AI than Pinker does, and that he doesn’t spend much time reading Pinker’s books. They just aren’t interesting enough for him, I suspect.

The Globe and Mail quote from Pinker is more of a functional AI analogy. But it’s not actually a debating premise in a direct discussion with Musk. And Pinker is a popular communicator. He’s not trying to be deeply precise, but to create an evocative comparison that people can relate to.

Pinker’s Wired quote isn’t as intelligent, and it’s much harder to wave away than the comments from his book. He’s pooh-poohing Musk’s concerns about general AI by attacking Musk’s efforts around functional AI.

That exposes a greater gap in Pinker’s thinking than his background would suggest. It implies that he doesn’t understand the difference between the two.

And that’s what Musk’s response was about. It’s worth breaking down, because there are multiple parts to it, and he’s been misquoted in headlines as well.

Wow, if even Pinker doesn’t understand the difference between functional/narrow AI (eg. car) and general AI,

This is a sign of deep respect. “Even Pinker”: Musk knows he’s a deeply educated guy with expertise in cognitive psychology. Musk isn’t diminishing Pinker, he’s respecting him and surprised at Pinker’s apparent blinders around general vs functional AI (keeping in mind these were brief public statements on a book tour, not academic writing or a discussion with Musk).

when the latter *literally* has a million times more compute power and an open-ended utility function,

General AI is deeply more sophisticated and unknowable than functional AI. I’ve been involved with proposals where the requirements didn’t allow us to bid key technologies, because neural nets wouldn’t allow traceability of decision making, even though they were part of our solution thinking. It came up recently on another proposal as well: the client wanted to ensure that the logic and rules behind a component of the solution we were bringing to bear were parsable. Both of those were functional AI, or simply analytics, examples. General AI is where things get weird.
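To make that traceability gap concrete, here’s a minimal sketch with an invented lending scenario and made-up weights (nothing below is from a real system): the rule-based path can cite the exact rule behind its answer, while the tiny learned scorer produces the same kind of yes/no from opaque arithmetic.

```python
import math

# Minimal sketch of the traceability gap; the scenario, rules and
# weights are all invented for illustration.

def rule_based_decision(income: float, debt: float) -> tuple[bool, str]:
    """A parsable decision: every outcome cites the rule that produced it."""
    if income <= 0:
        return False, "rejected: no income reported"
    if debt / income > 0.4:
        return False, "rejected: debt-to-income ratio above 0.4"
    return True, "approved: debt-to-income ratio within policy"

def tiny_net_decision(income: float, debt: float) -> bool:
    """The same kind of yes/no from learned weights: there is no
    human-readable rule to cite, only arithmetic over the weights."""
    w_ratio, w_income, bias = -2.1, 0.9, -5.0  # hypothetical trained weights
    score = w_ratio * (debt / income) + w_income * math.log1p(income) + bias
    return 1.0 / (1.0 + math.exp(-score)) > 0.5

# The two can disagree, and only one of them can explain itself.
print(rule_based_decision(income=60_000, debt=30_000))
print(tiny_net_decision(income=60_000, debt=30_000))
```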

humanity is in deep trouble

This pull quote gets a lot of attention, but note that it follows the two preceding clauses. The first is an acknowledgment that one of the most sophisticated public thinkers, with expertise that overlaps this space, appears to have a cognitive gap. The second is the nature of the difference, and the implication that if Pinker can’t get it, then almost no one will be able to. That’s why humanity would be in deep trouble, not necessarily because of AI.

Musk co-founded the general AI safety consortium and OpenAI to explicitly deal with the ethics, technologies and safeguards that must necessarily be built into general AIs to prevent the downsides. He wants us to be able to leverage its advantages, but he’s going to put the right checks and balances in place.

He’s not saying that general AI will put humanity in deep trouble; he’s saying that failing to put the right bounds on any general AI we create will leave us in deep trouble. And he’s saying that the lack of comprehension of the need for those bounds, even by deep thinkers like Pinker, is troublesome.


Michael Barnard
The Future is Electric

Climate futurist and advisor. Founder TFIE. Advisor FLIMAX. Podcast Redefining Energy - Tech.