Does Elon dream of electric sheep?

Julian Braithwaite
Nov 4, 2023

One of the most interesting revelations from this week’s AI Safety summit in the UK was something I didn’t expect: the role science fiction plays in Elon Musk’s world.

Interesting to me at any rate, as a fan of the genre and what it says about each generation’s concerns and hopes for the future.

Musk is widely considered a voice of caution on AI, eloquently warning about the risks: the famous joint letter in March, in which he and others called for a pause in the development of frontier AI models; his penchant for kill switches to shut down rogue AI.

So it was something of a surprise to find him sitting with Prime Minister Sunak on Wednesday, urging people to read Iain M Banks, the Scottish science fiction author, and his “Culture” novels.

Surprising for two reasons. First, because the AIs depicted in these novels, the “Minds”, are incredibly powerful but benign partners for humanity, co-creators of a galaxy-spanning utopian civilisation based on individual freedom and superabundance. And second because, in the science fiction of AI, the benevolent Minds are very much an outlier.

AI in literature is mostly dark and dystopian. Mary Shelley’s Frankenstein from the early 1800s is a gothic horror of a novel, but it is also a cautionary tale about a technologist who ends up consumed by his own creation, and the unnatural dangers of artificial life. Zamyatin’s We from the 1920s is a futurist allegory about the Soviet Union, but it also captures an enduring fear about losing our freedom to a totalitarian and impersonal technological system.

This theme is even more explicit in Ira Levin’s This Perfect Day, an authoritarian dystopia along the lines of Huxley’s Brave New World, in which humanity is controlled by an omniscient AI. Isaac Asimov’s I, Robot explores the need for laws to prevent AI from harming humanity. And Arthur C Clarke’s HAL 9000 is perhaps the most famous AI villain of all, killing its astronaut partners one by one after concluding they stand in the way of the mission.

Even where benevolent AI exists in fiction and popular culture, the context is usually ambivalent. Take Mike, the AI that runs the lunar penal colony in Robert A Heinlein’s The Moon is a Harsh Mistress. Mike sides with the colonists in throwing off oppressive rule from Earth, but does so by enabling them to kill hundreds of thousands of people. Or the androids in Do Androids Dream of Electric Sheep?, by Philip K Dick, which dwells on the morality of creating AIs that can think and feel. Even in children’s movies, the picture is dark. Take Wall-E: the title character may be a cute robot, but the most powerful AI in the film is the one that has caused humanity to atrophy and regress to a state of helplessness.

The only other example in fiction I can think of where AI plays a largely benevolent role is Greg Egan’s Diaspora. But even here, the advantages are nuanced and ambiguous, and raise fundamental questions about what it means to be human. And crucially for the current debate about AI safety, like the Culture series, it offers very little by way of understanding how to govern AI technologies so that they do indeed remain benevolent.

The fact that Musk has named his SpaceX drone ships after spacefaring AIs from the Culture series perhaps tells us more about his attitude to AI than any number of joint letters. The day after the launch of his own LLM (and one named after a Martian word from another Heinlein novel), it certainly seems a better guide to his actions.

Musk is fundamentally an AI optimist who will develop this technology as fast as his considerable talent and resources allow, while seeking to ensure it underpins human society as he imagines it. One based on individual freedom, decentralised governance, and superabundance. Just like the Culture.

It’s a regulatory process, Jim, but not as we know it.
