
With Minsky’s Passing, the Passing of a Dream

The pioneers of “strong AI” — the idea that machines have the capacity to one day think like humans — are leaving us. First Joseph Weizenbaum, the creator of the ELIZA chatbot, in 2008; then, in 2011, John McCarthy, who is often credited with inventing the term “artificial intelligence” itself; and now Marvin Minsky, the co-creator, with McCarthy, of the MIT Artificial Intelligence Project, which would one day become that university’s renowned Computer Science and Artificial Intelligence Laboratory.

I am taking all of this badly. As the world loses them we also seem somehow to be losing the romantic mid-century dream of a universally intelligent machine, something to conjure into being for its own sake, simply to see if we humans have the ability to do it. In 2016, our focus in computing seems to have shifted from philosophy to functionality. It no longer feels fashionable to pose grand questions about the nature of human intelligence, the nature of machine intelligence; instead we design programs meant to address specific problems or to perform specific tasks. (This is important work, of course — in fact it is possible to argue that it is more important work, at least in the short term, than the fantasy of a more broadly intelligent machine; but it somehow lacks both the whimsy and the boundless sense of possibility characteristic of the work of Minsky and McCarthy, among others.) To me, Siri is not exciting: her job is only to serve. IBM’s Watson comes closer, but Watson, at least for the time being, seems limited by its creators to a very specific set of desired outcomes. Neither approaches Minsky’s dream of future machines that could be creative, even artistic — if they wanted to be. I’m not certain, in fact, that industry has much use for this goal. In a 1982 article in AI Magazine titled “Why People Think Computers Can’t,” Minsky wrote,

When computers first appeared, most of their designers intended them . . . only to do huge, mindless computations . . . Yet even then, a few pioneers . . . saw that computers might possibly go beyond arithmetic, and maybe imitate the processes that go on inside human brains.

With the passing of Minsky and his cohort, I hope we don’t lose sight of their original, ambitious vision: to test the boundaries of artificial intelligence, and to prove that the line between the human brain and the inner workings of a machine might not be so distinct.

The Unseen World, to be published in July, contains a Minsky-like figure and questions the future of artificial intelligence

I should reveal myself now: I’m a novelist, not a computer scientist. I am, therefore, embarrassingly unqualified to weigh in here — as I write this I am picturing actual computer scientists reading it and scoffing, and then going back to work on the very programs and systems I’m attempting to discuss. It would be very easy to convince myself that I have no right to be part of the conversation. But these days I have a vested interest in it: I’ve spent the past five years or so researching and writing a novel called The Unseen World, its title borrowed from a philosophical treatise by the physicist and mathematician A.S. Eddington.

The novel, which I’ve now completed, centers on a Minsky-like figure and his daughter, a child prodigy, and is set primarily in Boston in the 1980s. A third central figure in the novel is the program on which the protagonists are at work: ELIXIR, which is meant to acquire intelligence in a human-like way. As I was writing, I became both very familiar with and deeply attached to the ideas of those mid-century AI pioneers, to their work and to the work of those who came even before them. I researched this novel extensively, relying heavily on the feedback and help of friends, or friends-of-friends, who work in tech or in computer science labs at universities. When I had finished, I felt as invested in ELIXIR as I did in the novel’s human protagonists. And the work made me both excited for the future and saddened that the goals of those pioneers seem, recently, to have been put aside for other, perhaps more easily monetized, ambitions.

Yes, we still have Douglas Hofstadter, whose philosophical inquiries into the nature of machine and human consciousness have profoundly influenced both computer science and pop culture, and continue to do so. Doug Lenat and Mary Shepherd have slowly but surely been building CYC, “a computer with common sense,” since 1984. But these thinkers exist, now, outside the mainstream. They are the outliers, considered by some to be monomaniacal or outlandish in their views, fixated on some Jetsons-ish vision of the future that seems, if such a combination is possible, both wildly ambitious and somehow kitschy. (Many, too, dismiss all anthropomorphic visions of machines out of hand.) Ray Kurzweil is perhaps the thinker who has most visibly crossed over into the mainstream, but he has also managed to alienate a number of other computer scientists, who find some of his lines of inquiry a bit too New Age for their liking.

We are now missing, it seems to me, a central figure in computing, a Virgil to offer us inspiration and guidance, to usher us into the next era of human and machine existence with wisdom and grace and a concern for the greater good. Part of this absence is tied up in a lack of government funding for “pure” scientific research — the type that furthers a quest for knowledge rather than financial gain. Part of it is perhaps that as we have grown more used to technology in our daily lives, we have stopped remembering to marvel at it, to wonder what the final step might be in a technological chain that includes the world-expanding devices we now rarely detach from our palms.

Marvin Minsky, in a 2015 video interview with the MIT Technology Review, commented on this phenomenon.

My impression is that the last 10 years has shown very little growth in artificial intelligence. We’ve been mostly attempting to improve systems that aren’t very good, and haven’t improved much in two decades. The 1950s and ’60s were wonderful. Something new every week. [We] have to get rid of the big companies and go back to giving support to individuals who have new ideas, because the attempt to commercialize the existing things [hasn’t] worked very well. Big companies and bad ideas don’t mix very well.

Although in general I am skeptical of nostalgia and optimistic about the future, in this case I think Minsky might be on to something.