How do you solve a problem like problem-solving?
That is the motivating question behind efforts to develop general artificial intelligence — a quest brought vividly to life by Max Tegmark in his new book, Life 3.0. Tegmark, a Professor of Physics at MIT and co-founder of the Future of Life Institute, is better qualified than most to offer an answer. Life 3.0, though, is as much a sketch of the road ahead as a prescribed set of directions — for the simple reason that, as Tegmark shows, the challenging task of deciding these directions demands input from us all.
Tegmark’s basic notion is that artificial intelligence, here defined (with deliberate open-mindedness) as “non-biological intelligence”, might eventually take on the characteristics of the titular “Life 3.0” — a life form that is able to design its own software and hardware. This contrasts with “Life 1.0”, which constitutes basically all non-human forms of earthly life, and is characterised by an inability to design its own hardware or software. We humans, as the preeminent earthly example of “Life 2.0”, are similarly unable to design our own “hardware” beyond the occasional hip replacement, but have become remarkably adept at updating our “software” — through learning languages, playing chess and so on. A form of “Life 3.0” would be able to self-improve both software and hardware, becoming “the master of its own destiny, finally free from its evolutionary shackles.”
To be sure, Tegmark never succumbs to a fully deterministic view of AI, seldom straying far from the view that we, as humans, will chart the course of a technology which may ultimately supersede our own intelligence. But nor does Tegmark, a cosmologist, shy away from the celestial scope that developing and deploying such a potentially powerful technology is likely to involve. “This tale,” Tegmark writes, “is one of truly cosmic proportions, for it involves nothing short of the ultimate future of life in our Universe.” Crucially, though, “it’s a tale for us to write” — even if, under several of Tegmark’s scenarios, we won’t be around to read it.
Such literally lofty rhetoric — when it comes to the potential of AI, “not even the sky is the limit”, Tegmark reminds us at one point — might strike some as stratospheric only in its hyperbole. Indeed, in a recent interview with The Verge to mark the book launch, Tegmark described superhuman intelligence as “obviously a way bigger deal than climate change” since it would allow us to “solve climate change and all our other problems” (italics in original). To his credit, though, in the book itself Tegmark matches his rhetorical bark with intellectual bite, cogently justifying his belief in the transformative power of AI in all walks of life, through a journey that takes us from individual particles to interplanetary transportation.
This academic acid trip (I mean that as a compliment) serves as a useful if unorthodox antidote to the hype around AI. While some remain sceptical about the transformative power of AI, and while others, like Mark Zuckerberg, have embraced its more positive potential, the more alarmist perspective — given an outspoken voice by Elon Musk — has gained the most traction in mainstream media coverage (perhaps owing to the “if it bleeds, it leads” principle of reporting). Crucially, Tegmark resists the urge to push any particular prediction of what the impending AI revolution necessarily means for humanity. Indeed, ultimately Tegmark’s most useful contribution in Life 3.0 is to shift the conversation from predictions — over which, it is implied, we have no control — to our preferences over what outcome should occur.
This marked shift from what will happen to what should happen feels, at first, refreshingly empowering. But eating à la carte instead of prix fixe cuts, as it were, both ways. As everyone from statesmen to Spider-Man has said, with great power comes great responsibility — and taking responsibility for the outcomes which we are empowered to engineer means considering the myriad ethical dilemmas that super-intelligence introduces.
Nor is any item on Tegmark’s menu of “AI Aftermath Scenarios” entirely appetising. There are certainly some recognisable dishes, including both egalitarian and libertarian utopias, which involve trade-offs roughly analogous to contemporary policy debates over allocating scarce resources and negotiating tensions between positive and negative liberties. Generally, though, the prospective outcomes diverge markedly from our present situation. In three of the twelve outcomes, humans no longer exist, and in only three or four are we in control.
Thus while Tegmark’s insistence that “this is a tale for us to write” seems upbeat at first, it soon becomes clear that in more than half of the possible plot lines, humans will “choose” to write ourselves out of the story entirely. More precisely, Tegmark suggests that a series of bad choices on our part will create the conditions in which we fall victim either to annihilation or enslavement. (One of Tegmark’s scenarios, “self-destruction”, posits that we may destroy ourselves as a meaningful species through nuclear war or climate change before AI can help save us — an outcome you’d be forgiven for believing is the modal one at present.)
In other words, we may choose, rightly or wrongly, to build what I dub a “dAIty” — a form of artificial life with intelligence far, far in excess of our own. In the most visceral sense (strictly speaking), this artificial life form might become a physical part of ourselves (or we of it, depending where the line is drawn). At the other end of the spectrum, this super-intelligence may decide (again: perfectly sensibly, judging by our present decision-making as a species) to destroy us quite completely.
This life form merits the title ‘dAIty’ because in almost all the scenarios Tegmark lays out, it rapidly and irrevocably becomes almost infinitely more capable and insightful than humans. In one of Tegmark’s hypothetical outcomes, this passing-of-the-torch even has an almost emotional resonance, as aged and weary humans swell with pride while their descendants take ascendance.
What is striking, moreover, is the extent to which we have already created the conditions for such a divine invention. The first phase, we might say, was that of devices: the sudden appearance of billions of connected gadgets in our houses, then in our pockets, and then on (and very soon in) our bodies. This laid the foundations for omnipresence. The second innovation was data: something which has been with us for centuries, but which in the past few decades has become enormously more voluminous and varied, or “Big”, as modish nomenclature dubs it. This enabled omniscience. The most recent shift has been towards automated decision-making, which has powered victories for machine over mankind in games of chess, Jeopardy! and most recently Go. This provides the most tantalising but most challenging condition: omnipotence.
An omnipresent, omniscient and omnipotent creature, separate from and more powerful than ourselves is, in truth, just a long-winded way of saying one word: God. Atheists might allege, of course, that the gods of contemporary religions are also man-made. But in the case of the dAIty, there would be no doubt or debate: this would, unquestionably, be our own creation. What is in serious doubt, and what should be up for debate, Tegmark argues, is whether our divine invention would play by our rules.