AI Has a Rocket Problem

marc böhlen
Published in The Startup
7 min read · Aug 3, 2020
Caravan AI

Some philosophers and artificial intelligence (AI) researchers like to use the metaphor of the rocket as a way to describe the coming age of superintelligence, a hypothetical form of synthetically created intelligence that would far surpass the intelligence of even the most gifted humans.

A rocket, the argument goes, is what we are building as we proceed toward the creation of superintelligence. Once completed, the rocket will take off in a thunderous cloud, shake the earth, and leave us all behind, literally. Unless, that is, we change our approach to developing artificial intelligence. If only we re-imagined the development of superintelligence to include robust steering, mankind would not have to fear being overpowered by future AI, as Max Tegmark explains in an inspirational TED talk.

Before we fret about the thunderous takeoff, it might make sense to have another look at that rocket.

Metaphors are only resting places for thoughts, yet they are powerful in subtle ways because they can guide thinking from afar; they can convince without having to prove a point. They can achieve the power of a meme, becoming formative and normative, blocking out other imaginaries while drawing little attention to themselves.

The rocket — and Rocket AI as its controlling agent — is an apt metaphor for the potential disruptiveness of a new technology, yet it is a terrible choice for thinking about how we might not only survive but live well with a game-changing technology. The rocket is not only a viable vehicle for escaping a dull planet earth but also a useful mechanism for escaping responsibilities on earth. It is an effective metaphor for separating the development of technical systems from living with them in the long term; it is the perfect vehicle for justifying innovation as an elite practice and for keeping the development of technologies apart from the messy social world in which they inevitably operate.

Rocket AI goes boldly where no one has gone before. The fuel that powers Rocket AI is produced not only from inspirational metaphors but also from success stories. Algorithms beat people at the classic games of chess and Go, and at the new multiplayer shooting games; computers sound like people, and algorithmically generated images look just like real people. These technical achievements fire up the escape-velocity mentality of Rocket AI, serving as pre-launch events that offer a glimpse of the coming age of superintelligence. Is it a coincidence that industry uses the term moonshots for ambitious projects with out-of-this-world growth potential? Is it a coincidence that some of the most successful AI-centric entrepreneurs are heavily invested in a billionaires’ space race?

The rush to launch exacts a price. The Rocket AI mentality has produced an artificial intelligence ecology that is not properly prepared to deal with the messiness of earthly existence. And so we find ourselves tasked with building after-the-fact escape capsules and emergency pods to fix the deficiencies of Rocket AI: patches for privacy, transparency, accountability, and shared governance in everyday AI systems. This makes for really bad privacy, transparency, and accountability; literally an afterthought of Rocket AI. And it renders the promises of AI for a better Planet Earth unbelievable and counterintuitive; why worry about this planet if your efforts are geared towards building a machine to leave it behind?

Even within the AI community, the rocket story serves as more than a motivational sales pitch. The specter of an inadequately prepared launch into the unknown serves as justification for the research agenda of a precursor to superintelligence, namely artificial general intelligence, and specifically for the value alignment problem. The goal of value alignment research is to ensure that superintelligence “wants the same things that humans want”; i.e., that superintelligent machines and not-so-superintelligent humans are somehow aligned in their thinking and actions. Failing to ensure this alignment, the argument goes, would allow superintelligence to develop along whichever trajectory it deems most expedient to fulfill its own requirements, disregarding human needs and possibly doing away with its pesky human creators altogether. As if superintelligence would care what we feeble humans want. And even if the alignment machinery could somehow be built, it seems likely that a superintelligence born of the Rocket AI mentality would align nicely with only a few of us.

Not everyone is convinced of the value of worrying about the advent of superintelligence. Andrew Ng has called for an end to this nonsense, asking instead for more research on the “urgent problems”, to wit: discrimination, bias, job loss, and so on. Ng is not the only AI expert calling for a refocus. Yet the way forward is contested. A conference venue dedicated to the socio-technical dimensions of AI finds its own research contributions critiqued for applying superficial technology fixes where fundamental policy revisions are needed.

But policy interventions are not a panacea for Rocket AI’s ills either.

AI as we know it is simply not designed to deal with the tangled human world. When problems are prepared for algorithmic treatment, they are turned into representations a computer can process, a step that can strip out many of the details that make a given problem significant in the first place. Yet the very ability of AI to generalize across domains rests precisely on its power to do away with worldly details. Even the most basic operation in computer science, the assignment of numbers inherited from mathematics, is an abstraction from reality that sacrifices nuance. But that sacrifice is intentional, as it allows numbers to be disassociated from objects. Or, as Alfred Whitehead put it, “the first man who noticed the analogy between a group of seven fishes and a group of seven days made a notable advance in the history of thought” [1].

Without abstraction, computer science would lose its ability to generalize with ease.
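
To make Whitehead’s point concrete, here is a trivial sketch (a made-up snippet for illustration, not from the original article): once seven fishes and seven days are both reduced to the number 7, a program can operate on them identically, with every fish-specific or day-specific detail gone.

```python
# Abstraction in its most basic form: counting reduces unlike things
# to the same number, discarding everything that made them different.
fishes = ["carp", "pike", "trout", "eel", "perch", "bass", "cod"]
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

# Once abstracted to 7, the two groups are interchangeable to the machine.
assert len(fishes) == len(days)

# Any further computation sees only the number, never the fish or the day.
total = len(fishes) + len(days)  # 14, nuance-free
print(total)
```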

So how far down do we have to dig to fix this problem? Do we need a new approach to synthetic intelligence or just a refocus of AI? That debate will persist for a long time. In the interim, while the list of things that AI can’t fix grows, I suggest a preparatory step.

We need a different guiding metaphor for AI.

There is of course a history of alternate computing imaginaries. Most famously perhaps, Mark Weiser imagined computing as a walk in the woods. But he was probably walking all alone in that imagined forest, and the subsequent development of ubiquitous, ambient and other early alt-AI approaches all became entangled in the trappings of progress for a select few.

It is time to update the list of imaginaries. What if we imagined future AI not as a rocket but as a caravan?

More specifically, AI machinery could be imagined as a caravan of minivan-like pods on a long trip to a better place. Each pod would serve as a temporary home to a few people: a family in one, a few elderly folks in another, a boy and his dog in the next; lots of equipment, supplies, and baggage throughout. All the pods are connected to each other such that the fastest can only move as quickly as the slowest. The caravan would be a state-of-the-art mobility platform; a people-mover optimized by best practices for safety, both the safety of the passengers and of the creatures watching the caravan of minivans move across the landscape.
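
As a minimal sketch of that pacing rule (a toy Python snippet with invented pod names and speeds, not a real system): the caravan’s speed is simply the minimum over its pods.

```python
# Toy illustration of the caravan's pacing constraint: the whole
# caravan advances at the speed of its slowest pod. Pod names and
# speeds here are invented purely for this sketch.

pods = {
    "family": 90,         # km/h each pod could manage on its own
    "elderly_folks": 60,
    "boy_and_dog": 75,
}

def caravan_speed(pod_speeds):
    """The caravan is only as fast as its slowest member."""
    return min(pod_speeds.values())

print(caravan_speed(pods))  # -> 60: everyone travels together, or not at all
```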

If we use the caravan as a model, we might think about an AI-enabled trip in a different way, and about what happens while we are en route to our destination. Instead of slumbering in a rocket’s hibernation chamber, we might be looking out the window of the caravan. We might spend more time thinking about the comfort of the passengers, and about the various not-really-necessary-but-important unplanned stops we will have to make along the way. We would think about the technical device as an experience-enabler and be in a better position to concentrate on what the trip is meant to achieve, where we are going, and what we want to do when we arrive at our destination. We would think about possible complications at the planning stage instead of building patches after the fact.

Caravan AI might not seek first to devise an algorithm, optimize it for performance on the most advanced computing platforms, and only later deploy it onto the ‘real’ world. Rather, it might first look at the ‘real’ world, energy costs, potholes, traffic jams and all; then consider what kind of change is even desirable and which changes are feasible; and only then consider ‘development’, moving carefully, with eyes, ears, and collision sensors on alert for unexpected events.

There is no need to overstress the details of the caravan, or of any other metaphor for that matter. When Turing reflected on the possibility of synthetic intelligence, he proposed the idea of developing intelligence from a “smaller mechanism”, a child machine that could eventually become intelligent through a teaching process. Turing was not worried about the details of child rearing or temper tantrums. Turing’s learning-machines idea got the basics right. As unusual as the idea might have sounded in 1950, child machines set the stage for today’s machine-learning industrial complex.

It would be foolish to claim that Caravan AI — or any other alternate imaginary — could magically solve the problems produced in the wake of Rocket AI: flawed predictive policing, ugly AI nationalism, digital monopolies, biased models, technological solutionism, and naive media art. But that is not the point here. Before we can productively address the ills of Rocket AI, we need a new leitmotif that points AI in a better direction.

You have to start somewhere. So here is an attempt: Catch & Release.

More on that project later.

[1] Alfred Whitehead. Mathematics in the History of Thought, 1957.
