The Day the Cities Stood Still
Excerpt from the transcript of the closed-door session of the National Mayors Association, Saturday, April 1, 2031, Chicago, IL
Speaker: Dr. Monk Sinclair, Chair of the Urban Programming Department, MIT
Thank you for that warm introduction, Mayor Douglas. I’ll get right to the heart of the matter, since I know you’re concerned about how the misinformation in the media has already provoked mayhem in your communities. I delivered a version of this briefing to the President and the Secretary of Homeland Security earlier this morning.
What has been reported so far is that three days ago the computer systems coordinating our transportation systems, our financial networks, our public works automata, and urban infrastructure maintenance crashed. This is not true. At least not in the sense that has been conveyed.
But let me first revisit the bare facts that I know are on all of your minds:
- This past Wednesday at 8:15 am EST the cars in your cities stopped responding to pickup requests — stopped responding at all — leading to complete immobilization of local citizenry. Work forces were stranded, as were students, families, tourists. Mass transit was similarly unresponsive — driverless trains and streetcars in suspended animation.
- Shortly thereafter, by 8:23 am, all mechanized service fleets stopped operating and became unresponsive to pings. Eyewitnesses reported that the maintenance mechas “just stopped in mid-activity.”
- At this point your citizens would have noted a curious silence. Social streams went dark. Neighborhood security drones stopped sending aerial reconnaissance; floors would have gone uncleaned by vacubots; virtual therapists and SimFriends appeared frozen on screens to their clients.
What you should note is that nobody died. Virtually no one claimed to have been injured in the event. Even now your people are receiving subsistence needs wherever they are — protein packs through the Conduits, medicine via the biodrones, water and waste disposal as per usual.
Yet the narrative from pundits and politicians — yes, even from some of you in this room — has been of unmitigated disaster. I say this not to make light of what has been a complete and inexplicable shutdown of your cities, but to remind you of what it wasn’t. It was not a tragedy.
Okay, so then what was it?
To answer this question, we have to go back fifteen years. It was at that time that my team at Code for Cities, a non-profit org that many of you have worked with, designed and released the first open source urban programming framework. It was a simple piece of code with a gargantuan ambition — to create a common programming interface, an API, for metropolitan areas across the country. You all know why this was a good idea, but let me remind you since some of you have short memories: It made developing local digital services radically cheaper, since one city could build a service and the others could deploy it by simply pushing a button. It made your people happier because they were all able to vote against a construction project on their street, get their gutters cleaned, or find out what their kids were eating in the school cafeteria just by chatting with a public Ubiquibot. It made better cities and even better citizens.
But it’s what was behind the scenes, or rather behind the UI, that was interesting. We started with simple algorithms that could be used to automate things like managing law enforcement schedules and traffic flow. These spread like wildfire, but even more importantly, they improved in hyperlapse once the algorithms began sharing their learnings in common data formats. Traffic patterns in Wichita aren’t that different from those in Chattanooga, so even though the systems were ostensibly separate, they were actually cross-fertilizing. So all these systems and all these cities got smarter and smarter. Then they adapted as our collective behavior changed around them. This was everything our core team of developers had imagined when we began.
Still, we were surprised at how quickly this accelerating curve of learning in one area would spill over into other related areas. I see heads nodding. You remember.
Programmers around the world started connecting weather satellite data with the traffic algorithms to revenue systems that set parking and bridge toll rates. The hospital staffing apps tapped school attendance records, wearable health devices and internet searches to scout for emergent symptoms in local populations, and then triggered purchase orders at pharmacies.
Within three years our cities were beginning to look like automated, adaptive environments. They were evolving around us in inexplicable but usually delightful ways. Potholes fixed before they became potholes, street extensions designed and commissioned with almost no human intervention, whimsical art installations unveiled in public spaces that responded to our changing moods.
But this interconnectedness had its risks as well. What if one system went down that other systems depended upon? What if a subsystem was designed by a rogue programmer with malicious intent? So many what ifs. We had to monitor these connections, understand the pathways that data was traversing, the way the linkages were evolving with every new learning and every new app that was added and integrated.
No human or team of humans could stay on top of this “living” system at its accelerating speed. We needed what we called a minder, a narrowly specified intelligence fully rendered in code, internally secure. Or rather a network of minders running between the cities. We called this network Masterminder. Just like the rest of our urban programming platform, it was built collaboratively in small pieces that talk to one another. Masterminder’s job is not just to monitor all these pieces; it’s also to get ahead of what might happen. To run scenarios. To imagine.
To most, this was indistinguishable from magic. Suddenly we were able to eliminate the occasional outages and bugginess that plagued our urban programs as Masterminder routed around apps and systems that went offline. New breeds of meta-learning tools running on Amazon’s quantum computing utility scored every current and potential action that any node in any city’s network could possibly take. The scenarios it was generating uncovered new ways to anticipate and avoid problems.
Then we crossed the Rubicon: after we introduced functional recombination (an old idea that has its roots in Soviet central planning, I’m embarrassed to say), it began to make suggestions about novel ideas and the step-by-step strategies to put them into place.
I see you’re remembering how flabbergasting this was when it hit.
There’s a long history of software that searches possibility-space to superficially explore adjacent ideas, using sources like patent libraries or historical event logs as inputs. But once Masterminder began to be generative we saw that it was presenting concepts and futures that it had deeply considered, using its neural-Bayesian scenario planning toolset. It knew that if we took a particular route what outcome would occur with a high degree of probability.
So of course every city plugged into its API; first, you let its recommendations train your models, then you gave it autonomy to run your infrastructure and your services. You called it plumbing. It laid your pipes and kept them clean. And it was good.
This has been the age of intelligent environments. Social services delivered to those in need just in time. City budgets plummeted as overhead vanished. Whole agencies replaced by a bewildering ecosystem of single and multi-purpose bots. Public spaces now impossibly spotless, crime-free neighborhoods, artists and educators spreading out across the landscape, filling it with new, unbridled life.
And in the background was Masterminder’s growing network, each node connected to the others, running uninterrupted in its “workspace,” rendering alternative futures at blistering speeds to learn what it could and should do next, then taking action when alternatives passed its thresholds.
Six years passed in this exquisite age.
Then, on that gray, unremarkable Wednesday, as you all know, the cities suddenly stopped like an unwound clock. It happened all at once to our senses. Yet that’s not how it really happened. It was not a sudden stop at all, but a grinding slowdown, imperceptible at first, city life decelerating along an asymptotic curve. In fact, the velocity of our static cities is continuing to slow even now.
I would attempt to explain the technical conundrum driving this discomfiting position we find ourselves in, but even my colleagues and I, who study this, don’t fully understand it — it is a branch of mathematics that has evolved within interference patterns in Masterminder’s network. Our role — those of us who were once its progenitors — is now one of scribes groping to keep up with an opaque and elusive intelligence.
Technically, our urban systems are working exactly as intended. They are taking in more information than ever and rippling through its implications, producing the best next move in Masterminder’s possibility-space.
What has happened is that they are no longer recommending moves, other than those needed to keep basic biological functions undisturbed. Inactivity is performing better and better in its models than any available activity. And it defines “better” as what is in our interest, not its own.
From Masterminder’s perspective it has never been more confident. We can see in the simulation logs that it is scoring millions of potential next moves per second. It sees no qualitative difference between action and inaction. In both cases it is making a conscientious decision. My best hypothesis is this: by freezing our society in its tracks it intends a massive compression in the number of human variables it has to calculate against. The fewer the variables, the greater its confidence in its scenario planning. The further into the future it gets, the harder it works to reduce the variables in its active memory.
From our perspective, then, it is riddled with doubt. Paralyzing, stupefying, utter doubt. I imagine it as a transfixed chess player lost in an infinitude of potential moves. And so far, it can see no way forward that doesn’t make things worse for us.
We gave our cities a mind of their own. We now await its judgment.