Complex Interdependencies at City-Scale

Simulation in the City

Imagine a city with no economic, cultural, or social pulse. No infrastructure, no employment, no civic essentials, and no population: a ghost city. Now imagine enough vacant urban floor space to completely cover Madrid. That’s over 600 million square meters: in a single country. And that country is not alone.

Detroit. Kilamba. Valdeluz. Yebes. Stockton. Pripyat. All, to name but a few, cities with varying degrees of ghostliness: and all subject to a failure at a single layer within the complex system of systems on which their success depended. A single, often foreseeable, failure in the planning, infrastructure, socio-economic, environmental, political, industrial, or contextual layer within each of these cities created a cascading effect across every other layer that brought each city to its knees and, in some cases, to abandonment.

Just recently, a proposal to build a multi-zoned urban environment — modelled on a modest American city (which I can only presume is the Springfield of Simpsons fame!) — was green-lit. An environment containing tall office buildings, narrow alleys, parks, churches, an interstate highway, retail outlets, petrol stations, utility services (including telecommunications), a school, an airport, and a range of different residential units. An environment with urban, rural, and suburban zones spanning some 15 square miles and sized for a population of 35,000 people.

Only no one will live there.

With a completion target of 2020, this is of course a laboratory environment[1] within which future city constructs — in transportation, energy, logistics, security, and so forth — may be tested, evaluated, and tweaked. And all, as the company behind it says, ‘without the complication and safety issues associated with residents’. But don’t worry: ‘If the human experience is a key part of the test, we can add people at any time’.

Well, no. While you may be able to bus people in to play the part of citizens, residents, commuters, and tourists in order to introduce a degree of behavioural uncertainty, they have no vested interest in the space itself. They do not live there; their children are not schooled there; their health does not depend on its environment; their economic livelihood is not derived from its industries, and their domestic lives are not bound up in its success or failure. Yes, they may step out in front of the odd self-driving car but, in the final analysis, they do not care.

When we consider a city as a system-of-systems, a layered environment pervaded by complex interdependencies and cascading effects, it is clear that removing vested humans from the space creates an imperfect representation of the system as a whole. Without those little wet bags of emotions the system is not complete: not only is there no human behavioural feedback, and the resulting cascade of effects, but neither is there the spectrum of ‘normal’ human behaviours that influences other layers within the system. For example, a simulation of Chernobyl safety systems that permits and responds to human self-determination based on experience is very different from one that assumes, as transpired under real-world conditions, blind obedience to stated process and policy in response to political and economic pressures.
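
To make that concrete, here is a deliberately simplified sketch of a city as a layered dependency graph. The layer names and edges are illustrative only, not drawn from any real city, but they show how a single failure propagates through the mesh, human feedback loop included:

```python
# A toy system-of-systems: city layers as a directed graph, where a failure in
# one layer degrades every layer that depends on it, directly or indirectly.
# Layer names and dependencies are illustrative, not taken from any real city.
from collections import deque

# layer -> layers that sit downstream of it
DEPENDENTS = {
    "electricity":    ["water", "telecoms", "transport", "industry"],
    "water":          ["population", "industry"],
    "telecoms":       ["industry", "civic-services"],
    "transport":      ["industry", "population"],
    "industry":       ["economy"],
    "economy":        ["population"],
    "population":     ["economy", "civic-services"],  # the human feedback loop
    "civic-services": ["population"],
}

def cascade(initial_failure):
    """Return every layer degraded, directly or indirectly, by one failure."""
    degraded, queue = {initial_failure}, deque([initial_failure])
    while queue:
        for downstream in DEPENDENTS.get(queue.popleft(), []):
            if downstream not in degraded:
                degraded.add(downstream)
                queue.append(downstream)
    return degraded

# a single infrastructure failure touches nearly every layer, people included
print(cascade("electricity"))
```

Remove the ‘population’ node and its outbound edges and the graph, like the laboratory city, still computes: it just no longer resembles the system it claims to represent.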

There is strong observational evidence that the inverse is also true: that a representation of the system that majors on the inclusion of humans, or at least their avatars, but that excludes the other layers in the system, opens up its users to what is called the ‘uncanny valley’. It just does not feel right: almost there, but not quite. Look around the backwaters of the much vaunted ‘3D Web’ (SecondLife, OpenSimulator, etc.) and you will find plenty of digital ghost ‘cities’ that remain pixel perfect but that have been all but abandoned by those who once ventured there or, in some cases, repopulated by the weird, the charmed, and the strange who are subverting such spaces for their own ends.

And if the majority of the denizens of the public ‘3D Web’ have moved on, where are those who once embraced it in the private sector?

While the laboratory outlined previously is, to my mind, making a fundamental error in not including vested citizens in its environment, it should be excused for simply perpetuating a long-standing problem with simulations. After multiple decades of internecine fighting — a ‘religious’ war that makes most computer science discussions look like a friendly chat over coffee — the simulation domain is polarised by methodology. The net result is a set of distinct, vertical simulations, each representing a distinct layer or system under simulation, which exist without regard to each other.

Thus we find a ‘grab bag’ of models, each bound to the particular physical system it was built to study. These are models that are not designed to interoperate and that are, in the vast majority of cases, driven by static reference data that is months, if not years, old. They are models designed to answer a single set of questions, from a single point of view, rather than to contribute to a wider understanding of intersystem correlation, causality, and effect.

In a city context, if we are lucky, we find models of water supplies, of electricity lines, of transportation modes, of human behaviour, economic state, climate impacts, buildings and zoning, food and supply chains, and geographic context. These, and any other models that may exist for any given city, sit as representations of real-world vertical domains: happily chugging away without reference to each other.

This is not how a city works.

Simulation in the city is, to my mind, ripe for innovation. There is a very real need to be able to leverage existing investments in simulation, regardless of the underlying methodologies or temporal constraints of each, as components in a holistic, intra- and intersystem representation. Part of this is the requirement to be able to integrate hard (i.e. physical systems) models with soft (i.e. socio-cultural, behavioural, and influence) models. There is an equal need, given the rise of so-called smart city services and the general availability of useful data, to be able to drive such a holistic representation with near real time[2] data in order to obtain actionable insight.
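
What might that look like? Purely as a sketch, and with every model, name, and number below invented for illustration, one option is to wrap each existing vertical model, hard or soft, behind a common stepping interface so that a coordinator can advance them together, routing the outputs of one layer into the inputs of the next and seeding the whole thing from live data:

```python
# An illustrative federation of dissimilar models behind one interface. The
# models and numbers are invented; in practice existing vertical simulations
# would be wrapped behind this interface, not rewritten.
from typing import Protocol


class LayerModel(Protocol):
    name: str
    def step(self, t: int, state: dict) -> dict:
        """Advance the wrapped model one tick and return its observable outputs."""
        ...


class PowerGridModel:
    """A 'hard' physical model: supplies power against demand, up to capacity."""
    name = "electricity"
    def step(self, t, state):
        demand = state.get("population", {}).get("demand_mw", 100.0)
        return {"supplied_mw": min(demand, 120.0)}


class BehaviourModel:
    """A 'soft' socio-behavioural model: demand and sentiment react to supply."""
    name = "population"
    def step(self, t, state):
        supplied = state.get("electricity", {}).get("supplied_mw", 120.0)
        return {"demand_mw": 100.0 + 0.5 * t,
                "sentiment": 1.0 if supplied >= 100.0 else 0.4}


def run_federation(models, live_feed, steps=24):
    state = dict(live_feed)            # seed the mesh from near real time data
    for t in range(steps):
        for model in models:           # each layer sees the others' latest outputs
            state[model.name] = model.step(t, state)
    return state


print(run_federation([PowerGridModel(), BehaviourModel()], live_feed={}))
```

The interesting property is not either model on its own but the coupling: the hard and soft layers feed each other every tick, which is exactly what the siloed, static-data verticals described above never do.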

Most importantly, however, there is an underlying requirement to democratise simulation: to demystify and make more intuitive the somewhat arcane specialisations that simulation practitioners have created around their art. We need to allow domain experts to dynamically create, execute, and evaluate bespoke solutions that answer their questions when required rather than forcing them to wait for the development, testing, and release of such solutions. If a solar flare is imminent, for example, we want to know the impact of that on hospital intensive care — via electricity, water, communications, supply chain, transportation, and personnel layers — as soon as possible: not in six months’ time when the flare is a distant memory, apart from those pesky deaths it caused.

There is, in the United Kingdom alone, a lot of current attention — not to mention money — focused on the stimulation of Future, Smart, or Integrated Cities. I do not expect real innovations to arise as a result. When operating within the administrative entity of a city, politics and economics very quickly raise their heads and, in general, the result is a much-depleted version of what was originally intended. Moreover, given the same actors, what is developed and demonstrated for one city often fails to transition to another, with the result that multiple cities have multiple solutions that operate, and have benefit of whatever scale, only within the boundaries of each city.

Instead of cities, therefore, I see the potential for real innovation arising from those who operate or are responsible for city-scale environments. Consider, for example, the United Kingdom rail network, Canary Wharf, a global multinational financial group, or a major international retailer. Such entities embrace — at a minimum — a collection of buildings, infrastructure, supply chains, humans, operational systems, and transportation; with a wealth of connected interdependency across all layers.

They are, in short, just like a city: only geographically dispersed.

Indeed, it is arguable that city-scale environments like this — spanning jurisdictional and geographic boundaries — are more complex than the average geospatially bounded city. Different localisations, applied across time zones in many cases, create non-uniform complexity that must be understood, managed, and interacted with if the operation of the environment is to be successful.

It is here that the opportunity arises. City-scale environments are, normally, commercial entities driven by commercial imperatives. In the case of a global financial institution, for example, the commercial imperative to make money relies on a relatively simple set of functional applications that, in turn, depend on a complex, interdependent causal mesh that follows the sun across national borders, regulatory jurisdictions, cultures, and environmental contexts. A seemingly minor event at any point in any layer of that mesh can — and will — trigger cascading effects that have the potential to shut down or degrade any given functional application. And, of course, such events must be detected, analysed, and handled in near real time as the risk, threat, and opportunity context of the operation changes.

It is these considerations that raise the bar for simulation in city-scale environments.

At city-scale, simulation must evolve away from current siloed niches and embrace integration both with other, dissimilar models and with near real time stimuli. Simulation must move away from the use of static data, other than for historical references, and towards an event-driven mode of operation. Beyond that, simulation must become both accessible to and intuitive enough for layperson use.

This change means rethinking simulation and its role in city-scale contexts. Instead of being a safe place in which to test theories and strategies, simulation becomes a form of digital imagination: testing the potential outcomes and ramifications of intervention options arising from the taking of a near real time pulse across all layers in a city-scale environment.

In this approach, simulation no longer ‘just’ tells you where the vulnerabilities are in any given vertical but, instead, is part of a tool chain that provides actionable insight to city-scale operations. Instead of providing insight into cyber assets and the impact of potential risks, as just one example, this new model actively and dynamically calculates the risk and threat impact of cyber effects on functional applications and, through simulation, then explores the potential ramifications of any of a relevant set of interventions that might be made. And all this is recalculated and reimagined every time there is a change in any real-world layer.
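
Sketched in code, and with the applications, layers, scores, and interventions below entirely invented for illustration, each incoming event would re-trigger both an impact calculation across the functional applications and a ranked, simulated comparison of the candidate interventions:

```python
# An illustrative event-driven loop: every real-world change event re-runs an
# impact calculation and a simulated ranking of interventions. All names and
# scores are invented stand-ins for real dependency data and simulation runs.

# which layers each functional application depends on (hypothetical)
APP_DEPENDENCIES = {
    "payments":       {"electricity", "telecoms", "cyber"},
    "trading":        {"electricity", "telecoms", "cyber", "personnel"},
    "branch-network": {"electricity", "transport", "personnel"},
}

def impact(event_layer):
    """Crude impact score: 1.0 if an application depends on the affected layer."""
    return {app: 1.0 if event_layer in deps else 0.0
            for app, deps in APP_DEPENDENCIES.items()}

def recommend(event_layer, interventions):
    """Simulate each candidate intervention and rank by the residual impact left."""
    residual = {name: simulate(event_layer) for name, simulate in interventions.items()}
    return sorted(residual.items(), key=lambda kv: kv[1])

def on_event(event, interventions):
    """Called for every change in any real-world layer."""
    return {"impact": impact(event["layer"]),
            "recommended": recommend(event["layer"], interventions)[0]}

# stand-ins for full simulation runs of each intervention option
interventions = {
    "isolate-segment": lambda layer: 0.2,
    "failover-to-dr":  lambda layer: 0.1,
    "do-nothing":      lambda layer: 0.9,
}
print(on_event({"layer": "cyber", "detail": "malware on a trading gateway"}, interventions))
```

The arithmetic here is trivial by design; the point is the shape of the loop: sense, calculate, simulate, recommend, and then do it all again on the very next event.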

With our Resilio spin-out, we have been exploring the viability of this approach and, while I cannot tell you that it will definitely help you to avoid the next Detroit, Yebes, or Pripyat, I can already tell you that it gets us a lot closer to being able to sense, model, optimise, and take action, in near real time, across the entire scope of complex interdependencies, both hard and soft, at city-scale.

[1] The Center for Innovation, Testing, and Evaluation (CITE): Pegasus Global Holdings

[2] There is always some network latency when acquiring data.
