Cyber Gravity: The Inevitable Collapse of our Technology

“Complex, interconnected systems tend to grow until they reach a critical state of collapse”

The U.S. identifies cyber security as one of its top four priorities.[1] Generally, we consider three types of threats: individuals, organized groups, and states.[2] But what if there is a fourth category of threat that we tend to be blind to, one that seems to be embedded in the complexity of modernity and as relentless as gravity itself? The physicist Per Bak called it “Self-Organized Criticality,”[3] the sociologist Charles Perrow called it “Normal Accident Theory,”[4] and the famed options trader Nassim Taleb called it the over-optimization of a system.[5] Essentially, experts from differing fields are saying that complex, interconnected systems tend to grow until they reach a critical state in which any change to the system has the potential to lead to cascading failures.[6] The study of complex, interconnected systems is growing and evolving at a rapid pace across diverse domains, acquiring different names and acronyms that mean nothing to outsiders. However, the central idea is consistent: the bigger and more complex the systems are, the harder they fall, and they all fall eventually under their own weight.

This article will explore the background of cyber security, the major literature addressing Self-Organized Criticality, why we tend to be blind to its existence, and some first steps for addressing it in future policy.

Background

When addressing cyber security threats, we should first define the term “threat,” which is commonly confused and conflated with “vulnerability.” When properly used, the term “threat” refers to the actors exhibiting the behavior and capacity to exploit a cyber vulnerability. In other words, the threat is what or who you are trying to protect against, and the vulnerability is the weakness or gap in our systems. Fortunately, threats can be characterized within a framework of threat types, actor motives, and specific threat sources.

The motives of actors, such as crime, intelligence gathering, espionage, and ideological activism, are an important piece of the framework to consider. Once again, motives help frame threats as credible or not credible to your organization. If your organization does not deal in classified intelligence, it is unlikely you will have to defend against an actor seeking to gather classified intelligence.

Finally, once the types of threats have been identified, they can be narrowed down to specific sources, including:

Insiders:

Current and former employees, contractors, and other organizational “insiders” pose a substantial threat by virtue of their knowledge of and access to their employers’ systems and their possible ability to bypass existing security measures through legitimate means.[1]

Thieves:

From simple e-mail crimes to serious crimes like hacking, phishing, vishing, source code theft, cyber stalking, Internet time theft, web jacking, and cross-site scripting, the internet has acted as an alternate avenue for criminals to act with relative obscurity. Cyber criminals are targeting social and professional networks and directing efforts at vulnerabilities in mobile platforms like smartphones and tablets.[2]

Terrorist groups:

Evidence suggests that technology is increasingly seen as a potential tool for terrorist organizations. This is leading to the emergence of a new threat in the form of “cyber terrorists,” who attack technological infrastructures such as the Internet in order to help further their cause.[3]

Political activists:

The internet has created a base for an unparalleled anti-war/pro-peace and social justice movement. Targeted organizations range from private banks to the FBI.[4]

Hostile states:

Cyber warfare is part of “asymmetric” warfare, in which the U.S. is vulnerable due to its greater reliance on high-tech, networked systems.[5]

Fortunately, this framework for assessing threats is not novel, and it generally reflects the same attributes as in the physical world. In other words, the actors, motives, and specific sources of threats we have been dealing with for centuries are the same in the cyber world.

However, there is a vast difference between defending the cyber realm from threats and defending the physical realm. It is very easy for us to visualize physical threats; threats in the virtual cyber realm, however, require a level of understanding and expertise that is not common. Therefore, cyber threats may include novel threats we have not yet acknowledged and do not yet understand.

Self-organized Criticality: The Sand Pile Theory

Physicists Per Bak, Chao Tang, and Kurt Wiesenfeld published the results of their sand pile simulation, “Self-organized criticality: An explanation of the 1/f noise,” in 1987.[6] The authors used a sand pile as a metaphor for complex, interconnected systems. In a sand pile, each grain of sand is insignificant by itself; however, its interconnectedness to the pile as a whole is crucial: move one grain and you could start an avalanche. The authors observed that as more sand was piled on, the grains would self-organize, settling where they fit best into the overall system. The grains would continue to self-organize until they reached a critical state, and then suddenly the failure of one grain to carry its weight would cause an avalanche. The significance of the study was twofold: (1) the avalanches could not be predicted in timing or size, and (2) the avalanches were inevitable as the pile grew.[7]
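
To make the mechanics concrete, here is a minimal sketch of the sand pile dynamic in Python. It is illustrative only: the grid size and number of drops are arbitrary choices for the demo, and the four-grain toppling rule follows the common textbook form of the model rather than anything quoted from the 1987 paper. The behavior it produces, however, is the one the authors describe: long stretches of quiet punctuated by avalanches of unpredictable size.

```python
# A toy version of the Bak-Tang-Wiesenfeld sand pile (illustrative sketch;
# grid size and drop count are assumptions made for this demo).
import random

SIZE = 20        # side of the square grid (assumed)
THRESHOLD = 4    # a site topples once it holds this many grains

grid = [[0] * SIZE for _ in range(SIZE)]

def drop_grain():
    """Drop one grain at a random site; return the avalanche size
    (the number of topplings that single grain triggers)."""
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[x][y] += 1
    unstable = [(x, y)]
    topplings = 0
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < THRESHOLD:
            continue
        grid[i][j] -= THRESHOLD
        topplings += 1
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < SIZE and 0 <= nj < SIZE:   # grains at the edge fall off
                grid[ni][nj] += 1
                unstable.append((ni, nj))
        if grid[i][j] >= THRESHOLD:                 # still overloaded? topple again
            unstable.append((i, j))
    return topplings

# Pile grains on one at a time. Most drops change nothing; occasionally a
# single grain sets off a cascade that sweeps across much of the grid.
avalanches = [drop_grain() for _ in range(50_000)]
print("largest avalanche:", max(avalanches), "topplings on a", SIZE * SIZE, "site grid")
```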

In 2006, Jingwei Wu and Richard Holt published “Seeking empirical evidence for self-organized criticality in open source software evolution,” using the sand pile model to better understand the evolution of open source software development.[8] They liken a change request to a grain of sand dropped onto the larger pile, and the eventual change to the software to a landslide. In the end, the authors conclude that software evolution does indeed follow SOC dynamics.[9]

But the term “evolution” seems to be the key to what SOC is really showing us. SOC can help us understand the growth and collapse of modern financial systems, wildland fires, power outages, and oil spills. Per Bak himself used it to describe Darwin’s evolutionary theory of life.[10]

But if self-organized criticality has been evident since Darwin’s evolutionary theory of life, why has it suddenly become a focus of interest in the last 30 years? Because, until recently, highly connected complex systems existed only in nature. Now, however, we are creating highly complex, interconnected systems of our own in the cyber realm.

A Convergence of Worlds

Options trader and author Nassim Taleb used the phrase “the over-optimization of a system” to describe why the modern world experiences so many large catastrophes.[11] To help us understand our own evolution, Taleb describes two metaphorical worlds: Mediocristan and Extremistan.[12] Mediocristan is the world we are accustomed to, where normal, expected things happen and whose probabilities are easy to compute and understand, such as weather trends over the last 200 years or the average height of humans.[13] Extremistan is a world of apparently impossible and rare events that are increasingly hard to predict,[14] such as the financial collapse of 2008 or the rapid rise and fall of Yahoo. In other words, Mediocristan is a world of independent, predictable systems. Extremistan is a world of highly interconnected, complex systems.

Taleb’s simple analogy helps us understand why our accustomed tools for studying and predicting a world of independent systems may not work as we continue to create complex, interconnected systems.

Building Extremistan

Charles Perrow published “Normal Accidents” in 1984, in which he describes the “unanticipated interaction of multiple failures”[15] in complex and interconnected systems. Perrow states that we spend our lives reacting to and fixing normal accidents.[16] Each response creates an improvement or optimization in the system, which in turn increases the complexity of rules, the connectivity of each subsystem, and the reliance on each component. As in the sand pile model, we continue to slowly pile more metaphorical grains of sand onto our complex systems until a small landslide occurs. Then we shore up the landslide, increasing the pressure on all the grains of sand in the system, and continue to pile more sand on until we reach criticality.[17] The sand pile has become so shored up, so hardened, so optimized, so interconnected, that the failure of any part can cause an avalanche. Is this the Extremistan Taleb spoke of: an optimized, interconnected, tempered world punctuated by unpredictable, inevitable avalanches? If it is, we may be blind to it.

The Unpredictable Problem

When we think of failing systems, we tend to measure the likelihood of failure with probability. Unfortunately, when we think of probability we tend to revert to what we learned in high school or college: the bell curve and normal distributions.

The bell curve and normal distributions are based on Pascal’s Triangle, named after Blaise Pascal, a 17th-century mathematician.[18] Pascal’s solution for determining the probability of events is elegant and reliable in the proper context. But the context is the key: Pascal’s solution for probability was intended for games, the simulated and bounded scenarios created by men. Pascal was not trying to solve probability in the real world. Fortunately for us, Pascal’s model of probability is excellently adapted to Mediocristan and has served us well for 350 years.[19] Unfortunately, it fails in Extremistan.

The foundation of Pascal’s approach to games of chance is to map out all of the possible outcomes so you know how many variables you have. For example, the roll of a six-sided die has six possible outcomes: 1, 2, 3, 4, 5, or 6. From here you can calculate the odds of rolling a 3: 1 in 6, or about 16.7 percent. From this foundation you can add more complex conditional events, such as rolling a 3 after rolling a 4, and so on.
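
As a concrete version of that calculation, the short sketch below simply restates the die example above in code: every outcome is enumerated up front, so the probabilities follow directly.

```python
# A tiny worked version of the bounded-game calculation; the die and the
# events are just the example from the text.
from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]              # every possibility is known up front

# Probability of rolling a 3: favorable outcomes over total outcomes.
p_three = Fraction(outcomes.count(3), len(outcomes))
print(p_three, "=", float(p_three))        # 1/6 = 0.1666...

# A compound event on two independent rolls: a 4 first, then a 3.
p_four = Fraction(outcomes.count(4), len(outcomes))
print(p_four * p_three)                    # independent rolls multiply: 1/36
```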

But to apply this model to the real world is a fallacy because of the first step: mapping out all of the possibilities. In the real world, we don’t know all of the possibilities, because there may be events we have not seen yet. In other words, in Pascal’s game model we know we are rolling a six-sided die.

In the real world, by contrast, we don’t know how many sides the die has; we only know the sides we have seen. The point is that the real world produces results outside our imagination that seem impossible to us before we experience them, such as 9/11, the financial meltdown of 2008, or the first sighting of a black swan.

Relying on Pascal’s model in Extremistan not only blinds us to the possibility of extreme events; it may also give us false confidence that they are impossible and need not be considered.
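
To illustrate that blindness, here is a small hypothetical sketch: the die is given a seventh face with a vanishingly small weight (both invented for this example), and a model built purely from observed rolls assigns the never-seen face a probability of exactly zero, right up until the moment it appears.

```python
# A hypothetical sketch of the 'unknown die' problem. The seventh face and
# its tiny weight are invented for illustration; the point is only that a
# model built from observed outcomes gives unseen events probability zero.
import random

faces = [1, 2, 3, 4, 5, 6, 7]              # the real die has a face we have never seen
weights = [1.0] * 6 + [0.0003]             # face 7 is the rare 'black swan'

rolls = random.choices(faces, weights=weights, k=1_000)

# A Pascal-style model of the die, built only from what we have observed.
observed = sorted(set(rolls))
estimate = {f: rolls.count(f) / len(rolls) for f in observed}

print("faces we believe exist:", observed)            # very likely just 1-6 on any run
print("estimated P(face 7):", estimate.get(7, 0.0))   # 0.0 until it finally happens
```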

Establish a common language

Vehicles, buildings, smartphones, the “internet of things”: everything is becoming interconnected and complex by our own choice. Cyber security is focused on malicious actors compromising the system, but what if the system becomes compromised by its own complexity? What do we call a cyber threat that does not involve a person, but is simply an inevitable, unpredictable failure that does not adhere to our predominant models of probability? That is where I would like to start: by giving the problem a name in the cyber realm. I propose we start with “cyber gravity,” since it suggests, like Self-Organized Criticality and normal accident theory, that the bigger and more interconnected our systems get, the harder gravity will pull on them.

Response

Once the problem has a name, it can be addressed in policy. Prevention of cyber gravity may be futile; the only way to truly prevent Self-Organized Criticality is to stop developing the system, and that is not an option. The paradox is that as technology systems become more efficient, less expensive, and more capable, they also become more self-organized.[20] In other words, the better our systems become, the harder gravity pulls on them. Therefore, we should acknowledge that a collapse will happen and focus our efforts on how to respond to it.

Each new piece of technology implemented and connected should be evaluated for its cyber gravity and for how to respond to its collapse. A response policy should be dynamic and ready to adapt, because the system will be constantly self-organizing, increasing its gravity. In the words of Per Bak himself, “equilibrium equals death. Change is catastrophic. We must adapt because we can’t predict.”[21]

Conclusion

Cyber threat frameworks typically focus on individuals, organized groups, or states. However, the theory of self-organized criticality suggests that there is a fourth threat: the tendency of complex, interconnected systems to collapse unpredictably on their own. The study of complex, interconnected systems has spanned many domains, and the conclusions are similar: (1) collapses are not predictable in timing or size, and (2) collapses are inevitable as the system grows. In addition, we tend to be blind to the probability of these collapses because our dominant model of probability is not suited to predicting events we have never experienced.

As a policy response to the problem of increasingly complex, interconnected systems, I suggest we start by establishing a common term and naming the problem: “cyber gravity.” In addition, preventing cyber gravity appears to be as difficult as preventing physical gravity; therefore, policies should focus on responding to collapses in the system, not on preventing them.

[1] Frank L. Greitzer et al., “Combating the Insider Cyber Threat,” IEEE Security & Privacy 6, no. 1 (2008): 61–64.

[2] Raksha Chouhan, “Cyber Crimes: Evolution, Detection and Future Challenges,” The IUP Journal of Information Technology 10, no. 1 (2014): 48–55.

[3] S. M. Furnell and Matthew J. Warren, “Computer Hacking and Cyber Terrorism: The Real Threats in the New Millennium?” Computers & Security 18, no. 1 (1999): 28–34.

[4] Richard Kahn and Douglas Kellner, “New Media and Internet Activism: From the ‘Battle of Seattle’ to Blogging,” New Media & Society 6, no. 1 (2004): 87–95.

[5] Robert Marquand and Ben Arnoldy, “China Emerges as Leader in Cyberwarfare,” Christian Science Monitor, 2007, 1.

[6] Per Bak, Chao Tang, and Kurt Wiesenfeld, “Self-Organized Criticality: An Explanation of the 1/f Noise,” Physical Review Letters 59 (1987): 381.

[7] Theodore Gyle Lewis, Bak’s Sand Pile: Strategies for a Catastrophic World (Agile Press, 2011).

[8] Jingwei Wu and Richard Holt, “Seeking Empirical Evidence for Self-Organized Criticality in Open Source Software Evolution” (2006).

[9] Wu and Holt, “Seeking Empirical Evidence for Self-Organized Criticality in Open Source Software Evolution.”

[10] Lewis, Bak’s Sand Pile.

[11] Nassim Nicholas Taleb, Antifragile: Things That Gain from Disorder (Random House LLC, 2012).

[12] Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (Random House LLC, 2010).

[13] Taleb, The Black Swan.

[14] Taleb, The Black Swan.

[15] Lewis, Bak’s Sand Pile.

[16] Lewis, Bak’s Sand Pile.

[17] Lewis, Bak’s Sand Pile.

[18] Lewis, Bak’s Sand Pile.

[19] Lewis, Bak’s Sand Pile.

[20] Lewis, Bak’s Sand Pile.

[21] Lewis, Bak’s Sand Pile.
