Systemic Security

SUMMARY: The study of dynamic event ‘signatures’ induced by the collective behavior of autonomous programs heralds a necessary evolution of computer security in a world headed toward AI and Big Data.

For FY2012, I nominated for the yearly NSA Science of Security best-paper competition a 2011 paper by Johnson et al., “Financial black swans driven by ultrafast machine ecology”, which appeared in Nature’s Scientific Reports almost two years later.

Johnson’s team investigated phenomenological ‘signatures’ of interacting autonomous computer agents in a real-world setting, namely electronic trading venues. Whether or not their hypothesis of an all-machine time regime, characterized by frequent black-swan events of ultrafast duration (<650 ms for crashes, <950 ms for spikes), holds up, the study of event ‘signatures’ induced by the collective behavior of autonomous programs heralds a necessary evolution of computer security in a world headed toward AI and Big Data.
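The flavor of Johnson et al.’s crash/spike criterion can be caricatured in a few lines: scan for strictly monotonic price runs whose relative move exceeds a threshold while the run’s duration stays below an ultrafast cutoff. The thresholds and the synthetic tick data below are illustrative assumptions, not the paper’s exact transaction-level definition:

```python
from dataclasses import dataclass

@dataclass
class Tick:
    t_ms: int      # timestamp in milliseconds
    price: float

def find_ultrafast_events(ticks, min_move=0.008, max_duration_ms=1500):
    """Flag strictly monotonic price runs whose relative move is at least
    `min_move` and whose duration is at most `max_duration_ms`.
    Simplified stand-in for an ultrafast crash/spike criterion."""
    events = []
    i = 0
    while i < len(ticks) - 1:
        j = i
        direction = ticks[i + 1].price - ticks[i].price
        # extend the run while price keeps moving in the same direction
        while (j + 1 < len(ticks) and
               (ticks[j + 1].price - ticks[j].price) * direction > 0):
            j += 1
        move = (ticks[j].price - ticks[i].price) / ticks[i].price
        duration = ticks[j].t_ms - ticks[i].t_ms
        if abs(move) >= min_move and duration <= max_duration_ms:
            events.append(("spike" if move > 0 else "crash",
                           ticks[i].t_ms, duration))
        i = max(j, i + 1)   # always make progress
    return events

# synthetic example: a ~1% crash over 600 ms, then a slow recovery
ticks = [Tick(0, 100.0), Tick(200, 99.6), Tick(400, 99.2), Tick(600, 99.0),
         Tick(2000, 99.5), Tick(4000, 100.0)]
print(find_ultrafast_events(ticks))
```

The slow recovery leg is monotonic too, but its duration exceeds the cutoff, so only the crash is flagged.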

SIDENOTE July 2015: An alternative explanation from my on and off discussions with Michael Wellman (who just won a $200k FLI prize to investigate AI risks to the financial system):

He ultimately came to the conclusion that their findings are actually not evidence of machine ecologies. A much more plausible explanation for the spikes and crashes they observed is “market sweep orders” (MSOs), as explained by Golub et al. in a 2012 arXiv paper. I suspect that such chains of machine interactions do exist, but their signature is probably more complicated, and as far as I know nobody has reliably identified them yet.

Adversarial Environments

Oscillations in salient performance metrics are a ‘macrosignature’ indicating adaptive opponents and/or an adversarial environment (modulo the fallacy of affirming the consequent).

Apprehensions of illegal aliens attempting to cross the US Southwest border oscillate in this way. Factors inducing the oscillations include detection/evasion means, the economic climate inside and outside the US, Federal enforcement of immigration laws, and more.

EMV (Chip-and-PIN) fraud rates exhibit this type of oscillation.

Ross Anderson talk at VB 2015
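One crude way to operationalize the oscillation ‘macrosignature’ is to ask whether a single non-trivial frequency dominates a metric’s power spectrum. The 0.5 power-share threshold and the synthetic fraud-rate-like series below are illustrative assumptions:

```python
import numpy as np

def dominant_oscillation(series, threshold=0.5):
    """Return (has_oscillation, period_in_samples) by checking whether one
    non-DC frequency bin carries more than `threshold` of spectral power.
    Crude 'macrosignature' detector; the threshold is an arbitrary choice."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                       # drop the DC component
    power = np.abs(np.fft.rfft(x)) ** 2
    power[0] = 0.0
    total = power.sum()
    if total == 0:
        return False, None                 # constant series: no signal
    k = int(power.argmax())
    if power[k] / total < threshold:
        return False, None                 # no single dominant frequency
    return True, len(x) / k                # period in samples

# synthetic metric oscillating with period 8 around a baseline of 10
t = np.arange(64)
metric = 10 + 3 * np.sin(2 * np.pi * t / 8)
print(dominant_oscillation(metric))
```

Real adversarial metrics are noisier and non-stationary, so a practical detector would work on windowed spectra or autocorrelation rather than one global FFT.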

Since the aggregate behavior of even simple agents is highly unpredictable (and not consistent across time scales), no useful a priori security guarantees anent the dynamics can be given. Rather, systemic computer security will make its debut: the study of signatures in phase space, and the requisite design of circuit breakers and rectifiers for when the warning bells ring.
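A minimal sketch of such a ‘circuit breaker’, assuming a deliberately simple trip condition (the monitored metric swings more than 5% within a short sliding window) and a fixed cooldown; a real design would key off richer phase-space signatures:

```python
import collections

class CircuitBreaker:
    """Halt activity when a monitored metric moves more than `max_move`
    (fractionally) within a sliding window of `window` observations.
    Illustrative sketch; all thresholds are assumptions, not standards."""
    def __init__(self, window=5, max_move=0.05, cooldown=3):
        self.history = collections.deque(maxlen=window)
        self.max_move = max_move
        self.cooldown = cooldown
        self.halted_for = 0

    def observe(self, value):
        """Feed one observation; return True if activity may proceed."""
        if self.halted_for > 0:                 # still in cooldown
            self.halted_for -= 1
            return False
        self.history.append(value)
        lo, hi = min(self.history), max(self.history)
        if lo > 0 and (hi - lo) / lo > self.max_move:
            self.halted_for = self.cooldown     # trip the breaker
            self.history.clear()
            return False
        return True

cb = CircuitBreaker()
stream = [100, 100.5, 101, 94, 93, 93, 93, 93]
print([cb.observe(v) for v in stream])
```

The sudden drop from 101 to 94 trips the breaker, which then refuses activity for the cooldown period before rearming.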

Surprising aggregate behavior of individual agents is not new. Pars pro toto I list Victor Vyssotsky, Robert Morris Sr., and Doug McIlroy’s Darwin at Bell Labs in the 1960s (the precursor of Core War),

From the linked image: Each program is a “virtual machine” consisting of computer code. Programs are written in special languages resembling assembly language. Programs are eliminated when they crash or run out of system memory (“core”) in which to operate.

Here’s a 2014 version of Core War, embedded in Minecraft:

https://www.youtube.com/watch?v=GJAPgDtMKuQ

Conway’s Game of Life in the 1970s,

From the abstract of Yaroslavsky (2013), who made the Game of Life rules stochastic: “A number of new phenomena in the evolutionary dynamics of the models and collective behavior of patterns they generate are revealed, described and illustrated: formation of maze-like patterns as fixed points of the models, ‘self-controlled growth’, ‘eternal life’ in a bounded space and ‘coherent shrinkage’.”
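To make the reference concrete, here is a minimal Life generation over a sparse set of live cells, with the B3 birth rule allowed to fire only with probability p_birth. The stochastic twist is only loosely in the spirit of Yaroslavsky’s models, whose actual rules differ:

```python
import random

def step(live, p_birth=1.0, rng=random.random):
    """One Game of Life generation on a set of live (x, y) cells.
    With p_birth < 1 the birth rule B3 fires only stochastically;
    p_birth = 1 recovers Conway's classic deterministic rules."""
    counts = {}
    # tally live neighbors for every cell adjacent to a live cell
    for (x, y) in live:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    c = (x + dx, y + dy)
                    counts[c] = counts.get(c, 0) + 1
    nxt = set()
    for cell, n in counts.items():
        if cell in live and n in (2, 3):       # survival rule: S23
            nxt.add(cell)
        elif cell not in live and n == 3:      # birth rule: B3
            if rng() < p_birth:
                nxt.add(cell)
    return nxt

# deterministic check (p_birth = 1): a blinker oscillates with period 2
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(step(blinker)) == blinker)   # True
```

Even this tiny kernel is enough to reproduce the classic oscillators, gliders, and the unpredictability of aggregate pattern evolution.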

and Koza’s LISP programs in the 1990s

From the foreword I wrote in J. Comput. Virol. (2009): “On self-reproducing computer programs”

All these remained interesting curiosities with no real-world ramifications. This detachment changed in the 21st century with the computerization of vast swaths of life, specifically the advent of automated black-box trading (i.e., signatures of competing trading programs[1] chasing the same signals[2], causing the 18,000 extreme price changes (black-swan events) reported in this paper).


Formal models of strategic interactions of self-interested agents exist in simplified settings, characterizing phenomenological properties of the system in a Nash equilibrium (see noncooperatively optimized tolerance (NOT)[3]). However, in real-world interactions, human agents do not (and realistically cannot) compute Nash equilibria; algorithmic agents could, but to no avail for complicated (i.e., real-life) games, whose free-parameter space induces high-dimensional chaotic attractors that make ‘rational learning’ effectively random[4].
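Non-convergent learning is visible even in the simplest adversarial game: under fictitious play in matching pennies, pure actions cycle forever while only the empirical frequencies drift toward the mixed equilibrium. A small sketch (the tie-breaking seed counts are arbitrary choices):

```python
def fictitious_play(steps=1000):
    """Fictitious play in matching pennies: each round, A best-responds to
    B's empirical action frequencies (A wants to match) and B best-responds
    to A's (B wants to mismatch). Pure play never settles; only the
    empirical frequencies approach the mixed equilibrium (1/2, 1/2)."""
    a_count = [1, 0]        # A's past actions (seeded to break the first tie)
    b_count = [0, 1]        # B's past actions
    switches, prev = 0, None
    for _ in range(steps):
        a = 0 if b_count[0] >= b_count[1] else 1        # A matches B's modal action
        b = 1 - (0 if a_count[0] >= a_count[1] else 1)  # B mismatches A's modal action
        a_count[a] += 1
        b_count[b] += 1
        if prev is not None and (a, b) != prev:
            switches += 1                               # joint action flipped again
        prev = (a, b)
    freq_a_heads = a_count[0] / sum(a_count)
    return freq_a_heads, switches

freq, n_switches = fictitious_play()
print(round(freq, 3), n_switches)
```

The positive switch count shows the play never locks in, while the heads frequency hovers near 1/2; Galla and Farmer’s point is that in high-dimensional games this non-convergence becomes effectively chaotic rather than a neat cycle.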

Slide from my Telematics talk, Arlington (VA), 2014

These indicators, together with the rapid AI-ization (and UAS-ification) of everyday life, with the specter of autonomous action chains looming[5], make it imperative that dynamic-system warning-bell signatures be identified and smart ‘circuit breakers’ designed.

I am of course not the first one to sense these dynamics. Didier Sornette sensitized me to endogenous super-exponential positive feedback mechanisms in Critical Market Crashes (2002)

Since humans are able to navigate complex games, perhaps temporal-difference learning (based on a deferred ‘reward’) can be adapted to machine ecologies[6].
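As a toy version of that idea, TD(0) on a small random-walk chain shows a deferred terminal reward propagating backwards into state values; the chain, the learning rate, and the discount below are all illustrative choices:

```python
import random

def td0_chain(episodes=200, alpha=0.1, gamma=0.9, seed=0):
    """TD(0) value learning on a 7-state chain (states 0..6, ends terminal):
    start in the middle, random-walk left/right, reward 1 only on reaching
    the right end. The deferred reward gradually backs up through V."""
    rng = random.Random(seed)
    V = [0.0] * 7                       # terminal values stay 0 by convention
    for _ in range(episodes):
        s = 3                           # start in the middle of the chain
        while s not in (0, 6):
            s2 = s + rng.choice((-1, 1))
            r = 1.0 if s2 == 6 else 0.0
            # bootstrap target: immediate reward plus discounted next value
            target = r + (0.0 if s2 in (0, 6) else gamma * V[s2])
            V[s] += alpha * (target - V[s])
            s = s2
    return V[1:6]                       # values of the non-terminal states

vals = td0_chain()
print([round(v, 2) for v in vals])
```

After a few hundred episodes the values rise from left to right, reflecting each state’s proximity to the rewarded end, even though no state except the last transition ever sees a nonzero reward directly.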


[1] NANEX, “Crop Circles” http://www.nanex.net/FlashCrash/CCircleDay.html

[2] Brown, “Chasing the same signals”, Wiley, 2010

[3] Vorobeychik, Y. et al., “Noncooperatively Optimized Tolerance: Decentralized Strategic Optimization in Complex Systems”, Phys. Rev. Lett. 107 (10), 2011, http://link.aps.org/doi/10.1103/PhysRevLett.107.108702

[4] Galla and Farmer, “Complex dynamics in learning complicated games,” PNAS 110 (4), 2013 http://www.pnas.org/content/110/4/1232.full.pdf+html

[5] Arkin, Ronald C., Patrick Ulam, and Alan R. Wagner. “Moral decision making in autonomous systems: Enforcement, moral emotions, dignity, trust, and deception.” Proceedings of the IEEE 100.3 (2012): 571–589.

[6] Sejnowski, T. J., “Nature is cleverer than we are” in “This Explains Everything: 150 Deep, Beautiful, and Elegant Theories of How the World Works” (Ed. J. Brockman), Harper 2013