SUMMARY: The study of dynamic event ‘signatures’ induced by the collective behavior of autonomous programs heralds a necessary evolution of computer security in a world headed toward AI and Big Data.
For FY2012, I nominated for the yearly NSA Science of Security best paper competition a 2011 paper by Johnson et al., “Financial black swans driven by ultrafast machine ecology”, which appeared in Nature’s Scientific Reports almost two years later.
Johnson’s team investigated phenomenological ‘signatures’ of interacting autonomous computer agents in a real-world setting, namely electronic trading venues. Whether or not the hypothesis of an all-machine time regime characterized by frequent black swan events of ultrafast duration (<650 ms for crashes, <950 ms for spikes) holds, the study of event ‘signatures’ induced by the collective behavior of autonomous programs heralds a necessary evolution of computer security in a world headed toward AI and Big Data.
SIDENOTE July 2015: An alternative explanation from my on-and-off discussions with Michael Wellman (who just won a $200k FLI prize to investigate AI risks to the financial system):
Wellman ultimately came to the conclusion that the findings are actually not evidence of machine ecologies. A much more plausible explanation for the spikes and crashes they observed is “intermarket sweep orders” (ISOs), as explained by Golub et al. in a 2012 arXiv paper. I suspect that such chains of machine interactions do exist, but the signature is probably more complicated, and as far as I know nobody has reliably identified them yet.
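Whatever the ultimate explanation, the operational notion of an ultrafast extreme event is easy to state. Below is a minimal sketch, loosely following the criteria reported by Johnson et al. (a run of at least ten same-direction ticks with a cumulative price move above roughly 0.8%, completed within a sub-second-scale window); the function name and threshold values here are illustrative, not taken from the paper’s code:

```python
def find_ultrafast_events(ticks, min_ticks=10, min_move=0.008, max_ms=1500):
    """Flag runs of same-direction price ticks that qualify as ultrafast
    extreme events: at least `min_ticks` sequential moves in one direction,
    a cumulative relative price change of at least `min_move`, completed
    within `max_ms` milliseconds. `ticks` is a list of (timestamp_ms, price)
    pairs. All thresholds are illustrative."""
    events = []
    i = 0
    while i < len(ticks) - 1:
        j, direction = i, 0
        # extend the run while successive ticks keep moving the same way
        while j + 1 < len(ticks):
            step = ticks[j + 1][1] - ticks[j][1]
            d = (step > 0) - (step < 0)
            if d == 0 or (direction and d != direction):
                break
            direction = d
            j += 1
        if j - i >= min_ticks:
            move = abs(ticks[j][1] - ticks[i][1]) / ticks[i][1]
            duration = ticks[j][0] - ticks[i][0]
            if move >= min_move and duration <= max_ms:
                events.append(("spike" if direction > 0 else "crash",
                               ticks[i][0], duration, move))
        i = max(j, i + 1)
    return events
```

On a synthetic series of eleven consecutive down-ticks totalling a 1.1% drop in 550 ms, this reports a single ‘crash’.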
Oscillations in salient performance metrics are a ‘macrosignature’ indicating adaptive opponents and/or an adversarial environment (modulo the fallacy of affirming the consequent).
EMV (chip-and-PIN) fraud rates exhibit this type of oscillation.
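A crude way to quantify such a macrosignature: detrend the metric series and check whether spectral power concentrates in a single non-DC frequency. This is an illustrative sketch, not a vetted detector (the function name and the interpretation thresholds are mine):

```python
import numpy as np

def oscillation_score(series):
    """Crude 'macrosignature' check: the fraction of total spectral power
    carried by the single strongest non-DC frequency of the detrended
    series. Near 1 suggests a strong periodic component (e.g. fraud rates
    oscillating as attackers and defenders adapt); near 0 suggests noise."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                       # detrend: remove the DC level
    power = np.abs(np.fft.rfft(x)) ** 2
    power[0] = 0.0                         # ignore any residual DC power
    total = power.sum()
    return float(power.max() / total) if total > 0 else 0.0
```

A pure sine scores close to 1; white noise spreads its power across all frequency bins and scores far lower.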
Since the aggregate behavior of even simple agents is highly unpredictable (and not consistent across time scales), no useful a priori security guarantees anent the dynamics can be given. Rather, systemic computer security will make its debut: the study of signatures in phase space, and the requisite design of circuit breakers and rectifiers for when the warning bells sound.
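What such a ‘circuit breaker’ might look like at its most skeletal: halt activity once a monitored metric leaves its tolerance band for several consecutive observations. This is a hypothetical sketch; the class name, band, and patience parameter are all illustrative:

```python
class CircuitBreaker:
    """Skeletal systemic 'circuit breaker': halt activity once a monitored
    metric leaves its tolerance band [low, high] for `patience` consecutive
    readings. All names and thresholds are illustrative."""

    def __init__(self, low, high, patience=3):
        self.low, self.high = low, high
        self.patience = patience
        self.violations = 0
        self.tripped = False

    def observe(self, value):
        """Feed one reading; return True while activity may continue."""
        if self.tripped:
            return False
        if value < self.low or value > self.high:
            self.violations += 1
            if self.violations >= self.patience:
                self.tripped = True      # trip: stays halted until reset
        else:
            self.violations = 0          # a healthy reading resets the count
        return not self.tripped
```

Requiring several consecutive violations is one simple way to trade off false alarms against reaction time; real designs would need to be calibrated against the signature being watched.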
Surprising aggregate behavior of individual agents is not new. Pars pro toto I list Bell Labs’ Darwin game (Victor Vyssotsky, Robert Morris Sr., and Doug McIlroy) in the 1960s, the precursor of Core War,
(here’s a 2014 version of Core War embedded in Minecraft),
Conway’s Game of Life in the 1970s,
and Koza’s genetically evolved LISP programs in the 1990s.
All these remained interesting curiosities with no real-world ramifications. This detachment changed in the 21st century with the computerization of vast swaths of life, specifically the advent of automated black-box trading (i.e. the signatures of competing trading programs chasing the same signals, which caused some 18,000 extreme price changes, black swan events, in the paper above).
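As a reminder of how little machinery such surprising aggregate behavior requires, a complete update step for Conway’s Game of Life fits in a few lines (this sparse-set formulation is one common idiom, not tied to any particular historical implementation):

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Game of Life on a sparse set of live
    (x, y) cells: a live cell survives with 2 or 3 live neighbours,
    a dead cell is born with exactly 3 live neighbours."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0))
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in cells)}
```

Feed it a five-cell glider and the pattern translates itself one step diagonally every four generations; nothing in the rule mentions motion.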
Formal models of the strategic interactions of self-interested agents exist in simplified settings and characterize phenomenological properties of the system in a Nash equilibrium (see noncooperatively optimized tolerance (NOT)). However, in real-world interactions human agents do not (and realistically cannot) compute Nash equilibria; algorithmic agents could, but it would be of no use for complicated (i.e. real-life) games, whose free-parameter spaces induce high-dimensional chaotic attractors that make ‘rational learning’ effectively random.
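Even the simplest games already show why naive ‘rational learning’ need not settle down. In matching pennies, alternating pure best responses cycle forever instead of reaching the game’s (mixed) Nash equilibrium; the chaotic learning dynamics of Galla and Farmer are a high-dimensional relative of this toy effect. A minimal sketch (the function is mine, for illustration):

```python
def best_response_cycle(steps=12):
    """Alternating pure best-response dynamics in matching pennies.
    The joint action never settles: it cycles with period 4 instead of
    converging to the mixed Nash equilibrium -- a zero-dimensional
    caricature of non-convergent 'rational learning'."""
    payoff = {('H', 'H'): 1, ('H', 'T'): -1,   # row player's payoff;
              ('T', 'H'): -1, ('T', 'T'): 1}   # column player gets -payoff
    row, col = 'H', 'H'
    trajectory = [(row, col)]
    for t in range(steps):
        if t % 2 == 0:   # column player minimizes the row player's payoff
            col = min('HT', key=lambda c: payoff[(row, c)])
        else:            # row player maximizes its own payoff
            row = max('HT', key=lambda r: payoff[(r, col)])
        trajectory.append((row, col))
    return trajectory
```

The trajectory visits all four joint actions and returns to its starting point every four steps, never converging.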
These indicators, together with the rapid AI-ization (and UAS-ification) of everyday life, with the specter of autonomous action chains looming, make it imperative that dynamic-system warning-bell signatures be identified and smart ‘circuit breakers’ designed.
Since humans are able to navigate complex games, perhaps temporal-difference learning (based on a deferred ‘reward’) can be adapted to machine ecologies.
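For concreteness, the core of temporal-difference learning is a one-line bootstrapped update. A minimal TD(0) sketch, with state encoding and hyperparameters chosen purely for illustration:

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) update: nudge V(s) toward the bootstrapped target
    r + gamma * V(s_next), so a deferred reward propagates backwards
    through earlier states over repeated visits. States missing from
    the dict V default to a value of 0."""
    v = V.get(s, 0.0)
    V[s] = v + alpha * (r + gamma * V.get(s_next, 0.0) - v)
```

On a two-step chain 0 -> 1 -> terminal with a single reward of 1 on the final transition, repeated sweeps drive V(1) toward 1 and V(0) toward gamma * V(1) = 0.9: the deferred reward leaks backwards one state per visit.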
Brown, B., “Chasing the Same Signals”, Wiley, 2010
Vorobeychik, Y., et al., “Noncooperatively Optimized Tolerance: Decentralized Strategic Optimization in Complex Systems”, Phys. Rev. Lett. 107 (10), 2011, http://link.aps.org/doi/10.1103/PhysRevLett.107.108702
Galla, T., and Farmer, J. D., “Complex dynamics in learning complicated games”, PNAS 110 (4), 2013, http://www.pnas.org/content/110/4/1232.full.pdf+html
Arkin, R. C., Ulam, P., and Wagner, A. R., “Moral decision making in autonomous systems: Enforcement, moral emotions, dignity, trust, and deception”, Proceedings of the IEEE 100 (3), 2012, 571–589
Sejnowski, T. J., “Nature is cleverer than we are”, in “This Explains Everything: 150 Deep, Beautiful, and Elegant Theories of How the World Works” (Ed. J. Brockman), Harper, 2013