Roadblocks to Computational Modeling and Theory Development in Strategy — and a Potential Way Forward

So, readers who have been following my work have likely noticed that I often describe it as “computational modeling of strategy” or “computational modeling of strategy and tactics.” Yet I seem to spend much more time writing, blogging, and tweeting than doing active research (beyond coursework). Why is that? Well, I have spent the past year tinkering with the research problems technically as well as, more importantly, pondering them intellectually since fall 2013. I am not an especially masochistic researcher; the challenge has simply been to put the pieces together. “Computational” and “strategy” (at least the way I learned the latter) do not go together that easily. As I have often noted, I came into my current PhD program from a background in strategic studies, whose research problems, culture, and norms are wholly different from those of the mainstream social sciences. Moreover, strategic studies researchers have an instinctive suspicion of mathematics and computer modeling. Hence, the way to produce new work in strategic theory utilizing computational models, simulation, and the metaphor of computation more broadly is neither easy nor readily apparent.

In particular, the methodological formalism that people in my field specialize in — bottom-up agent models of social processes — is an especially poor fit for strategic studies, despite sharing some key things with orthodox strategic concepts, such as the idea of conflict as possessing a “logic” and “grammar” and the shared emphasis on reconstructive explanation of overall outcomes. I am, of course, not deterred by these difficulties; as a researcher it is my job to try to solve them. But I will elucidate them here as an update on what I am thinking about these days, research-wise.

I have realized that there are two core problems to be surmounted in bringing computational modeling to strategic theory: the representation of action selection mechanisms and the representation of the strategic environment. Social science work often stumbles in both areas, and more qualitative and philosophical strategic studies research does not offer much of a guide either for the narrow task of computational modeling. While both of these challenges are difficult and I do not expect to solve them in my doctorate — or my lifetime — I feel that in elucidating them here I at least understand the nature of the beast I am dealing with.

I have a tentative idea as to how to begin to tackle both as I have formulated them, which I have been tinkering with computationally and mathematically as I wrestled with the larger ideas surrounding the method and approach I am building. Future entries will build on the nub of the approach I draw out in Section III as I start to seriously develop it in a theory-driven way.

The first issue is how to represent both strategic behavior and the process that generates it. This may seem obvious to some, but when you truly dig in it is an extremely thorny matter. Strategic behavior is a set of actions, and representing it requires specifying an action selection mechanism.

As Everett Dolman and Lawrence Freedman have recently argued, what exactly a strategy is, and how to understand the distinction between tactics and strategy, has been systematically muddled. Though, of course, I doubt that a consensus ever existed to begin with. Today, strategy in the military context is often analyzed from the following perspectives: a means-end reasoning process, an aggregate term for a host of subfunctions that differ according to levels of certainty, a means of exerting control over an adversary, a mechanism for creating advantage over an adversary, a way of shaping and guiding military means in anticipation of future events, a purpose-built bridge between political motivations and coercive force, a way of organizing state power to achieve desired aims, and a design for how to evolve and learn while isolating an adversary and enlarging one’s own options and allegiances. A lot of different definitions, no? And all of them have widely different implications for translating strategic theory into a computational model.

In the social sciences the same problem exists. The game-theoretic definition of a strategy — a way of assigning discrete actions to take given the expected behavior of another agent — differs from the sociological notion of strategy as a repertoire of standard operating procedures consisting of both primitive actions and mechanisms for selecting primitive actions. Then there is the organizational theory definition of strategy, which is aptly described in the military context in an Armed Forces Journal article by Ionut C. Popescu. To some extent, all of these definitions share some core things in common: the need to act based on an anticipation of how the opponent will act given his or her beliefs about how you will act, and the notion of strategy as a narrative template for how to act. And I think that, if we delimit the domain of application solely to the military realm (the traditional environment of strategic studies), everyone will agree with Clausewitz’s basic formulation that strategy is the art of using discrete actions of violence (tactics) to accomplish a political aim. Or, as a friend often says, strategy is ‘done’ as tactics. With that throat-clearing out of the way, though, the problem is still not solved.

Clausewitz, like many game theorists, argued for the notion of war as a formalized game structure. Moreover, like von Neumann and Morgenstern, Clausewitz was inspired by probabilistic games of incomplete information (poker), in which the rules are simple but dealing with the uncertainty inherent in decisionmaking is not. Such games involve inferences about the partially observable behavior of an opponent. Consider, for example, the basic strategic problem faced by Germany in 1944 of allocating resources to either Calais or Normandy. The Germans knew an invasion could happen; they did not know where it would happen, which can be represented game-theoretically as seen below:
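A minimal sketch of this allocation problem as a 2x2 zero-sum game. The payoff numbers below are invented for illustration, not historical estimates; each entry stands for the Allies’ chance of a successful lodgement, which the German side seeks to minimize.

```python
# Illustrative reconstruction of the Calais-vs-Normandy problem as a
# 2x2 zero-sum game. Payoffs are invented assumptions, NOT historical
# estimates: each entry is the Allies' probability of a successful
# lodgement, which the Germans seek to minimize.
from fractions import Fraction

# Rows: where the Allies land; columns: where the Germans concentrate.
payoff = {
    ("normandy", "normandy"): Fraction(2, 10),  # landing meets main defense
    ("normandy", "calais"):   Fraction(8, 10),  # landing hits a weak sector
    ("calais",   "normandy"): Fraction(7, 10),
    ("calais",   "calais"):   Fraction(1, 10),
}

def german_mixed_strategy():
    """Probability that the Germans concentrate at Normandy, chosen so
    the Allies are indifferent between their pure strategies -- the
    indifference condition that defines a mixed-strategy equilibrium."""
    a = payoff[("normandy", "normandy")]
    b = payoff[("normandy", "calais")]
    c = payoff[("calais", "normandy")]
    d = payoff[("calais", "calais")]
    # Solve a*p + b*(1 - p) = c*p + d*(1 - p) for p.
    return (d - b) / (a - b - c + d)
```

With these toy numbers the Germans defend Normandy with probability 7/12, leaving the Allies indifferent between landing sites, which is the defining property of a mixed-strategy equilibrium.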

For those who are not mathematically inclined, a “mixed strategy” is an assignment of probabilities to pure strategies (a pure strategy being a mechanism for choosing an action based on the expected behavior of the opponent). Muddying the waters even more, some situations may necessitate treating the problem as a behavior strategy, which assigns probabilities to actions rather than to algorithms for choosing actions. Given the seeming isomorphism between Clausewitz and individualist game-theoretic representations in social science, one may wonder why we ought not simply to translate Clausewitzian maxims into game-theoretic problems and go from there. Game theory also supports other, non-Clausewitzian strategic formalisms; if you are a Wylie fan, certain classes of games may be understood as optimal control problems (the way in which systems determine policies for controlling their own operation and adaptation in dynamic environments). All of this, though, ignores a fundamental question: if strategy is a mechanism for selecting actions, what is an “action” to begin with?

Action selection is a way of characterizing the most basic problem of intelligent systems: what to do next. In artificial intelligence and computational cognitive science, the action selection problem is typically associated with intelligent agents and animats — artificial systems that exhibit complex behaviour in an agent environment. The term is also sometimes used in ethology or animal behaviour.

A basic problem for understanding action selection is determining the level of abstraction used for specifying an ‘act’. At the most basic level of abstraction, an atomic act could be anything from contracting a muscle cell to provoking a war. Typically for an artificial action-selection mechanism, the set of possible actions is predefined and fixed. However, in nature agents are able to control action at a variety of levels of abstraction, and the acquisition of skills or expertise can also be viewed as the acquisition of new action selection primitives.

Here we see several problems with both social scientific and military-strategic approaches to strategy. I will tackle them in order.

First, the social scientific representations often seen in mathematical and computational models leave a lot to be desired. Social scientific notions of strategy are defined around minimal units of choice — discrete actions such as “cooperate” or “defect.” Let’s illustrate this by going back to the Normandy example. A German strategy to block an Allied invasion of the European heartland would consist not only of a discrete choice as to which coastline to defend — it would be a theater-level schema specifying both cumulative and sequential elements of the strategic defense. Sequentially, there would need to be a set of high-level actions to perform in order to keep the Allies out, defined at the level of higher-order maneuver units such as the division. However, the organization of fortifications and resources would also have a cumulative effect on the Allies — wave after wave of failed invasion attempts crashing against a reinforced Fortress Europe would gradually break down the Allied will and capability to invade.

And then there is the notion that German strategy for Normandy was a subset of overall German strategy; resources that could have been allocated to the Western European defense were allocated to the Eastern Front. So what level of explanation are we choosing? Is the “strategy” the entire German strategy for the war, with progressively less abstract action primitives (e.g., “prioritize Eastern Front” and “defend Calais or Normandy” as different points in a long, hierarchical continuum of representational choice)? And what about the issue that strategy might also include decisions about what kinds of economic and industrial choices to make as the backbone of the entire war effort? Social scientific ways of characterizing “strategy” offer no guide for solving these problems. It may be countered that subgame perfect equilibrium, by decomposing games into subgames, handles this issue. Yet the notion of a subgame, while helpful, merely kicks the question of where to cut off abstraction down the road. Abstraction is inevitable, but where do we draw the line? How much ought to be modeled?
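The abstraction-cutoff problem can be made concrete with a toy hierarchy. Every name below is an invented placeholder, not a claim about actual German planning; the point is that the set of “primitive” actions a model sees is a function of where the modeler stops expanding.

```python
# A minimal sketch of the abstraction problem: the same strategy can be
# represented at several levels, and the modeler must choose where to
# cut off the hierarchy. All action names are illustrative placeholders.

ACTION_HIERARCHY = {
    "conduct strategic defense": [
        "prioritize Eastern Front",
        "defend Western coastline",
    ],
    "defend Western coastline": [
        "fortify Calais",
        "fortify Normandy",
    ],
    "fortify Normandy": [
        "emplace coastal batteries",
        "position reserve divisions inland",
    ],
}

def primitives_at_depth(action, depth):
    """Expand an abstract action into the primitives visible at a given
    cutoff depth; depth 0 treats the action itself as atomic."""
    if depth == 0 or action not in ACTION_HIERARCHY:
        return [action]
    result = []
    for child in ACTION_HIERARCHY[action]:
        result.extend(primitives_at_depth(child, depth - 1))
    return result
```

Change the depth argument and the model’s “atomic acts” change with it; nothing in the formalism itself tells you which cutoff is right.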

Additionally, as Barry D. Watts noted in his criticism of Bombing to Win, abstracting these elements into discrete behaviors or choices is problematic because a strategy may pursue multiple goals or behaviors either concurrently or over different scales of space and time. Moreover, the assumption of environmental determinism in many social science computational models holds that for every input the modeled entity faces, there is an identifiable mapping to an output behavior. This is unfortunate because it further assumes a finite set of mutually exclusive and easily realizable situation-action mappings; in the real world there is a combinatorial expanse of possible actions an agent must consider at any one moment, and goals and behaviors may conflict over resource allocations. For example, I cannot both type this blog post and write an essay for War on the Rocks in parallel; I only have two hands and one brain. Nor is trying to drive while talking on a cell phone that great of an idea. The same problem recurs when it comes to strategic learning in game theory. What’s the learning problem? Well, whenever you have a complex entity that performs multiple tasks, learning is only possible when it is constrained in some shape or form. In robotics and animal behavior, the problem of learning lies in the combinatorial explosion of possible things to learn from interaction with an environment. Adaptation, broadly construed, faces severe limitations:

The action selection mechanism (ASM) determines not only the agent’s actions in terms of impact on the world, but also directs its perceptual attention, and updates its memory. These egocentric sorts of actions may in turn result in modifying the agent’s basic behavioural capacities, particularly in that updating memory implies some form of learning is possible. Ideally, action selection itself should also be able to learn and adapt, but there are many problems of combinatorial complexity and computational tractability that may require restricting the search space for learning.
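The combinatorial point above is easy to make concrete: simply counting deterministic situation-to-action mappings shows why exhaustive representation fails even for toy problems. The numbers below are arbitrary illustrations.

```python
# Back-of-envelope illustration of the combinatorial explosion: the
# number of deterministic policies (complete situation-to-action
# mappings) is |actions| ** |situations|, which explodes even for
# toy problems. Inputs are arbitrary illustrative sizes.

def policy_count(n_situations, n_actions):
    """Number of distinct deterministic situation-to-action mappings."""
    return n_actions ** n_situations

# Even 20 situations with 5 candidate actions apiece yield roughly
# 9.5e13 distinct policies to search over.
n_policies = policy_count(20, 5)
```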

Finally, as some scientists have argued, the notion of humans and other animals as purely purposeful and goal-guided runs counter to basic notions of evolutionary design — of how entities deal with complex environments that pose multiple demands. Intelligent behavior as a whole is a mixture of state-based (automaton) and autonomous (purposeful) behavior, with motivation and drive as means of deciding among and organizing behaviors. Moreover, agents have to learn to solve multiple tasks and transfer knowledge across multiple contexts:

Note that the criterion we have in mind here is not specialized to a single task, as is often the case in applications of machine learning. Instead, a biological learning agent must make good predictions in all the contexts that it encounters, and especially those that are more relevant to its survival. Each type of context in which the agent must take a decision corresponds to a “task”. The agent needs to “solve” many tasks, i.e. perform multi-task learning, transfer learning or self-taught learning (Caruana, 1993; Raina et al., 2007). All the tasks faced by the learner share the same underlying “world” that surrounds the agent, and brains probably take advantage of these commonalities. This may explain how brains can sometime learn a new task from a handful or even just one example, something that seems almost impossible with standard single-task learning algorithms.

Note also that biological agents probably need to address multiple objectives together. However, in practice, since the same brain must take the decisions that can affect all of these criteria, these cannot be decoupled but they can be lumped into a single criterion with appropriate weightings (which may be innate and chosen by evolution). For example, it is very likely that biological learners must cater both to a “predictive” type of criterion (similar to the data-likelihood used in statistical models or in unsupervised learning algorithms) and a “reward” type of criterion (similar to the rewards used in reinforcement learning algorithms). The former explains curiosity and our ability to make sense of observations and learn from them even when we derive no immediate or foreseeable benefit or loss. The latter is clearly crucial for survival, as biological brains need to focus their modeling efforts on what matters most to survival. Unsupervised learning is a way for a learning agent to prepare itself for any possible task in the future, by extracting as much information as possible from what it observes, i.e., figuring out the unknown explanations for what it observes.

If this is difficult to describe when it comes to individual agents, it becomes very, very difficult when it comes to notions of strategic learning in organizations or even small groups. When social scientists, for example, claim that the Army “learned” counterinsurgency, they are often skimpy about the mechanism of learning, or about how a multifarious organization can learn the various tasks, sub-tasks, and sub-sub-tasks inherent in performing a vague and often highly amorphous concept such as “counterinsurgency.” They often simply assert it and wave their hands.

This essay has so far elucidated a host of problems with social scientific representations of strategy, but many of the same problems hold for traditional strategic theory and analysis. Even highly distributed entities can be abstracted as having an action selection procedure:

One fundamental question about action selection is whether it is really a problem at all for an agent, or whether it is just a description of an emergent property of an intelligent agent’s behaviour. However, the history of intelligent systems, both artificial (Bryson, 2000) and biological (Prescott, 2007) indicate that building an intelligent system requires some mechanism for action selection. This mechanism may be highly distributed (as in the case of distributed organisms such as social insect colonies or slime moulds) or it may be one or more special-purpose modules.

The biggest issue is, first and foremost, whether to use a homogeneous or heterogeneous agent assumption when representing each side in a conflict. Andrew Marshall and Albert Wohlstetter both argued for representing each faction as a system or organization; the strategy-generating process is how the components of the system produce a strategy given the system’s internal state, goals, and perceptions of likely opponent behavior. That may sound like an intuitive solution, but it runs up against a thorny problem — there is little evidence that strategic actors themselves use such representations when making decisions.

We use stereotypes, exemplars, and other aggregate simplifications such as “The Germans” and “The Russians” that functionally collapse all of the variegated people in such groupings into a representative agent. If we do X, “The Russians” will do Y, and so on. If we decide to use ____ combined arms tactic, the Japanese will do ___, and so on. This kind of folk psychology is, by the way, not unique to strategic practice and policy practice — it is enshrined in our law in the notion of corporate personhood, and in international law in the notion of state personhood. Corporations in particular are not just legally treated as people; they are also often behaviorally regarded as such. People functionally treat corporations as having minds:

Cases like this one have long puzzled philosophers. In everyday speech, it seems perfectly correct to say that a corporation can “intend,” “know,” “believe,” “want” or “decide.” Yet, when we begin thinking the matter over from a more theoretical standpoint, it may seem that there is something deeply puzzling here. What could people possibly mean when they talk about corporations in this way? … One of our most basic psychological capacities is our ability to think about things as having mental states, such as intentions and beliefs. Researchers refer to this capacity as “theory of mind.” Our capacity for theory of mind appears to be such a fundamental aspect of our way of understanding the world that we apply it even to completely inanimate entities.

So the question then becomes: if, to us, “the Russians” intend to do X, believe Y, and want Z (all terms that describe individual psychological attributes), what is the justification for using notions of civil-military relations, bureaucratic politics, and governmental decisionmaking processes as a whole as an explanatory tool to show why “the Russians” — in a particular strategic scenario — took action X or Y? Historians, who have the luxury of often being ad hoc about levels of analysis and about the means by which they characterize state variables in an explanation, can avoid this problem. But for those trying to generalize over a large set of cases, or to create a template explanation that could be applied to multiple situations, consistency becomes an enormous problem.

A secondary issue lies in the way strategic theorists trace the mechanisms by which strategic effect is produced. As I have noted previously, many such explanations retroactively impute motives, goals, intentions, and other semi-observable to completely unobservable properties to agents. This revealed-preference approach, common to both social science and strategic studies, ignores that preferences are often reinforced or constructed by behavior, as well as the problem that preferences may be state-dependent. We also have little idea whether the suggested causality behind such explanations is correct. In practice, such questions are ignored — making strategic theory often a cousin of disciplines that posit some high-level quality to explain observed behaviors and outcomes, an explanation that quickly breaks down when the outcome is traced or exhaustively reconstructed. Returning to the notion of “theory of mind,” it ought to be observed that actors involved in strategic scenarios have the same problem. They do not know for sure whether the motives, intentions, and beliefs they impute to opponents, friendlies, and neutrals are correct. Nor do they know for sure the true effects of their behaviors or the likely effects of their expected behaviors. Finally, it is simply implausible that they recursively simulate all of the expected actions of the opponent given the opponent’s expectations about them — that leads down an infinite regress of expectations about expectations about expectations.

I could go on, but for my interests these are the most relevant problems in specifying strategic agents and their behavior-generating processes. The other issue lies in their relationship to the strategic environment.

A related problem, specific to the act of computational modeling, lies in representing the modeled entity’s relationship to its environment. Clausewitz’s notion of the commander’s genius and intuition is interesting because it bridges two discrete traditions in cognitive science and psychology concerning the nature of decision behavior:

Now, if it is to get safely through this perpetual conflict with the unexpected, two qualities are indispensable: in the first place an understanding which, even in the midst of this intense obscurity, is not without some traces of inner light, which lead to the truth, and then the courage to follow this faint light. The first is figuratively expressed by the French phrase coup d’oeil. The other is resolution. As the battle is the feature in war to which attention was originally chiefly directed, and as time and space are important elements in it, and were more particularly so when cavalry with their rapid decisions were the chief arm, the idea of rapid and correct decision related in the first instance to the estimation of these two elements, and to denote the idea an expression was adopted which actually only points to a correct judgment by eye. Many teachers of the art of war also then gave this limited signification as the definition of coup d’oeil. But it is undeniable that all able decisions formed in the moment of action soon came to be understood by the expression, as for instance the hitting upon the right point of attack, etc. It is, therefore, not only the physical, but more frequently the mental eye which is meant in coup d’oeil. Naturally, the expression, like the thing, is always more in its place in the field of tactics: still, it must not be wanting in strategy, inasmuch as in it rapid decisions are often necessary. If we strip this conception of that which the expression has given it of the over figurative and restricted, then it amounts simply to the rapid discovery of a truth, which to the ordinary mind is either not visible at all or only becomes so after long examination and reflection.

This notion is common enough in strategic theory to abstract it into a general notion of the commander as what we might call a “fast and frugal” decisionmaker. Decisions of enormous complexity must be made quickly, and often via simple and cognitively frugal decisionmaking processes. Recent work in cognitive science, neuroscience, and artificial intelligence suggests that an important function of cognition is the ability to load-balance the mental resources needed to do so:

After growing up together, and mostly growing apart in the second half of the 20th century, the fields of artificial intelligence (AI), cognitive science, and neuroscience are reconverging on a shared view of the computational foundations of intelligence that promotes valuable cross-disciplinary exchanges on questions, methods, and results. We chart advances over the past several decades that address challenges of perception and action under uncertainty through the lens of computation. Advances include the development of representations and inferential procedures for large-scale probabilistic inference and machinery for enabling reflection and decisions about tradeoffs in effort, precision, and timeliness of computations. These tools are deployed toward the goal of computational rationality: identifying decisions with highest expected utility, while taking into consideration the costs of computation in complex real-world problems in which most relevant calculations can only be approximated.
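One concrete reading of “fast and frugal,” borrowed from the heuristics literature (a Gigerenzer-style “take-the-best” rule) rather than from any strategic theorist: check cues in order of assumed validity and let the first discriminating cue decide. The cues and options below are invented placeholders.

```python
# Sketch of one "fast and frugal" decision rule from the heuristics
# literature (take-the-best): cues are checked in order of assumed
# validity, and the first cue that discriminates decides. The cues and
# options here are invented placeholders, not a model from the text.

def take_the_best(option_a, option_b, cues):
    """Return the option favored by the first discriminating cue.

    cues: list of functions, ordered most-valid first; each maps an
    option to True/False/None (None means the cue is unavailable).
    """
    for cue in cues:
        va, vb = cue(option_a), cue(option_b)
        if va != vb and va is not None and vb is not None:
            return option_a if va else option_b
    return None  # no cue discriminates: guess or fall back

# Toy usage: choose an attack axis by checking cues in order.
axes = {"north": {"roads": True, "cover": False},
        "south": {"roads": True, "cover": True}}
cues = [lambda o: axes[o]["roads"], lambda o: axes[o]["cover"]]
choice = take_the_best("north", "south", cues)  # roads tie; cover decides
```

The appeal for modeling is that the rule examines only as many cues as it needs, which is one way to operationalize a decisionmaker who is frugal with computation.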

Takes in the strategy literature on this ability differ along several lines. In one, we have the notion of the commander as an expert practitioner who has, over time, accumulated a series of “chunks” of related components of knowledge from experience. An expert chess player can recall an enormous amount of low-level details of the game given a stimulus. It is plausible that a theoretical ontology helps organize such knowledge so it can be efficiently learned and processed — hence strategic theory’s emphasis on the role of theory and structured study in guiding the military practitioner. On the other hand, the notion of military genius and intuition could also plausibly be read in terms of affordances defined on objects in the environment. An affordance is a relation between a feature of the environment and the set of possible actions it affords; a commander understands that the affordances inherent in planning a strategic bombing campaign are distinct from those inherent in designing a naval offensive, because the relevant features of the environment differ dramatically between the two. This notion has been used to explain expert play of another class of strategy game.

Finally, John Boyd’s Observe-Orient-Decide-Act (OODA) Loop defines a different kind of relationship to the environment — one in which an agent or system receives sensory input and feedback from the environment that is filtered through an epistemological model or frame. Though Boyd always noted that the OODA Loop, as a theory of competitive behavior, theoretically allows for changing that model due to unacceptable disparities between the model’s predictions and reality, much of individual and organizational behavior can still be described as homeostatic in nature. Individuals and organizations act to satisfy needs set at a fixed reference point — when the entity’s internal state differs from the desired state, the entity performs an operation to return it to the desired state. This adds another wrinkle to the previously discussed issues with describing action selection — basic needs and drives can clash with purposeful goals.
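The homeostatic reading can be sketched as a minimal feedback loop; the state variable, set point, and gain below are arbitrary illustrations, not anything drawn from Boyd.

```python
# Minimal homeostatic control sketch: the entity compares its internal
# state against a fixed reference point and acts to close the gap.
# State variable, set point, and gain are arbitrary illustrations.

def homeostat(state, set_point, gain=0.5, steps=10):
    """Repeatedly move state toward set_point by a fraction of the error."""
    history = [state]
    for _ in range(steps):
        error = set_point - state
        state += gain * error   # corrective action proportional to error
        history.append(state)
    return history

trajectory = homeostat(state=0.0, set_point=10.0)
```

Note what such a loop cannot do by itself: the reference point is fixed, so any purposeful change of goals has to come from outside the loop, which is exactly the wrinkle noted above.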

A bridge between the two lies in explaining the “inner nature” of an object functionally, by arguing that the characteristics of the object map to the conditions and requirements of the environment. This approach, common to behavioral and evolutionary theories, is interesting because it suggests that to model strategy one must first specify in great detail the environment’s “logic” and “grammar” — Clausewitz’s On War specifies a core logic of conflict but allows the specific expression of that logic to vary by environment and context. This is in part where computational modeling as a solution becomes highly risky. Eliot Cohen rightly noted the problems with Stephen Biddle’s Military Power (which, despite manifold flaws, was still a rare, successful, mixed-methods contribution to the strategic studies field) — the mathematical and computational simulations, qualitative case studies, and statistical data sources for it were all drawn from one highly particular mode of conflict.

But this merely raises the question of what a representative calibration and validation dataset would look like in the first place, given the enormous distinctions between the “grammar” of war across time periods and conflicts — a statistical problem that has marred attempts to analyze long-run trends in warfare by problematically assuming that the units and dynamics in each row of the dataset can be analyzed similarly. Hell, sometimes my military historian friends point out that categories such as “partisan war” and “people’s war” — which differ in their combat mechanisms and dependence on nationalism and popular involvement — can both be seen in the same sub-front of the same conflict, concurrently or in parallel. Notions like Stathis Kalyvas’ “ontology of political violence” presume as much.

Additionally, war and peace are stochastic processes; we may never really be able to scientifically pinpoint or explain why they happen or do not happen. In particular, recent quantitative work has provided support for Clausewitz’s notion that the expression of any one conflict is a function of the interactions between a community’s capacity for instrumenting war, its ability to mobilize the relevant community to fight and die, and the play of chance on the battlefield. If we were to plot a community’s ability to do this statistically, the results likely would not be normally distributed.
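One way to see what a heavy-tailed, non-normal alternative implies, using a Pareto distribution as an illustrative stand-in (the exponent is an assumption, not an empirical estimate): a Pareto tail is scale-free, so an event ten times larger is less likely by the same constant factor at every scale, whereas a Gaussian tail decays ever faster.

```python
# Illustration of the non-normality point with a Pareto (power-law)
# stand-in. The shape parameter alpha is an assumption for
# illustration, not an empirical estimate of anything.

def pareto_tail(x, x_min=1.0, alpha=1.5):
    """P(X > x) for a Pareto distribution with scale x_min, shape alpha."""
    return (x_min / x) ** alpha if x >= x_min else 1.0

# Scale-free tails: P(X > 10x) / P(X > x) is the same at every scale x,
# so "ten times bigger" is discounted by the same factor everywhere.
ratio_small = pareto_tail(10.0) / pareto_tail(1.0)
ratio_large = pareto_tail(1000.0) / pareto_tail(100.0)
```

Under a normal distribution that ratio collapses toward zero as x grows; under the power law, extreme conflicts remain a fixed relative possibility, which is why averages can badly mislead.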

Yet averaging and abstracting all of this is key to mean-field assumptions in social science; mean-field theory holds that we can use a simpler model to approximate the behavior of a larger and much more random and complex system of interest. Mean-field assumptions break down quite easily; I have seen this dynamic demonstrated in situations where agent-based simulations are compared to simpler analytical models:

This simple example shows not only how useful ABM is when dealing with inhomogeneous populations and interaction networks but also how to go from a differential equation model to an agent-based model — usually it is the opposite transformation that is used, where the differential equation model is the analytically tractable (but deceivingly so) mean-field version of the agent-based model. What is useful about this “reverse” transformation is that it clearly shows that an agent-based model is increasingly necessary as the degree of inhomogeneity increases in the modeled system.
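The contrast in the quote can be reproduced in miniature with a toy contagion process: the same local dynamic run on a ring network versus its well-mixed mean-field approximation. All parameters (population size, contact rate, steps) are illustrative.

```python
# Toy mean-field-vs-ABM contrast: the same contagion process on a ring
# network versus its well-mixed approximation. All parameters are
# illustrative choices, not calibrated to any real system.

def ring_abm(n=200, steps=25):
    """Agent-based version: agents sit on a ring, and infection spreads
    deterministically to each infected agent's two neighbors per step."""
    infected = {0}
    for _ in range(steps):
        newly = set()
        for agent in infected:
            newly.add((agent - 1) % n)
            newly.add((agent + 1) % n)
        infected |= newly
    return len(infected)

def mean_field(n=200, p=1.0, k=2, steps=25):
    """Well-mixed (mean-field) approximation of the same process: each
    infected agent contacts k random others per step, infecting each
    with probability p, so the infected fraction i grows by p*k*i*(1-i)."""
    i = 1.0 / n                       # one initial infection
    for _ in range(steps):
        i = min(1.0, i + p * k * i * (1.0 - i))
    return round(i * n)
```

With these settings the mean-field version predicts the whole population (200 of 200) is reached, while the network-bound version reaches only 51 agents; the local interaction structure is exactly what the mean-field approximation averages away.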

All of this finally brings us to the core issue inherent in computational modeling of strategy: how to represent the system of interest that the strategist is embedded within. This is, in some ways, an engineering problem. If we take, for example, affordances on strategic objects to be an important thing to simulate, we must build in a mechanism for the simulated commander to perceive such affordances. However, it is also an issue of science and philosophy: what kind of representation of the environment is sufficient? From where will the “data” come? What are the limitations? No one book or journal article will solve all of these problems, but each should at least advance the state of knowledge a little bit further.

Making a choice as to how to structure the environment of the simulation, the levels of analysis of decisionmaking, the agent’s relationship to the environment, and the way in which strategic behavior is represented as a whole is a task that computational modelers have mostly avoided. This is not unique to modeling so much as it is a feature of many approaches to the topic in general. Social science mostly concerns high-level issues such as whether states go to war, or compellence and bargaining processes during war; while these may fruitfully guide strategic research, they are also insufficient. Qualified exceptions may be found in the civil wars literature and here and there in debates on military doctrine and nuclear and conventional deterrence. The problem is that much of this sits at too high a level of abstraction to be useful to strategic researchers and civilian and military practitioners.

Obviously I can’t solve all of these problems in the course of a single doctoral program. Nor will I, quite frankly, solve even a bare minimum portion of them in my lifetime. However, I do have some basic ideas.

First, I think that it is important to think about research goals for computational theory development in strategy primarily through a methodological lens. Unless you plan to simply use simulation to reconstruct historical conflicts in detail, any computational research project will have to do something to join the enormous mass of strategic studies research and theory that lacks code, equations, or even social scientific research norms with the host of social science and computer science research methodologies that have built up over the last few decades. In the 1980s and early 1990s, production systems and symbolic artificial intelligence were used in a very interesting manner by social scientists, cognitive scientists, and computer scientists to study strategy and decision-making. Very little of that research has been revisited, despite its methodological and heuristic value.
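To make the production-system idea concrete, here is a minimal sketch of the kind of condition-action architecture that era of research used: an ordered list of rules matched against a working memory of facts, fired until quiescence. The rules, facts, and the `run_production_system` helper are all invented for illustration.

```python
def run_production_system(memory, rules, max_cycles=10):
    """Repeatedly fire the first rule whose condition matches working memory;
    stop when no rule matches (quiescence) or the cycle limit is reached."""
    for _ in range(max_cycles):
        for condition, action in rules:
            if condition(memory):
                action(memory)
                break
        else:
            break  # no rule matched: the system is quiescent
    return memory

# Toy condition-action rules for a simulated commander (purely illustrative).
rules = [
    (lambda m: 'enemy_sighted' in m and 'outnumbered' in m
               and 'withdraw' not in m,
     lambda m: m.add('withdraw')),
    (lambda m: 'enemy_sighted' in m and 'outnumbered' not in m
               and 'engage' not in m,
     lambda m: m.add('engage')),
]

memory = run_production_system({'enemy_sighted', 'outnumbered'}, rules)
# The first rule fires once, adding 'withdraw'; no further rule matches.
```

The appeal for strategic modeling is that each rule is an inspectable, discussable claim about decision-making, rather than an opaque parameter.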

Moreover, in picking a canvas to use for computational modeling, realism may not initially be important. Clausewitz himself patched together his theory of war by relating qualitative insights from his own experiences and the study of military history to abstract theories and metaphors from physics and probability. Hence, given that Clausewitz’s notion of war holds that conflict has simple rules but complex uncertainty inherent in how actors make choices, games will likely be key tools of theory development. As argued in an upcoming piece by Kenneth Payne (which he graciously provided an offline copy of for me), if we programmed an intelligent system to execute strategy, it likely could dimensionally reduce the complexity of theater strategy into a tabletop wargame-like representation.

Given that both realistic and recreational wargames are the closest link we have between the “games” of game theorists and military-strategic theorists, building models at the level of detail of a simplified tabletop game or computer wargame is probably the best way to start real computational modeling of strategy. As I will detail in future posts, there is a large historical literature that shows how conflict and strategy games — both recreational and professional — have driven key work in wargaming, computer-generated forces, cognitive systems and cognitive modeling, artificial intelligence, many branches of social science (game theory most obviously), and even many non-social disciplines in the so-called “natural” and “hard” sciences and engineering fields. Lacking an umbrella term for all of this, I will dub this giant class of literature “adversarial reasoning.”

Finally, this methodological work should be specified at the interaction level of individual agents. At its most basic, Clausewitz said, war is a duel. At their most basic, social scientific conceptions of strategy are simple zero-sum games. The divide between the two fields is simply their differing takes on the complexity of what it means to “play” the game. As Clausewitz again noted, war is simple, but the simplest things in war are difficult. Before computational modelers create strategic studies work that attempts to simulate the strategies of nations, they ought to work at a granular level to specify the complex strategic interaction between relatively simple systems. The computational dimension would move social science approaches to strategy away from formal representations of strategy (game theory) and empirical ones (process-tracing or regressions) toward an experimental paradigm that builds theory-based models and experiments with complex strategic agents.
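The zero-sum baseline is easy to state in code. The sketch below computes each player's pure-strategy security level for an invented 2x2 payoff matrix; nothing here comes from any particular model in the text.

```python
# Payoffs to the row player in a two-player zero-sum game (invented numbers).
payoffs = [
    [3, -1],   # row strategy 0 against column strategies 0 and 1
    [0,  2],   # row strategy 1
]

def maximin(matrix):
    """Row player's security level: the best worst-case payoff and strategy."""
    worst = [min(row) for row in matrix]
    best = max(range(len(matrix)), key=lambda i: worst[i])
    return best, worst[best]

def minimax_col(matrix):
    """Column player's security level (the column player pays the row player)."""
    cols = list(zip(*matrix))
    worst = [max(c) for c in cols]
    best = min(range(len(cols)), key=lambda j: worst[j])
    return best, worst[best]

row_strat, row_value = maximin(payoffs)      # row 1 guarantees at least 0
col_strat, col_value = minimax_col(payoffs)  # column 1 concedes at most 2
```

Here the row player's maximin value (0) differs from the column player's minimax value (2), so there is no pure-strategy saddle point; the gap between the two security levels is exactly where "playing" the game stops being trivial, even in the simplest formal representation.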

For example, the real-time nuclear strategy simulation DEFCON is a rather simple game compared to the complexities of real-world nuclear strategy. Yet compressing that complexity into a semi-realistic representation (complete with naval, air, and missile forces) that we know people play in the real world gives us several things. We can program complete agents to play the game and see what happens, and we can use real-world data from recreational play to do so. A far more realistic game, the combined arms warfare title Arma 3, has been utilized in real-world military simulations. Even highly fantastical representations of strategy have been utilized in a military context as sandboxes for data farming: run the simulation many times and generate large amounts of data that can be sifted through. And, most importantly, this research method grounds the work in a defined formal game structure, allowing intuition about complex strategic behaviors to be formalized through code while retaining some connection to social science representations of strategy through the shared notion of a “game.” Even very recreational approximations of strategy are not completely disconnected from the psychological dynamics described in the last section; notions of schema, affordance, and chunking have been used to explain the playing of commercial strategy games.
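Data farming of this sort is straightforward to sketch. The toy stochastic duel below is entirely invented; the point is the pattern of sweeping a parameter, running many replications per setting, and harvesting the outcomes as a dataset to sift through.

```python
import random
import statistics

def duel(red, blue, red_hit, blue_hit, rng):
    """Simple stochastic attrition duel between two forces; each round each
    side may destroy one opposing unit. Returns True if red wins."""
    while red > 0 and blue > 0:
        if rng.random() < red_hit:
            blue -= 1
        if blue > 0 and rng.random() < blue_hit:
            red -= 1
    return red > 0

rng = random.Random(7)
results = {}
for red_hit in (0.2, 0.4, 0.6):              # the swept ("farmed") parameter
    wins = [duel(10, 10, red_hit, 0.4, rng) for _ in range(500)]
    results[red_hit] = statistics.mean(wins)  # red's win rate at this setting
```

Even this trivial harness produces the kind of response surface (win rate as a function of a capability parameter) that data-farming exercises examine at far larger scale.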

Is this method a panacea or even a partially sufficient solution? No. But strategic studies is a rather demanding discipline that is inherently suspicious of parsimony and abstraction. In order to deal with those expectations as well as social science’s roots in generalizable causal mechanisms and computational modeling’s elements of mathematics, algorithms, and simulation systems, wargames and other microworlds that bridge such gaps are probably the best place to bring the “computational” to the “strategic” in strategic studies. And by focusing on the issue of action selection, researchers can develop formalisms, theories, and approaches that may be utilized elsewhere. Every journey, after all, begins with small steps.

If a lot of this seems overwhelming, I don’t blame you. I have spent my time — from the time I began my new PhD program in Computational Social Science in fall 2013 up to the present — absorbing an enormous amount of literature in everything from the simulation theory of mind to robotics and real-time control systems, while somehow also spending late nights trying to learn elements of software engineering and programming, computer modeling, experimental design, my substantive coursework in my PhD program, and enough mathematics to make a former qualitative-historical researcher’s eyes bleed. I also have done this while writing, blogging, and issuing long tweetstorms continuously to keep my brain alive and engaged in the abstract task I am pursuing — it can’t all just stay in my head. And I somehow convinced my longtime girlfriend to put up with me long enough to become my wife this year. I’m not saying all of this to brag; it has been one of the most difficult and trying experiences of my life. I made it through out of dogged persistence, and I still have a good deal more time to go, given that I very much started over when I transferred from an IR/political science PhD program to one in computational social science.

I haven’t had much time to work on the research design element, partially due to all of this as well as the fact that I’ve struggled for a while to understand exactly how a strategic studies researcher can do original research with computational social science methods. I have, though, tinkered and experimented for a good deal of the last year starting in late summer 2014. There were many false starts and dead ends, but that’s a part of being a researcher. I am thankful for those who put up with me while I went down the rabbit hole on multiple occasions. You stumble a bunch of times until you find the right way to do it, or at least a way that makes enough sense to start wading into the work even if you don’t completely know how you’ll do it yet.

For me, stumbling on Joanna Bryson’s work on artificial models of natural intelligence (which I have linked to and quoted heavily here) finally helped me put a lot of this together — the notion of action selection proved to be critical, given that Bryson and her students use their agent architectures on both robots/complex AI agents and my PhD program’s in-house social modeling engine, MASON. Another key breakthrough was making contact with Kenneth Payne, who has a book due sometime next year on artificial intelligence and strategy. Payne’s notion of wargames as a dimensional reduction has proved extraordinarily useful in moving from just tinkering and basic experiments to beginning to use theory to craft experimental simulation designs and program agent architectures.

Having run a lot of experiments, tests, and “what-if” explorations in an ad hoc manner while my ideas of how to organize them constantly shifted, I also have gotten some good experience with the engineering issues inherent in building a strategy simulation that at least somewhat fits the theoretical and substantive knowledge I brought with me when I began my PhD in Computational Social Science. So, looking at the problem with new eyes and a much more stable conception of what kinds of contributions and problems I am interested in, I’m pretty excited. It’s a cliché to say that the hard part begins now (I’ve already said it several times, and it’s not as if what I was doing prior to this was easy), but the hard part really does begin now.

Strategies of the Artificial

Strategies of the Artificial is a blog on computer simulation, artificial agents, and technology. A previous incarnation of this blog lived at http://aelkus.github.io. Header is a screenshot of an agent-based simulation in NetLogo.

Written by Adam Elkus

PhD student in Computational Social Science. Fellow at New America Foundation (all content my own). Strategy, simulation, agents. Aspiring cyborg scientist.