Roadblocks to Computational Modeling and Theory Development in Strategy II: Situating the Strategic Subject
As noted before, my intent in writing this blog series was to rapidly iterate through the various literatures and questions I am exploring until I have a clear idea in my own mind of what I am trying to do. Much of my writing on theory and method has been bottled up in notebooks, .txt files, and other scattered media; getting it out online often helps me put it all together, or at the very minimum find the holes in it.
My goals in particular with this series have been to force myself to be as theoretically general as possible in the questions and theory being used, while continuously driving myself toward more specific and narrow applications of that theory. In the last piece, I finally was able to generate a research question that I thought made sense. However, as I looked at it today while dealing with an unexpected illness, I found some nagging problems with it. One thing in particular that the last piece prompted me to do was to examine some of the underlying philosophical assumptions I was using from the perspective of how I would have seen strategy prior to being indoctrinated into computational modeling thinking (“how would Old Me see it?”). That proved to be particularly fruitful.
My previous entries (follow the links back to prior blogs that begin here) focused heavily on methodological and theoretical issues, especially the theoretical and representational roadblocks to modeling strategy. Here, I reformulate my problem as a question of how to situate the strategic subject in a computational model, and of what theoretical assumptions and questions this entails. There are two broad approaches to viewing strategic action in the social sciences (broadly construed). In the following two sections, I will cover two differing conceptions of strategy. One arises from the world that I inhabited before I switched PhD programs. The other arises from the world of computational agent modeling that I now inhabit as a PhD student.
- A notion of strategy as sensemaking process. This brand of theory has resisted quantification and computational methods and focuses very much on examining how strategy is a process of making sense of complex and often highly uncertain (in multiple uses of the term) adversarial competition. Moreover, strategy has been extended by multiple fields outside the military and security studies, making the concept of strategy a “metaphoric network” comprising different overlapping conceptions of the basic framework in multiple domains.
- A very different emphasis on strategy has focused on modeling decision behavior in individuals and organizations as a process of action selection and computation. While the former conception of strategy places enormous value on framing and synthesizing multiple logics and contexts, the latter is rooted in the mechanical description and simulation of human and animal decision behavior. It relies on an overlapping set of disciplines concerned with information processing, heuristics and search, algorithmic and utility-theoretic behavior representations, and the use of games and microworlds for simulation.
After explaining this, I will explicate a revised assessment of the barriers to representing strategy from the perspective of the philosophy of computation and simulation and then outline how they point to two divergent classes of research question for me.
I: Strategy as Sensemaking and Metaphoric Network
All social struggles can be viewed as dynamical interactions between entities empowered with discursive and instrumental repertoires of contention. Hence one way to view strategy is as a process of sensemaking, studied through the use of deep context in reconstructing how strategy is constructed, executed, and evaluated. To understand why, it is important to look at the views of many pioneering social theorists about social reality itself (as seen in the writings of Wilhelm Dilthey):
Our knowledge of the social world is not a “copy”; it is an abstract representation. This observation seems to be analogous to the obvious point that a verbal description of an apple is not similar to the apple; rather it is a syntactic construction that attributes characteristics to the features of the apple. The next several sentences in the second paragraph seem to change the subject slightly; Dilthey distinguishes between “copying or representing” and “interpreting and locating in terms of a meaning system.” This point is understandable in terms of the hermeneutic method: discover the meaningful relationships among elements of the text (or ensemble of actions). The “re-feeling and re-construing” seems to be an expression of the method of verstehen: to reconstruct the meaning of an action by placing oneself as fully in the position of the actor as possible. And the final two sentences seem to suggest a refinement of knowledge through the discovery of finer detail in the interconnections among events and their connections to a system of meaning in the world of lived experience.
Clifford Geertz famously defined the task of the social sciences as a search for meaning: to examine the complex network of symbols that characterize a social and cultural system. Geertz’s famous essay on the Balinese cockfight is a case in point — an apparently simple game of recreation and gambling is revealed to be a latent expression of how villagers interpret power dynamics in the village. In Seeing Like a State, James C. Scott makes a similar argument to explain why scientific development and social engineering schemes have failed. “High modernist” movements seek to transform the distributed and often highly tacit and particular knowledge that characterizes a social system into a form that is “legible” and thus amenable to quantitative analysis and process control. As Venkatesh Rao observed, “legibility” may be viewed as a synonym for “machine-readable.” In order to understand and explain human behavior in a social context, social scientists have to reconstruct and explore these webs of meaning.
This is, obviously, very different from how most people think about “science,” but my old professor Patrick Jackson made a strong case that it can be grounded in debates in the philosophy of science. Sometimes knowledge cannot be extracted without paying a steep representational price; before we can generate robust and parsimonious theories and predictions, we need to take a deep dive first, even if the first results are messy. So much of individual and social behavior is opaque and murky to us, especially in areas (such as strategy) where the data often does not fit the questions we want to ask. But back to Scott and Seeing Like A State.
What we really want to know about the explanation of behavior, Scott argues, is not “legible” from the perspective of a bureaucratic administrator who reads only statistical reports and other bureaucratic minutiae; it must be teased out and described “thickly.” This debate is not really new. It is somewhat of a cliche to note this, but a variety of core debates in the sciences impinging on the analysis of human behavior and society boil down to how we represent the key problems of explaining and replicating human behavior. For Wittgenstein and a host of others, the problem lay in explaining the very nature of language:
Language did not have such a fixed, eternal relation to reality bound by logic. The process of “measuring” the truth of a statement against reality was neither objective nor cleanly delineated. The meaning of what we say can’t be abstracted away from the context in which we say it: “We are unable clearly to circumscribe the concepts we use; not because we don’t know their real definition, but because there is no real ‘definition’ to them,” Wittgenstein wrote. Instead, our speech acts are grounded in a set of social practices.
While strategy can of course in principle be explained by reference to commonly understood theories and knowledge about war, many have argued that it is also a matter of context. This, given how research in strategy and security studies often hews to stilted, folk-rationalist assumptions about the balancing of “ends, ways, and means,” may seem difficult to believe. However, upon closer examination we see a lot of emphasis on the notion of the strategist (or strategic entity) as engaging in a struggle to find a linkage between the low-level contexts of armed violence and desired political effects. While tactics may be governed by the “rules of the game,” strategy is an attempt to change those rules and manipulate them to one’s advantage. Moreover, as it is “done as tactics,” strategy itself is often little more than an abstract belief as to how to achieve a desired end. Strategy encompasses (though is not necessarily equivalent to) both a narrative of how to achieve desired ends in execution and a means of gaining some degree of control over an adversary’s decision making and/or generating a competitive advantage.
Colin Gray, for example, has argued that strategists cannot afford to neglect culture, morality, and bureaucratic and political constraints. Eliot Cohen examined the “unequal dialogue” of civil-military relations and how it shapes strategy. Lawrence Freedman has noted the importance of cultural and social understandings and stereotypes in how strategy is formulated. Alastair Johnston examined the role of culture and belief in the formation of Chinese strategy, and other work has sought to explain topics ranging from the construction of geopolitical interests to the way in which overarching organizational and political biases framed consideration of actions in a negative and counterproductive manner. The notion of strategy as a creative design process links it to the idea of the professional as a reflective practitioner who continuously re-evaluates and questions their own ways of knowing and is not content merely to rely on the technics and training of their background. John Boyd and others viewed strategy in such a manner; Boydian theory is not so much an adaptive control loop as an epistemological framework for understanding how to develop a template for creative evolution over time.
This is why David Betz describes strategy (as well as the study of it) with the language of sensemaking, uncertainty, novelty, and creative thinking:
The job of the strategist is to make up on the fly, as it were, as best as he or she is able on the basis of incomplete information of a constantly evolving situation and given a certain range of resources, a route towards the achievement of a given policy. Ideally, the latter is clearly articulated and plausible, though by and large nowadays that has not been the case. War is inherently and irremediably a gamble.
Betz goes on to note the importance of strategy and security analysts being “magpies” who steal promiscuously from many disciplines but elevate history and political theory to a place of prominence, and who root themselves experientially in the knowledge necessary to understand and analyze strategic phenomena. Likewise, Clausewitz famously argued for the study of history as a way of immersing oneself in the goals, frames, experiences, and choices of prior soldiers and statesmen. Frank Hoffman makes a similar argument rooted in creative design, coherent synthesis, and other ideas:
[T]here is little “art” in skill, which implies something trainable by anyone rather than acquired by dint of a rigorous education and the experiential practice required to master strategy which has far more art to it. I subscribe to Bernard Brodie’s construct of the art and science of strategy, but emphasize the art. So instead of “select” I would propose “design.” Second, I think “balance” is also soft. “Balance” is useful but not optimal and doesn’t connote the inherent logic of a strategic option. … I prefer “coherently link” to ensure that the ways/means are the proper method relative to a desired outcome. The Ends/Ways/Means triptych can be in balance and ultimately irrelevant to the problem at hand. But if they are coherently linked, they are tied to generating a solution to the problem that has been framed.
The key point all of these thinkers make is that context is necessary to understand and analyze strategy and its related phenomena. Clausewitz himself advocated a method of “critical analysis” that traces effects back to their causes through the laborious reconstruction of the context of any given strategic or tactical situation. As one can see, this is a predominantly qualitative form of research based on careful case analysis, process-tracing, and the use of historical citation and archive-searching to demonstrate causal claims. It is rooted in the assumption that strategy cannot simply be deduced through revealed preference; archival research can often uncover surprising insights about the way in which key decisions were made during the pre-WWI crises, the Cuban Missile Crisis, and the 1950s showdown in Berlin. The focus of the analysis may be a particular institution responsible for formulating a key stratagem or the evaluation of the totality of a strategic entity and its processes in “net assessment” form.
Finally, Christopher Paparone has argued that strategy has become a “metaphoric network” that encompasses the understandings, meanings, and processes of a number of other related disciplines. I reproduce several of his diagrams to illustrate his argument:
Personally, this is increasingly my own view of what “strategy” is, a kind of broad “container” for different overlapping representations of an effectively similar activity:
From the Schönian perspective, the morphological process affecting the meaning of strategy seems to be anchored in the development of theories of action through multifaceted contextualizations and recontextualizations of how and what to do when faced with important, novel situations.[10] In plainer English, knowledge communities adapt the meaning of strategy as they reflect in and on the new situations they face and reconstitute its meaning. The emergent contexts warrant further displacement of the meaning of strategy and dynamically shared meaning among the disciplines (various contexts) and layers of new associated metaphors are themselves extended and displaced in the emergent metaphoric network. The displaced ideas of strategy that took on elaborated meanings in other fields are projected back and forth with military studies. Extended language constructions (e.g., the noun, strategy, becomes an adjective, “strategic”) emerge in the military and other communities of practice, such as: “strategic leaders;” “strategic vision;” “strategic end state;” and, “strategic planning.” These extended and displaced meanings are today found in the highest level conceptualizations of all US Defense Department war colleges and are elaborated to the point of serving as their raison d’être (Figure 3).
Lest you think that Paparone is exaggerating, look at his mapping of strategy as conceptualized by the US defense community:
Notice any similarities? Today’s communities of strategic thinkers generally boast eclectic influences, from “competitive strategies” (borrowed from Michael Porter) to cutting-edge developments in “natural philosophy” derived from physics. Hence I’m perhaps most comfortable viewing the “strategy” I spent 2006–2011 continuously learning about as not one thing but rather a network of related concepts, each of which boasts its own idea and logic of practice. In terms of how strategy is taught and discussed, I predict that Paparone’s conception (or something close to it) will eventually become the dominant one.
II: Games, Machines, and Ghosts in the [Strategic] Shell
Now, having explained the kind of strategy that I assume most of my audience is already familiar with, I turn to a different conception, one that many in the former camp tend to be more or less antagonistic toward. Consider the basic representation of the essential condition of strategy: making goal-oriented choices that depend on the actions of other parties. While in, say, strategic studies this takes the form of military violence and competition, it has also been a perennial subject of study in the other sciences. For example, notions of a social contract can be explained as a process of repeated games that eventually yield societal convergence to a set of norms and expectations. In a game, each agent’s strategy depends on what the other agents do.
Note that I said “game” and used a mathematical term — “convergence.” The ability to concisely and simply represent decision processes in mathematical and algorithmic languages has its roots in the use of mathematics to formally describe war and games. Two influences have been present in this body of literature. The first is a broadly behavioralist conception of social science articulated by Milton Friedman and others: theories should contain only the smallest amount of detail necessary to explain and predict observed behavior. Second, a utility-theoretic view has been prominent in rational choice, game theory, and other similar disciplines. By creating abstract decision problems and assigning subjective utilities to choices, we can model how individuals and organizations make decisions.
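To make the utility-theoretic view concrete, here is a minimal sketch of such an abstract decision problem. The scenario, option names, and payoff numbers are all invented for illustration; the point is only the mechanics of choosing by subjective expected utility.

```python
# A toy decision problem: each option is a lottery of (probability, utility)
# pairs, and the actor picks whichever option maximizes expected utility.
# Scenario and numbers are hypothetical.

options = {
    'escalate':  [(0.3, 10.0), (0.7, -8.0)],
    'negotiate': [(0.9,  2.0), (0.1, -1.0)],
}

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

choice = max(options, key=lambda name: expected_utility(options[name]))
print(choice)  # -> 'negotiate' (EU 1.7 beats escalate's EU -2.6)
```

Everything interesting about the actor has been compressed into a few numbers per outcome, which is precisely the parsimony the behavioralist view demands.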
Both viewpoints are certainly evident in the way that Cold War social science represented strategic problems. Nuclear war had never been practiced before, and soldiers’ and statesmen’s emphasis shifted from the prosecution of war to its prevention and control. Games and models were a way of understanding how interactions would play out over time by creating an abstract mathematical representation or simulation microworld and examining the stability properties of each scenario. While this has often been denounced as a perversion of strategy’s complexity, one may also note that Clausewitz himself toyed with such a notion in his language about probability and chance in war. Moreover, in contrast to the often reactionary romanticism of defense intellectuals, the cold and rational logic of game and choice theorists about the way in which power was contested was more compatible with politics and policy in a democratic state. While this may seem paradoxical, philosophers like James Burnham argued that by laying bare the formal logic of power and coercion, “Machiavellians” created the conditions for compromise rather than Schmittian all-out ideological war.
However, this is not as important as what happened when the computer age coincided with the rise of such theories and models. Alan Turing may be regarded as the true father of computational social science in that his notion of the “imitation game” argued that a machine could functionally replicate the human process of thinking through a test of its capacity for mimicking human social interaction. Turing also worked on programming a computer to play the strategic game of chess. In turn, his colleague Claude Shannon eschewed hand-coding responses per se and described a general method by which a machine could select chess moves. This had been foreshadowed by the early 1900s prediction that chess could be mechanized, but it was one thing to theorize about it mathematically and another to produce a working prototype. This development had seismic implications for social science. After all, game theory in the social sciences grew out of the mathematical study of strategy in chess and other zero-sum recreational games.
However, chess is a deterministic game of perfect information. War and other phenomena of interest to the social scientist involve bluffing and other probabilistic dynamics. The canonical and cliched game theory problem — the Prisoner’s Dilemma — turns on how individuals guess at someone else’s choice when they have only a vague idea of what their opponent is likely to do. When we look at games — such as rock-paper-scissors — that pertain more narrowly to strategic choice in conflict situations, the problem becomes more stochastic. In repeated two-player zero-sum games such as rock-paper-scissors, where we assume that neither player is classically rational and each must learn a good strategy over time, strategies may fail to converge to a Nash equilibrium and may even be chaotic in nature. Returning to the Prisoner’s Dilemma, what if we assume there is a population of strategies interacting with each other in a kind of ecosystem? At this point we reach the limits of analytical tractability and begin to use computers to simulate the process.
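To make that non-convergence concrete, here is a minimal sketch using replicator dynamics, one standard model of population-level learning. The payoff matrix is the usual rock-paper-scissors one; the step size and iteration count are arbitrary choices of mine.

```python
import numpy as np

# Rock-paper-scissors payoffs for the row strategy: win = 1, loss = -1, tie = 0.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

def replicator_step(x, dt=0.01):
    """One Euler step of the replicator dynamic: dx_i/dt = x_i * ((Ax)_i - x.Ax)."""
    fitness = A @ x
    return x + dt * x * (fitness - x @ fitness)

x = np.array([0.5, 0.3, 0.2])  # initial shares of rock, paper, scissors players
for _ in range(100_000):
    x = replicator_step(x)

# The mix keeps orbiting the mixed Nash equilibrium (1/3, 1/3, 1/3) rather
# than converging to it; the discretized steps even spiral slowly outward.
print(x)
```

In continuous time the trajectory traces closed orbits around the equilibrium and never settles, which is exactly the failure of convergence alluded to above.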
This leads to an interesting twist in how to characterize models. A strategy, in the game-theoretic sense, is (crudely speaking) an algorithm for selecting actions based on what the opponent will do. In the Prisoner’s Dilemma computer tournament, the TIT-FOR-TAT strategy is programmed to reward cooperative behavior and punish cheating. What if we can view biological behavior more broadly from such a perspective? Evolutionary game theory and other similar disciplines came to generate long-run explanations for behavior in terms of programs that set out simple instructions for how an agent ought to act. The aforementioned rock-paper-scissors game can be viewed as a kind of evolutionary game as well. At an extreme, some of this literature casts entities as little more than processors for internal programs; the notion of humans and other beings as encodings of instructions rather than content.
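To see how literally “strategy as algorithm” can be taken, here is a minimal sketch of an iterated Prisoner’s Dilemma matchup in the spirit of Axelrod’s tournament. The payoff values are the standard ones; the function names and the simple match harness are my own illustration.

```python
# Payoffs (row, column) for each pair of moves: C = cooperate, D = defect.
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then mirror the opponent's last move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # TFT is exploited once, then retaliates every round
```

The entire “agent” here is a two-line function: reward cooperation, punish cheating. Nothing about its inner life survives the translation.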
An obvious problem, however, is that computational capacity is limited in time and space, and organisms must also arbitrate between competing goals and behaviors. Turing’s initial work was followed by a later focus (by Simon and others) on heuristic problem-solving, simply because exhaustive search alone cannot succeed. Artificial intelligence has classically been regarded as a means of examining the requirements for producing intelligent behavior. There are broadly two perspectives on this: a cognitivist focus on how individuals represent and manipulate the external world through symbols, and an emergent approach concerned with how the individual itself adapts to the environment through repeated interaction with it. Both have arguably been a strong part of social science research in various ways.
A large class of human behavior can be understood as rule-following in nature and dynamics. One of the many ways Graham Allison analyzed the Cuban Missile Crisis was from the perspective of so-called “standard operating procedures” — rules, procedures, and programs that determined how different organizations solved problems. Simon developed a science of “procedural rationality” — how efficient heuristics and representations are used to prune the search space of possible actions — in part from his work on RAND missile defense simulations. On the other hand, another viewpoint has always rested on the idea of control over behavior and adaptation to an environment. Norbert Wiener famously used this theory to develop automatic air defense systems, but it may also be seen in the explanation of NATO decisionmaking from a “cybernetic” perspective. Agent-based models of rebellion and other topics use simple agents with primarily reactive decision behaviors determined by the nature of the environment.
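As an illustration of how spare such reactive agents are, here is a minimal sketch of the decision rule used in Epstein-style agent-based models of civil violence. The functional form loosely follows Epstein’s 2002 model; the parameter values are invented.

```python
import random

def decide(hardship, legitimacy, risk_aversion, arrest_prob, threshold=0.1):
    """Rebel when grievance, net of perceived risk, exceeds a fixed threshold."""
    grievance = hardship * (1.0 - legitimacy)   # how aggrieved the agent is
    net_risk = risk_aversion * arrest_prob      # how dangerous rebelling looks
    return 'rebel' if grievance - net_risk > threshold else 'quiet'

# Each agent is just a pair of fixed traits; all variation in behavior comes
# from the environment (regime legitimacy, local probability of arrest).
agent = {'hardship': random.random(), 'risk_aversion': random.random()}
print(decide(agent['hardship'], legitimacy=0.6,
             risk_aversion=agent['risk_aversion'], arrest_prob=0.2))
```

The agent neither plans nor reflects; its “strategy” is a fixed function of the local environment, which is exactly what makes such models tractable and exactly what the sensemaking school finds impoverished.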
As noted elsewhere, what all of this has in common is an explanation of action from a mechanical perspective. Behavior is machine-like in the sense of rules, algorithms, and utility optimization, and the most important issue is one of action selection (“what will I do next?”). The biggest problems that characterize this domain are large state/action spaces, partial observability, stochastic outcomes of behaviors, inherent limitations on processing power, goals and behaviors that compete for resource allocations, and often non-stationary environments and fitness landscapes.
III: The Barriers to Computational Strategy: Computational Theory and Modeling/Lady Lovelace’s Objection
How useful is the first strategy perspective (sensemaking and metaphoric network) versus the other one (games, utility, machines, computational intelligence)? It depends very much on the research question. However, let us simply say that we are agnostic about whether to take up a computational representation (the idea that a computational theory, as opposed to just computational methods, can describe the object of interest) and simply want to use computational methods to study strategy. After all, wargaming and simulation are time-honored tools in strategic research and practice, and sufficiently advanced computational agent models could serve the same function. So what would be involved in doing that? What would we do if we put our Wohlstetter or (post-Vietnam, chastened) McNamara hats on?
A key criticism of computational models is that they do not tell us anything we do not already know. Because the modeler makes stylized assumptions beforehand in how the model and submodels are programmed, computational models are often analogized to a video-game-like representation of reality. That is not true, for the following reasons. First, formalizing a theory can yield data and sometimes counter-intuitive observable predictions and implications, which purely verbal theories cannot. Second, the state of the art in computational modeling relies on heterogeneous agents with bounded rationality and often examines problems with multiple equilibria. This is a source of interesting variability in outcomes at multiple levels of explanation. Because many computational models are stochastic in nature, qualitative system behaviors may often be observed roughly, but no two simulations produce exactly the same results.
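A minimal illustration of that last point, using a toy stand-in for a stochastic simulation (a biased random walk of my own invention rather than any published model):

```python
import random

def run(seed, steps=1000, drift=0.1):
    """One realization of a biased random walk."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        x += drift + rng.gauss(0, 1)
    return x

# Same model, different random seeds: the qualitative behavior (upward drift)
# recurs in every run, but no two runs produce the same trajectory or endpoint.
print([round(run(seed), 1) for seed in range(5)])
```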
Finally, the idea of computation itself as theory presents enormous possibilities for thinking about the world. The sciences are increasingly adopting a “computational lens,” and computational, psychological, and social theories of decision making and rationality are converging. A “sciences of the artificial” that uses general-purpose computers to understand how biological and technical artifacts manifest intelligent behavior is still in its infancy, and as our understanding of computation and ability to exploit it increase, questions that would otherwise be out of reach become exciting research possibilities. However, this incredible gift of computation does not come without a severe cost. This brings us to the challenge of representing the strategic context and its relevant actors, which turns out to be an enormous philosophical and practical problem. When questioning how to represent strategy computationally, a core problem we run into is how to deal with the role of computation as both theory and means of scientific research.
To understand a bit more about this, consider the arguments made in the philosophy of mind about symbol-processing and p-zombies. Searle famously argued that you could put a human being in a room with a set of instructions concerning how to interpret and respond to Chinese characters. This human could certainly give off the appearance of “knowing” Chinese, but in reality would just be a meat-based computer with no intrinsic knowledge of the language. A far more radical argument is the idea of p-zombies, “philosophical zombies” that cannot be told apart from normal humans but nonetheless lack the basic sensation and feelings inherent in conscious human experience. If you prick them, they will bleed, but they lack any conception of what “pain” means.
Another criticism comes squarely out of social science. In his work on artificial intelligence and society, Harry Collins distinguishes between two types of actions: “mimeomorphic” actions, which do not generally vary with social context, and “polimorphic” actions, which are expected to be conditional on social context. Machines, Collins argued, can mimic mimeomorphic actions (such as swinging a golf club) but lack the social embeddedness necessary to perform polimorphic actions without a human programmer providing the context, either explicitly (by programming or teaching the machine to perform) or implicitly (by embedding the machine in a particular social context where its own adaptive functioning is determined by that context).
Finally, algorithms themselves have key problems in simulating creative decision making in social contexts. I now summarize a long paper that a colleague showed me on the representational problems of algorithms in agent-based models. Agent-based models assign agents a single, homogeneous decision rule to describe individual decision behavior, yet there are at least four different classifications of decision behavior one might use in a simulation, with little guidance on how to choose between them. This poses a paradox: the strategy space that determines the nature of the behaviors to be explained changes as the situation evolves, but the agent’s own decisionmaking remains fixed throughout the simulation period. A deterministic model cannot simulate creative decision making. In general there are inherent problems with Turing-complete systems in modeling creative decisionmaking in which an agent must be adaptive in choosing its strategy spaces and decision rules.
No matter which decision theory is used to describe the decision rules and strategy spaces of agents in complex systems, assigning decision rules and strategy spaces to agents poses significant challenges. The phenomena of preference reversal, irrational decision making, incomplete knowledge of what agents want, and so forth demonstrate that there will always be some statistical probability that the algorithms trying to represent decision making by adaptive agents in complex systems will contain error.
Second, there is a “framing” or “affordance” problem. In AI and philosophy, the frame problem concerns the inability of a programmer to anticipate every single scenario. This is dealt with practically by writing some rule that defines how an agent’s lower-level algorithmic rules should change when a surprising or novel problem emerges. However, this leads to infinite regress as we find ourselves writing rules, meta-rules, meta-meta-rules, and so forth. There is a persuasive argument to be made that until a cognitive agent can be made to derive a set of schemas and possibilities from the environment, and until we relinquish the assumption of behavior as algorithmic, we will be stuck with this problem as simulations become more realistic.
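A toy illustration of the regress (mine, not the paper’s): each layer of rules presupposes a higher layer that says when it applies.

```python
base_rules = {'enemy_sighted': 'attack', 'low_supplies': 'withdraw'}
meta_rules = {'no_base_rule_applies': 'improvise_new_rule'}
# ...but 'improvise_new_rule' was itself written in advance, so a situation
# the meta-rules do not cover demands meta-meta-rules, and so on upward.

def act(situation):
    if situation in base_rules:
        return base_rules[situation]
    # The fallback was also fixed before the simulation ran: the agent never
    # escapes the hierarchy of contingencies its programmer anticipated.
    return meta_rules['no_base_rule_applies']

print(act('enemy_sighted'))   # -> 'attack'
print(act('alien_invasion'))  # -> 'improvise_new_rule' (still pre-scripted)
```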
On a more abstract note, another problem lies in the nature of using programming languages to simulate social context. A big question in social science and strategy is how order and coherence emerge from systems of self-interested agents. The authors of the paper I am summarizing are not optimistic about the use of programming languages to simulate this:
We have three propositions with respect to the intractability of language compatibility across human, programming and machine languages that are required for seamlessly operationalizing Turing-complete machines in a foundational programming language sense. First, there cannot be algorithmic, universal solutions to the problems formulated in a decision theoretical discourse rich enough to include meaningful discussion of values and long-term priorities of humans, i.e. well defined expected utility functions for each intelligent agent in the system (P1). Second, meta-decision theoretical discourse, which compares multiple decision theories such as empiricist versus causal, and descriptive versus prescriptive, is an open ended process with no fixed meta-rules (P2). Third, Meta-decisions cannot be formalized since these questions cannot be formalized or resolved by any empirical examination. Meta-theoretical decisions must therefore be understood as open-ended, pragmatic proposals, to be judged against social values of the (artificial) societies being modeled. We need, that is, a normative meta-meta-language to truly address, in a rational way, meta-theoretical decisions. Meta-meta-language thus represents process-based heuristics and not algorithmic deductions or inductions (P3).
This problem, generally speaking, has to do with the translation of human knowledge and discourse into high-level programming languages, and I would do it a disservice to try to summarize it here. Read the whole thing. All of this is, finally, worsened by several other problems. First, existing models do not handle spatial and temporal discounting very well, despite increasing recognition that discounting is key to understanding human and animal decisionmaking. Second, as noted in the preamble at the beginning of the section, there are enormous problems inherent in representing features of human novelty and creativity that stem from conscious experience (think about Gatsby and the “green light,” for example) using modern computational techniques.
All of this, however, is just a more elaborate reconceptualization of Lady Lovelace’s objection to computing: “The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform.” In other words, can we really justify the statement that a simulation produces novel, interesting, and useful results that we did not previously bake into the program? Certainly, the more complex a computer system and the more intricate its connection to the outside world, the more interesting and surprising its behaviors may be. However, there is a distinction between being surprised by the novel consequences of an algorithm and a machine producing novelty. One can, for example, feed a computer an enormous database of chess strategies and tactics and give it an algorithm for generating a strategic plan, and the computer may find strategies that humans have never seen before. Depending on how opaque the computational process is, one may or may not be able to reconstruct exactly how the machine produced the output. But whether a process is opaque or explainable is separate from the issue of whether it really satisfies the Lovelace Test.
Still, as Selmer Bringsjord recently observed, that may not matter at all. We can engineer cognitive agents to pass narrowly defined tests of realism, and that may be enough for our purposes. Alternatively, we can attempt to find ways of using learning and non-determinism to deal with some of the problems noted above. Or we could just reject the premises of the entire chain of thought I have enumerated in this section. While the biological-system-as-computer analogy may be a bad one, it is not as if — for the purposes of modeling and research — there are many other options to be had. By the same token, there may be no mystery to solve at all: the issues described above may arise from parallel processes in biological systems that we simply do not understand yet. Who knows? These are all, like the issues previously noted in the other “Roadblocks” post, too enormous for me to solve in my doctorate or my lifetime. I have every intention of leaving them to other people to fully explore. I just want to bite off a chunk small enough to be manageable research-wise but nonetheless stimulating and motivating enough for me to be excited about doing my experiments and getting results.
Conclusion: The Machine Question in Simulating Strategy
Looping back to the two categories of strategic theory I have described above, I would say that, at least for me (whatever problems you can derive from this are fine as long as you cite me :) ), there are two interesting core research problems that stem from strategy and computation. I hope to spend some serious time working on both of them throughout the rest of my doctorate using computational tools.
- How can computational theory and means be used to simulate the behavior of strategic agents? This is intentionally vague, but I again point to the paper that Kenneth Payne has under consideration about strategy and artificial intelligence. This is both a theoretical task (imagining the linkages between strategic theory and practice and computation) and a practical one (engineering models of processes thought to approximate core elements of strategy). Here, the challenge lies in finding a useful representation of the problems of strategy — keeping in mind everything that has been said about strategy as sensemaking and as a metaphoric network — and then using cutting-edge tools in computational cognitive and neural modeling, as well as artificial intelligence and intelligent agents more broadly, to simulate complex reasoning and behavior in this domain. Bringsjord et al have an interesting paper that tries to do this by creating a framework that aims to show how the simulation theory of mind may or may not be useful for explaining nuclear deterrence. Another paper of interest uses probabilistic programming to simulate theory of mind and nested reasoning in simple communication and decision scenarios with some strategic content. Finally, at my own university, GMU, we have built a Clausewitzian AI that, using an ontology specified by the Army War College and a process of iterative interaction with Army officers, can simulate the US military’s doctrinal interpretation of Clausewitz’s Center of Gravity.
- How can computational theory and means be used to examine the action selection behavior of strategic agents in a representative game or microworld? Here, the focus would be more narrowly on action selection, computation, and decisionmaking. Payne has noted that, hypothetically, a strategic agent could reduce a theater of operations to a tabletop or digital strategy game representation. In the past, Herbert Simon and others argued that the game of chess could shed light on real-world decisionmaking in complex scenarios. More recently, there have been observed correspondences between the problems facing “complex agents” in modern digital games and those in robotics environments. Both new and old games inform thinking and research about expertise, cognition, and skill in emergency scenarios, as well as mathematical and computational modeling of “adversarial reasoning” and “adversarial problem-solving.” While such environments may only simulate a small portion of the overall picture, they make no pretense of being correct representations of strategic realities, and thus may be used to generate lower-level ideas (such as “bounded rationality”) that apply to strategic behavior in the real world but might not otherwise have been discovered were it not for a purposefully unrealistic game/microworld environment. My own personal interest — for reasons of both theoretical interest and experimental convenience — is in using the real-time strategy game Starcraft as such a domain. In previous entries I have discussed my reasons for being interested in it, and I will not rehash them here.
As you can see, I am more or less trying to be honest about the “hard problem” of computational modeling of strategy and (despite all of this) somewhat humble in my ideas of what I can try to achieve. Other disciplines can promise predictive results or at the very minimum formally prove their ideas. Computational modeling of a topic as complex as strategy can do neither at the moment. The best I can do, in the near term, is to work on better ways of matching theoretical expectations to methodological tools and to examine what kinds of interesting theoretical ideas can be explored in fairly limited microworlds that offer a basic approximation of things relevant to strategic practice. I hope you’ll all come along for the ride as I stumble my way through trying to do it.