A Compressed Representation of Computational Strategy and Its Problems
Now, I compress all of the preceding entries into this more basic summary of the problem I am tackling and the theoretical and methodological issues involved. If you want to see the entries that led up to this on a timeline, this is the last entry before this one and this is the first entry. Depending on how you like to read, you can do forward or backward chaining to see how I arrived at this compressed representation.
I: Computational Study of Strategy, Tactics, and Adversarial Behavior Writ Large
Adversarial reasoning and problem-solving are an important aspect of our lives. All of us face strategic and tactical problems that force us to find a way of linking the aims we seek with actions designed to bring them about, in the face of an adversary that seeks to thwart us. This process occurs at both individual and collective levels of organization, from the simplest of creatures such as ants and flies to the most advanced of human collective entities such as armies and states. There are several ways that we may use the computer to investigate such a process via computational modeling. Alan Turing argued that a machine may lack the exact biological substrate that produces intelligent behavior, but that the process of production may be reduced to a series of mechanical operations. Today, the machine functions as a way to augment the intelligent approach of the computational modeler, but how it does so has several important variants depending on the modeler's approach. Here, I take computational modeling to be the investigation, using a computer, of an environment of interest, a task, and artifacts that make use of both to achieve their goals through decision processes. I use the language of "simulacra" and "simulation" loosely, as adapted from Baudrillard's book on the subject.
The first approach is simulacra. A chess program is a copy of how humans behave in the chess task environment, even if it utilizes processes (such as the adversarial search formalism) that are at best a crude approximation of complex human reasoning. The second approach available to the modeler is simulation: we create a mechanical imitation of the environment, the task, and the entity within it. For example, if we choose to abstract how humans behave in political decision-making into a synthetic microworld, we have created a copy without an original, since making a program that makes decisions in a model of the Cuban Missile Crisis is not the same thing as giving a program the ability to make decisions during a full-scale replica of the Cuban Missile Crisis. The latter is experimenting with a synthetic artifact in a real environment; the former is truly creating a "copy without an original" by constructing a program that adapts to a synthetic environment. I argue that there are sound reasons for doing both.
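To make the "simulacra" case concrete, here is a minimal sketch of the adversarial search formalism mentioned above: plain fixed-depth minimax over a generic two-player game interface. The `game` object and its methods are hypothetical placeholders rather than any particular library's API; a real chess program would add alpha-beta pruning, move ordering, and much else.

```python
# A minimal sketch of adversarial search: fixed-depth minimax.
# `game` is a hypothetical interface; any two-player, zero-sum,
# perfect-information game could be plugged in behind it.

def minimax(game, state, depth, maximizing):
    """Return the minimax value of `state`, looking `depth` plies ahead."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)  # heuristic score, maximizer's perspective
    values = (
        minimax(game, game.result(state, action), depth - 1, not maximizing)
        for action in game.actions(state)
    )
    return max(values) if maximizing else min(values)
```

The point of the sketch is the mismatch it makes visible: nothing in this exhaustive lookahead resembles how a human plays chess, yet the program still copies human performance in the task environment.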
Why use a real-world environment, even if it may seem trivial? Carl von Clausewitz argued that conflict may be regarded as a game with simple rules but complex probabilistic calculations inherent in the choice of actions under uncertainty, a quality observed in recreational games such as poker and other games of imperfect information. Herbert Simon argued as well that all complex social action, in both individuals and organizations, is underpinned by limited human information-processing in problem-solving; Simon studied chess because it illustrated the principles of "bounded" or "procedural" rationality: humans and machines cannot reason about the entire state space and must use heuristics and efficient knowledge representation to act. There is also some evidence that both military wargaming and the choice-theoretic school of social science evolved out of mathematical studies of chess and other recreational games of strategy and conflict. Hence games might be regarded as a way to understand basic mechanisms that form part of the larger whole of complex adversarial decision-making. These games may have varying degrees of realism (ranging from recreational conflict games such as chess, to the simplified military strategy game of StarCraft, to high-level wargames used to train decision makers) but ought to be regarded simply as socially constructed formal structures (consisting of goals, rules, allowable actions, and scoring criteria) that test human skill and reasoning in various ways. By constructing agents that can perform in such domains, we at least gain an existence proof: the mechanism of performance we use to replicate human behavior could potentially explain human behavior in the domain of interest. Granted, the choice of domain can and will be disputed (see the objections to chess in artificial intelligence and robotics), but no one domain is perfectly representative of the complex thing that is human intelligent behavior.
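The "formal structure" view in that paragraph can be written down almost literally. A minimal sketch, assuming nothing beyond the four criteria named there (all names here are mine, not drawn from any library):

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable

State = Any
Action = Any

@dataclass(frozen=True)
class FormalGame:
    """A game reduced to the four socially constructed criteria above."""
    goal_test: Callable[[State], bool]                  # goals: is the game decided?
    legal_actions: Callable[[State], Iterable[Action]]  # rules / allowable actions
    transition: Callable[[State, Action], State]        # rules: what an action does
    score: Callable[[State], float]                     # scoring criteria
```

On this view, chess, StarCraft, and a training wargame differ only in how these four components are filled in, which is what makes cross-domain comparison of agents meaningful at all.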
If we accept the previous paragraph, why use a synthetic environment that is a simplification of the real (and thus a copy without an original)? The representational power of the computer is such that we can abstract the essential components of a real-world scenario and use them to create a closure over the same defined set of criteria seen in the real-world games examined above (a goal, rules, allowable actions, and scoring criteria). Why would we want to do this? Our knowledge of the world is not given to us purely naively; we do not "understand" the physics of how a bird flaps its wings solely from observing it fly. We create various conceptual and technological scaffolding to enable ourselves to better observe and understand the world; a social science model is simply a set of features of interest that we choose to use as a guide to the design of the intellectual and technical apparatus used to investigate the phenomena of interest. This suggests an interesting parallel between the modeler's view of the problem and that of the agents we are seeking to simulate, for the same problem occurs with them. They are embedded in both social and natural worlds, the former constructed in part by their own behavior. Neither is understood in an unmediated fashion; instead both are understood through various representations and abstractions. By making a model, we are making an epistemic wager about how to understand the problem of interest and the decisions it produces, and in specifying agents we are merely making informed guesses about how they perceive the internal problem of the model and the manner in which they map perceptions to actions (or, if you are an old-school cognitivist, knowledge to symbols).
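Read concretely, that last sentence says that specifying an agent amounts to two guesses: a percept function (what the agent sees of the model) and a policy (how what it sees becomes action). A minimal sketch under that assumption, with every name hypothetical:

```python
from typing import Any, Callable

State = Any    # the model's internal problem state
Percept = Any  # what the agent is permitted to see of it
Action = Any

class ModeledAgent:
    """Two epistemic wagers: how the agent perceives the model,
    and how it maps those perceptions to actions."""

    def __init__(self,
                 perceive: Callable[[State], Percept],
                 policy: Callable[[Percept], Action]):
        self.perceive = perceive  # guess about the agent's view of the world
        self.policy = policy      # guess about the perception-to-action mapping

    def act(self, state: State) -> Action:
        return self.policy(self.perceive(state))
```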
In both kinds of models, we begin with a set of empirical regularities, or presumed regularities, in a system of interest. For example, we might note that in real-world studies of human decision-making in wargames, the military decision-making process (MDMP) does not actually predict how tactical operators make military decisions. What kind of theory, then, would explain these empirical regularities? Additionally, we might note that it is commonly thought (whether in assumptions about Vladimir Putin as a kind of master game theorist, or in the literary presentation of Sherlock Holmes and Professor Moriarty as geniuses able to simulate their own physical combat forward to ascertain the outcome) that human beings can simulate strategic interactions forward so as to stay a certain number of moves ahead of their opponents. Is this actually the case? If not, what is a better way of looking at it?
II: Method of Computational Theory Development
In other words, the method can be decomposed into a set of steps:
- Observe a real or perceived set of regularities in an environment of interest, or a basic state of uncertainty about the regularities. The former can be empirical data or historical knowledge, the latter simply a verbal theory or informal understanding. The core thing is simply that we have a problem and would like to use the computer as a theory development tool; the computer program itself becomes the theory and formalizes our verbal understanding of the system and the regularities. Alternatively, we may simply not know what kind of regularities will result from experiments; theoretical predictions only provide a basic starting point, and we would like to see what results once we run the program.
- Create a theory as a linkage between task, environment, and artifact(s) that generates the regularity as behavior. If the environment and task are given but the artifact is not, we create theories of how the artifact(s) behave in the environment. If none are given, we create a theory of how to abstract the environment and task, and then of how the artifact(s) of interest produce the regularity. Or, in the case that we don't know what the regularity is, we make some basic guesses about what regularities are going to be produced to guide the exploratory study, generate data, and then move on to more complex experiments.
- Encode the linkage as a computer program, perform experiments on the program, and analyze the model. If the environment is given, we connect the artifact(s) to the task environment somehow and allow them to perform actions within the environment. If not, we simply run the simulation. Exploratory studies may generate data for harder and more precise experiments of interest. Depending on the task, the model analysis may proceed in a variety of ways. If, for example, the task is to investigate the stability properties of a complex game, we would be interested in how different parameter settings or algorithms for strategy selection create different outcomes (see the sketch after this list). Alternatively, if we know in advance that an observed behavioral regime exists, the model analysis would stem from how well it accounts for observed data (beginning with caricature levels of validity at the lowest standard of accounting for empirical regularities).
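As a toy illustration of the third step, here is what "perform experiments on the program" might look like for the stability question: sweep parameter settings and strategy-selection rules, replicate runs, and summarize outcomes. The microworld here (a single escalation level nudged by noisy strategies) is entirely a hypothetical stand-in, not a model from the entries above:

```python
import itertools
import random
import statistics

def run_model(noise, strategy, steps=200, seed=0):
    """A stand-in simulation: adversaries repeatedly adjust an 'escalation'
    level; we record where the run ends up (settled or blown up)."""
    rng = random.Random(seed)
    level = 0.0
    for _ in range(steps):
        move = strategy(level) + rng.gauss(0, noise)
        level = max(0.0, level + move)
    return level

# Hypothetical strategy-selection rules to compare.
strategies = {
    "tit_for_tat": lambda level: 0.5 * (1.0 - level),  # match, then relax
    "escalate":    lambda level: 0.1,                  # always push upward
}

# Step 3: sweep parameters x strategies, replicate, and summarize outcomes.
for noise, (name, strat) in itertools.product([0.01, 0.1], strategies.items()):
    finals = [run_model(noise, strat, seed=s) for s in range(30)]
    print(f"noise={noise:<5} strategy={name:<12} "
          f"mean final level={statistics.mean(finals):.2f}")
```

Under these stand-in rules the matching strategy settles near a fixed point while the escalating one grows without bound; that difference in regimes, tabulated across parameter settings, is exactly what a stability analysis of the model would report.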
The modeling process often iteratively cycles through steps 1–3, sometimes with steps occurring in parallel.