Computational Modeling of Strategy and Tactics: A Question, Problem, and Set of Literatures

Adam Elkus
Strategies of the Artificial
22 min read · Sep 13, 2015

As I’ve noted, finding linkages between the things I acquired substantive knowledge of and interest in as a BA and MA student and the things I am now acquiring substantive knowledge of and interest in as a PhD student in Computational Social Science is hard. Hence, since spring 2013, I have relentlessly pushed myself through a conceptual exploration/exploitation process, exploring different ideas only to revise and backtrack when I found myself caught in local minima. I am finally beginning to have some success linking the literature and tools together this year, by increasing the pace of information consumption and output in research schemas.

After taking a look at what I wrote last weekend, I was able to clean it up and impose some conceptual unity. This is the latest result, and I feel much more optimistic about things for the first time, as I’ve managed to unify the disparate things I’ve been working on under a single and broad research question. It is, of course, too vague to be useful for creating anything right now but it is nonetheless miles ahead of my previous conceptualizations of my work, which lacked anything close to even a vague generalized research question.

I am broadly interested in adversarial situations in which two or more agents attempt to gain an advantage over one another, and I aim to produce computational models of broadly strategic and tactical situations. Theories of adversarial behavior are useful because they say something both about how adversarial actions (tactics, strategies) are selected and about what kinds of behavior regimes occur in adversarial situations due to the actions and learning behaviors of multiple agents. However, it is nonetheless still difficult to make inferences about what outcomes will result from adversarial interactions.

The problem to be tackled is the linkage between action selection processes, an adversarial context, and a set of available strategies and/or tactics. While we know a lot about the specific strategies and tactics of particular domains, we know relatively less about the processes by which agents and organizations select and execute them. This has led some to wonder whether the notion of “strategy” as we currently understand it is an “illusion” that we retroactively impute in order to suggest that purposeful and goal-driven behavior explains the outcomes of particular strategic and tactical situations of interest. Additionally, once we depart from simplified assumptions about overall behavior regimes in these interactions, we enter a new and frightening world that sits somewhere between predictability and complete stochasticity.

I principally argue that computational modeling can contribute to shedding light on adversarial behaviors in tactical and strategic scenarios by focusing on the roles of environmental determinism assumptions, conflicting decision factors, limited time and resources, the unobservable opponent, and the stability properties of competition.

  1. Environmental determinism assumptions. In agent-based models, agent behavior is often coded as a mapping of a particular state change to a particular situation. If this mapping is not easy — if there are more than a limited number of salient situations the agent can find itself in, and these situations are not mutually exclusive — then building an agent decisionmaking process by enumerating the states the agent can possibly be in and the causes for the agent to change state may quickly become unwieldy. The programmer has to code state changes for both environmental and behavioral events, and if the environment is very dynamic and unpredictable there may be transitions from every behavioral state to every other behavioral state. As Bryson notes, the number of state transitions may grow quadratically, since “for every new action or capability added to an agent, as many transitions will need to be added to both it and to the other states as there are other capabilities.” (A minimal sketch after this list illustrates the growth.) Given that this is not a useful way to explain behavior and may be computationally expensive, computational approaches to modeling adversarial behavior should be capable of coping with a lack of environmental determinism if need be.
  2. Conflicting decision factors. Consider, as McFarland and Bosser do, the problem of multi-task control in a simple animal. Even simple creatures do not just pursue one goal at a time but pursue a course of action that is optimal in relation to a large number of internal and external factors. Some goals and behaviors may conflict for resources or be mutually exclusive altogether. A theory for explaining how entities produce adversarial behaviors must explain how entities arbitrate such conflicts in a hostile environment characterized by environmental resistance (Clausewitzian friction), probabilistic calculations and outcomes (Clausewitz’s analogy of war to poker), and a hostile external agent.
  3. Limited time and resources. Consider, as Bryson and Brom do, the problem of how either a single agent or a distributed system produces intelligent behavior. The system, however it is represented, must produce the right behavior at the right time. While goal and behavior conflicts have previously been addressed, another issue lies simply in limited time and resources. A system cannot wait forever to deliberate. It has to act now. Additionally, a system does not have unlimited computational resources for searching for actions or learning how to update its own behavior. It must somehow bias search in a way that allows it to find good enough solutions and improve its own behavior.
  4. An unobservable opponent. Conflicting decision factors and limited time and resources are not unique to adversarial domains. However, strategy and tactics present a special challenge. Consider, as Thagard does, the basic problem of adversarial action. An agent must select an action based on the environment and on what it believes the opponent will do. However, what the agent believes the opponent will do is predicated on a belief about what the opponent believes the agent will do. An agent cannot directly observe the thought processes of its opponent; it can only make inferences. It must nonetheless find an efficient and effective way to produce actions.
  5. Stability properties of action selection and learning. A standard argument used to support agent-based modeling is that it is a tool for modeling “out of equilibrium” scenarios in which multiple equilibria exist that cannot easily be selected among, and in which agents make choices based on expectations about outcomes that their own choices will, in turn, change. In other words, “agents’ actions, strategies, or expectations might react to—might endogenously change with—the patterns they create.” One does not need to reach that far to observe dynamics and contexts — such as cognitive hierarchies and complicated games — where stability and concordance with equilibrium predictions are uneven. Certain assumptions about the way agents select actions can produce differing behavioral regimes in the overall game, but the converse may also be true: certain game behavioral regimes may shed light on how agents adapt to the conditions of those regimes.
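
To make the transition-explosion worry in point 1 concrete, here is a minimal Python sketch. The behavior names are my own hypothetical illustration; the only point is that hand-enumerating transitions among k behaviors scales roughly as k(k-1).

```python
# Toy illustration of Bryson's point about quadratic transition growth:
# if every behavioral state may need a hand-coded transition to every
# other state, adding one behavior adds transitions everywhere.
from itertools import permutations

def fully_connected_transitions(behaviors):
    """Enumerate every ordered pair of distinct behaviors as a hand-coded transition."""
    return list(permutations(behaviors, 2))

behaviors = ["patrol", "pursue", "evade", "regroup", "withdraw"]  # hypothetical behavior set
for k in range(2, len(behaviors) + 1):
    subset = behaviors[:k]
    print(f"{k} behaviors -> {len(fully_connected_transitions(subset))} transitions")
# Prints 2 -> 2, 3 -> 6, 4 -> 12, 5 -> 20: growth on the order of k * (k - 1).
```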

All of these are a mouthful, so to simplify and state broadly what the actual problem to be solved is, I do the following. The research question can thus be described broadly: given a context of adversarial interaction and some available actions, how can we explain action selection and learning from the perspective of how an entity of interest copes with a set of behavioral and environmental limitations? The academic outputs of interest are:

  1. System decision processes: for the relevant decision entities, what processes produce “good enough” behavior given various limitations — and what, of course, does “good enough” mean under such limitations? Perhaps “good enough” in certain games may simply be accepting the reality of a Red Queen effect.
  2. Game behavioral regimes: given a stylized scenario, do differing behavioral assumptions about decisionmaking entities produce interesting differences in what overall behavioral patterns result in the game itself? As noted before, the study could easily be designed from the opposite causal perspective as well.

In the next few sections I describe the problem of interest in detail as well as relevant literature that could be used as an input to generating experiments and theory development.

Problem Description

The historian Freedman recently observed that a host of problems — from evolution to warfare — involve the art of using tactics and strategies to create power. In many domains of social life that Thagard dubs “adversarial problem-solving,” success means anticipating, understanding, and counteracting the actions of an adversary. Military strategy, business, and game playing all require an agent to construct a model of the opponent that includes the opponent’s model of the agent. Yet Thagard argues that many of the disciplines that study these domains systematically underestimate or underspecify the cognitive mechanisms agents use in such problem-solving processes. Why?

The challenge of doing so is difficult and necessitates the integration of differing perspectives and techniques. As Latek notes, the very nature of such interactions features temporal uncertainty (actions are asynchronous, durative, and characterized by planning and re-planning) and strategic uncertainty (extrapolating from history is dangerous, and agents have an enormous number of possible options). Competitive games may also feature a “cognitive hierarchy” in which equilibrium theory predicts behavior well in some games and poorly in others. Agents may make decisions based on what they believe other agents will do, but may overestimate or underestimate how many iterated steps of thinking their opponents use in their decision rules. As Bryson observes, when it comes to social and biological agent-based models, modelers usually assume that a unique state change can be mapped to a member of a finite set of discrete, mutually exclusive situations. This tends to downplay the problem of action selection in the real world, which often does not admit such assumptions.

Furthermore, zero-sum or competitive repeated/dynamic games with a large number of moves and possible payoffs, and/or with more realistic assumptions about agent behavior than typical models make, tend to exhibit complex and chaotic behaviors. These dynamics can arise from situations in which more players play than in typical models and/or in which each player’s strategy space is larger. Further complicating the mix is the case when players must learn their strategies over repeated interactions. All of this may cause an explosion in the number of possible equilibria. A core question lies in whether players can, in fact, converge to fixed points or whether their strategies will follow limit cycles or chaotic attractors.
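
The cycling claim is easy to see in a toy setting. Here is a minimal sketch, my own illustration rather than anything from the works cited above, in which two adversaries adjust mixed strategies in rock-paper-scissors with a simple multiplicative-weights rule; with a fixed step size the strategies tend to rotate around the mixed equilibrium rather than settle on it in last iterates.

```python
# Toy non-convergent learning dynamics in a zero-sum game (rock-paper-scissors).
import math

MOVES = ["R", "P", "S"]
PAYOFF = {  # payoff to the player choosing the first move in the pair
    ("R", "S"): 1, ("S", "P"): 1, ("P", "R"): 1,
    ("S", "R"): -1, ("P", "S"): -1, ("R", "P"): -1,
    ("R", "R"): 0, ("P", "P"): 0, ("S", "S"): 0,
}

def normalize(weights):
    total = sum(weights.values())
    return {m: w / total for m, w in weights.items()}

def mw_update(weights, opponent_mix, eta=0.5):
    """One multiplicative-weights step against the opponent's current mixed strategy."""
    new = {}
    for m in MOVES:
        expected = sum(PAYOFF[(m, o)] * p for o, p in opponent_mix.items())
        new[m] = weights[m] * math.exp(eta * expected)
    return new

w1 = {m: 1.0 for m in MOVES}
w2 = {"R": 2.0, "P": 1.0, "S": 1.0}  # start player 2 slightly off the uniform equilibrium
for t in range(61):
    p1, p2 = normalize(w1), normalize(w2)
    if t % 20 == 0:
        print(t, {m: round(p, 2) for m, p in p1.items()})  # the mix drifts and rotates
    w1, w2 = mw_update(w1, p2), mw_update(w2, p1)
```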

Explaining adversarial behavior in military, security, intelligence, deception, wargaming, and other similar contexts arguably poses, in its starkest form, a core problem observed elsewhere in the analysis of contentious social behavior. As Gartzke observes, “war is typically the consequence of variables that are unobservable ex ante, both to us as researchers and to the participants.” We cannot predict in individual cases whether states will go to war merely from observable factors because of the role of some element of randomness in explaining war. Theories of why states go to war give necessary conditions for war, but the role of uncertainty about key, unobservable aspects of war causation, such as bluffing, suggests that uncertainty is the key element of explaining conflict. This is, however, not a new argument. Clausewitz analogizes conflict to a game with simple rules but complex probabilistic calculations governing each player’s decisions, a formalism that game theorists would later borrow.

A large class of situations in human life are “adversarial” in nature. While not all adversarial situations involve violence or coercion, many are characterized by a struggle in which two or more entities seek to obtain an advantage or some sense of control over each other. Gartzke correctly notes that adversarial situations are characterized by unobservable and probabilistic elements. For example, in making a decision in the game of poker, players search for behavioral “tells” that clue them in to what move an opponent will make. However, these are not the only factors that figure into adversarial behavior. Payne observes that cognitive and affective factors may exert an enormous impact on strategic behavior. Decades ago, Allison debated whether organizational behavior could be explained in light of notions such as “standard operating procedures” or bureaucratic politics.

Smith observes the following: “[t]he essential feature of strategy, as Colin Gray describes, is that it functions as the ‘bridge’ between tactics — actions on the ground — and the broader political effects they are intended to produce. For this coherently parsimonious reason strategy, in both its operational and academic manifestations, concentrates on practices as physically revealed phenomena. Strategy is, thereby, revealed in clearly observable facts and things, most notably in its association with actions in war. In this regard, strategy, in its application, and in its study, is about palpable acts and outcomes: armed clashes, organized violence, plans, battles, campaigns, victories and defeats.” However, as Smith observes, all of these observable outcomes presuppose some latent, unobservable set of processes that produce them. Smith wonders about the existence of an entire internal universe within the mind of the strategist, and one can obviously extend this to organizations that collectively produce strategy.

The fact that this behavior is difficult to observe has several consequences.

  1. Lack of plausible explanations for how strategy and tactics are produced. Betts observes that “because strategy is necessary, however, does not mean that it is possible.” The complexity of strategy involves devising a scheme to achieve an objective through either action or the threat of it, implementing the scheme, keeping the plan working in the face of opponent reactions, and achieving something close to the objective. Strategies are also chains of relationships among means and ends that span multiple levels of analysis. Betts outlines several plausible critiques of strategic behavior: virtually any choice can be justified before it is tried, and hindsight cannot be used to select model strategies because history shows little correspondence between plans and outcomes. Additionally, integrating ends and means runs up against psychological barriers, organizational processes and pathologies, and political complications. Betts observes that “strategy is not always an illusion, but it often is.” At most, we can say, as Freedman and as Watts and Krepinevich do, that whether or not strategy is impossible from some optimal point of view is irrelevant. Strategies rely on attributions about other agents that are crude and likely wrong, but if forced to choose, producing action based on an imperfect or flawed strategic or tactical process is better than producing no action at all.
  2. Lack of plausible causal explanations for tactical and strategic elements of research. Watts criticizes the discipline of security studies for gross “problems of theory and evidence” when it comes to evaluating the record of coercion. Epstein faults security studies for being unscientific in its evaluation of conventional strategic balances. Both criticisms focus on a similar issue: the denial of conflict as a dynamic process in which outcomes cannot be characterized by linear formulas or the style of bean-counting analysis that Clausewitz dismissively refers to as “war by algebra.” However, these are but symptoms of a deeper problem. As Paparone has observed, the world is full of intractable situations and fraught with ambiguity. Knowing how the story ended post hoc, institutions can attribute causal relationships that reinforce beliefs about how desired ends can be achieved through purposeful action. But such explanations have all of the scientific validity of a Freudian explanation of psychology through patient introspection. Historians still debate the connection between particular strategies and outcomes precisely because it is difficult to infer the connection between strategic behavior and strategic outcomes. Otherwise, analysts retroactively impute a solidity of reasoning and coherence of vision where little if any may actually exist, and expect that agents will produce such solidity and coherence in the future when that may very well not be the case.

Producing plausible explanations of complex system behaviors that involve semi-observable to unobservable factors has been tackled in several sciences by the construction of models. We do not yet know the full explanation for how human minds work and how cognition works in general, but the ACT-R cognitive architecture combines multiple components and has matched human data. The SOAR cognitive architecture may not necessarily match observed data but can perform complex actions in differing environments of interest. Building robotic models of animal behavioral systems, in the tradition of artificial ethology, allows roboticists to test plausible explanations for how animals may survive and thrive in the environment. Artificial life models attempt this, more controversially, through simulation. Finally, agent-based models often provide an existence proof of how localized agent actions produce complex outcomes. Several disciplines, which are surveyed in the literature review sections, provide various theories, models, and answers to these questions that may be utilized in building adversarial models.

Relevant Literature: Games and Agents

Many disciplines — from the life and social sciences to the computational and engineering sciences — deal with some form of strategic interaction. Evolutionary game theory models evolutionary selection as a game-theoretic process. Multi-agent systems research examines how software agents and robots cooperate and collude. The political and social sciences use strategic theorizing to study deterrence, the evolution of norms, and other subjects. In particular, classical game theory focuses on the question: what are the equilibria of a given model of the game? Behavioral game theory focuses on the question: given a game model, how will real agents play when presented with it? There are other questions, however, roughly called the “problems of play” at a more general level, that may encompass different assumptions and interests from varying fields. The broad class of problems of strategic interaction that computational modeling studies, as the book Agents, Games, and Evolution notes, can really be boiled down to two types of situations and research questions.

  1. Given a context of strategic interaction and a collection of strategies, what will happen? How can we evaluate the strategies an agent might use? By what principles can we rationally settle upon a specific strategy? How can we discover new strategies, ones we are not aware of? How, in general, can agents learn to play more effectively? Research topics here include modeling agent strategy selection and strategic learning. In general, how can agents find good strategies of play? The method is to model the strategic situation, build a consideration set of strategies, and use tournaments to find robust strategies (see the sketch after this list).
  2. Given a society or system of interacting players — a sociocultural environment such as a market or organization governed by certain rules and constraints — what will happen? How can we manage its performance? Will it be stable or not? Will it be fair? Efficient? How can we predict and control what happens in a context of strategic interaction? Research topics include the efficiency of overall outcomes, how cooperation or collusion emerges (or doesn’t), and optimization of system behavior when multiple objectives exist. The general goal of this research is to learn how to predict and control what happens in strategic interactions.
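
As a concrete instance of the tournament method mentioned in (1), here is a minimal sketch of a round-robin among iterated prisoner’s dilemma strategies. The payoff table and strategy set are the standard textbook ones; the code itself is my own illustration rather than anything taken from the book cited above.

```python
# Round-robin tournament over a small consideration set of IPD strategies.
import itertools

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(history_self, history_other):
    return "D"

def always_cooperate(history_self, history_other):
    return "C"

def tit_for_tat(history_self, history_other):
    return history_other[-1] if history_other else "C"

def play_match(s1, s2, rounds=100):
    """Play an iterated match and return cumulative scores for both strategies."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFFS[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

strategies = {"ALLD": always_defect, "ALLC": always_cooperate, "TFT": tit_for_tat}
totals = {name: 0 for name in strategies}
for (n1, s1), (n2, s2) in itertools.combinations(strategies.items(), 2):
    sc1, sc2 = play_match(s1, s2)
    totals[n1] += sc1
    totals[n2] += sc2

# Rank by total score across the field; note the ranking depends heavily
# on which strategies are included in the consideration set.
print(sorted(totals.items(), key=lambda kv: -kv[1]))
```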

Areas of further research relevant to the study of adversarial interaction are agent-based models and computational cognitive agent-based models, as well as the overall field of adversarial problem-solving/adversarial reasoning. Sun notes that computational modeling relies on creating a set of explicit assumptions which are then tested and used to produce data that can be analyzed inductively to arrive at generalizations. Agent-based modeling focuses on how interaction among autonomous agents generates complex patterns. As noted in several agent-based modeling textbooks, we can model at multiple scales of interaction and social entities. But why not go even further?

As Sun notes, agent-based models focus on how macro-scale behaviors emerge from micro-scale interaction. Cognitive modeling with agents examines linkages between inter-agent/collective processes, the psychological processes of individual agents, componential processes within agents, and the physiological substrates beneath componential processes. In short, we consider the agent’s motivation, decision generation process, and environmental embodiedness and situatedness as an interactive triad that influences both agent behavior and system behavior over time.

Every agent has underlying needs, desires, and motivations that it must satisfy in the environment. Both reactive and deliberative behavior bridge the agent’s needs/motivations and the environment, physical or social, in which the agent finds itself. The relevant entity being simulated is dealing with the environment and its regularities and structure, as well as exploiting such structures on an individual or collective basis. However, the agent may also be shaped by the physical and social environment. Its needs and motivations may be indirectly shaped by the environment, its thinking may be structured and constrained by the environment, the environment’s structures and regularities may be internalized by the agent to facilitate the attainment of needs, and the environment itself may be utilized as part of the thinking/cognition of an agent. Needs and need attainment processes (cognition) lead to actions which change the physical and social environment in various ways. The changed structures may, in turn, affect thinking/decision behavior and motivation.

In recent years, adversarial reasoning has become an interdisciplinary area that combines insights from multiple disciplines but couches them in a vaguely game-theoretic and agent-based formalism. Kott and Ownby define the field of adversarial reasoning as computational approaches to determining the states, intents, and actions of one’s adversary in an environment where that adversary strives to actively counter one’s own actions. Whether in the utilization of game theory for scheduling law enforcement and counter-terrorism patrols or in modeling cybersecurity challenges, adversarial reasoning applies decision mechanisms seen elsewhere in social science to a very diverse yet nonetheless formally narrow set of problems. While these problems may be modeled game-theoretically, the field combines insights from game theory, cognitive modeling, artificial intelligence, robotics, and control theory:

The subtopics within this subject include belief and intent recognition, opponent’s strategy prediction, plan recognition, deception discovery, deception planning, and strategy generation. From the engineering perspective, the applications of adversarial reasoning cover a broad range of practical problems: military planning and command, military and foreign intelligence, anti-terrorism and domestic security, law enforcement, information security, recreational strategy games, simulation and training systems, applied robotics, etc. To make the term adversarial reasoning more concrete, consider the domain where it has been applied particularly extensively, the domain of military operations. In military command and control, the challenge of automating the reasoning about the intents, plans and actions of the adversary would involve the development of computational means to reason about the future enemy actions in a way that combines: the enemy’s intelligent plans to achieve his objectives by effective use of his strengths and opportunities; the enemy’s perception of friendly strengths, weaknesses and intents; the enemy’s tactics, doctrine, training, moral, cultural and other biases and preferences; the impact of terrain, environment (including noncombatant population), weather, time and space available; the influence of personnel attrition, ammunition and other consumable supplies, logistics, communications, sensors and other elements of a military operation; and the complex interplay and mutual dependency of friendly and enemy actions, reactions and counteractions that unfold during the execution of the operation. Adversarial reasoning is the process of making inferences over the totality of the above factors.

Relevant Literature: Intelligent Systems

Multiple disciplines exist that have little in common beyond their membership in the sciences of the artificial. Simon argued in his book The Sciences of the Artificial that the sciences of the artificial are the means by which a host of disciplines study how artifacts’ inner natures are adapted functionally to their outer environments given a goal. In this book, as well as in his Nobel Prize lecture and his Turing Award lecture with Newell, Simon argued for an interdisciplinary focus on studying the requirements for intelligent action in both biological and artificial systems.

Vernon and Sandini argue that a focus on building models of intelligent behavior entails research based on the presumption that a cognitive system exhibits adaptive, anticipatory, and goal-driven behavior. Cognition implies an ability to understand how things might possibly be, not just now but at some future time, and to consider this when determining how to act. Cognitive systems exhibit effective behavior through perception, action, deliberation, and communication, but most importantly through individual or social interaction with the environment. Such a system is resilient in the face of the unexpected and has some degree of plasticity. Hence, cognition may be viewed as the process by which the system achieves robust adaptive, anticipatory, and autonomous behavior.

In general, Bryson and Brom note that action selection for intelligent systems is a key topic across the sciences, where (as noted in another Bryson book) action selection is the art of doing the right thing at the right time, a task that requires assessing available alternatives, executing those most appropriate, and resolving conflicts among competing goals and possibilities. Action selection, at its most basic, is the problem of deciding what to do next. Let us assume that an intelligent agent is capable of two things: selection processes for actions, and adaptation and learning to improve its behavior. Specific behaviors and actions may compete for resource allocations. Some kind of constraint or bias is needed to guide search. The same holds true for how agents learn from the environment. Combinatorial constraints on considering, executing, and learning from actions are all features of intelligent behavior.
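
Here is a minimal sketch of what such a biased, resource-limited selection process might look like. The behavior names, scoring function, and budget parameter are hypothetical illustrations, not a claim about how Bryson and Brom formalize the problem.

```python
# Candidate behaviors compete; the agent only evaluates as many as its
# deliberation budget allows before committing to the best option it has seen.
import random

def evaluate(behavior, situation):
    """Stand-in for a costly evaluation of how appropriate a behavior is right now."""
    return situation.get(behavior, 0) + random.uniform(-0.1, 0.1)

def select_action(behaviors, situation, budget=3):
    """Evaluate at most `budget` candidates, then commit to the best one considered."""
    considered = behaviors[:budget]  # the bias: only a subset gets deliberated on at all
    scored = [(evaluate(b, situation), b) for b in considered]
    return max(scored)[1]

situation = {"evade": 0.9, "pursue": 0.4, "regroup": 0.2, "patrol": 0.1}
behaviors = ["patrol", "pursue", "evade", "regroup"]
# Likely "pursue": a tight budget can miss the globally best option ("evade").
print(select_action(behaviors, situation, budget=2))
```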

An important thing to note is that if we are explicitly modeling a community that is engaged in an adversarial interaction with another community, the two questions mentioned at the start of this section are revealed to be the same question at multiple levels of abstraction. Bryson and Brom note that in nature, an action could feasibly range from “contracting a muscle to provoking a war.” Action selection could be highly distributed in nature (as with social insect colonies) or concentrated in a couple of special-purpose modules. Both individual agents and organizations have action selection mechanisms. The primary questions for those not looking to engineer more effective agents (biologists and ethologists), as Bryson and Brom observe, are the following.

  1. How do various types of animals constrain their search?
  2. Do all animals use the same approaches?
  3. Why do they use the ones they do?

A specific way of looking at this lies in designing experiments around interactions between the task, artifact, and environment. According to Cohen’s book Empirical Methods for Artificial Intelligence, a program’s behavior is a product of the interaction between the program’s structure, the task it is performing, and the environment it is performing in (an idea seen in robotics and in other theories of agent-environment interfaces). In short, Cohen argues that the basic task-artifact-environment triad yields several basic research questions:

  1. How will a change in the program’s structure affect its behavior given a task and the environment?
  2. How will a change in the program’s task affect its behavior in a particular environment?
  3. How will a change in the program’s environment affect its behavior on a particular task?

The agent’s “structure” in Cohen’s book, as seen in the language used, is a mechanism for generating behavior. Specifically, a shared topic in many of the disciplines relevant to Cohen’s book is how rational behavior is generated by an agent: a process of mapping both external environmental and internal agent factors to outputs in the task environment. The notion of rationality has moved from its former place in describing rational thought to a broader science concerning how an agent produces rational behavior in a complex world. How rational are agents? How ought we to study or characterize rationality?

Russell and Norvig propose that a rational agent does the right thing, in light of a performance measure and an evaluation period. An agent need not be omniscient. Rationality is concerned with expected success given what has been perceived. For each possible percept sequence, an ideal rational agent should maximize its performance measure on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has. Anderson proposes a principle of rationality — a cognitive system optimizes the adaptation of the behavior of the organism, and rational analysis may be used to design a computational model of how the agent accomplishes the task. A similar assumption is Newell’s principle of maximum rationality: if an agent has knowledge that one of its actions will lead to one of its goals, the agent will select that action.
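
A toy rendering of this idea follows, with invented percepts, actions, probabilities, and payoffs: the agent maps a percept sequence to a belief state and then picks the action with the highest expected performance given its built-in knowledge. It is a sketch of the general recipe, not the authors' own formalism.

```python
# Map a percept sequence to a belief state, then maximize expected performance.
def outcome_value(action, state):
    """Built-in knowledge: a hypothetical payoff table over (action, world state)."""
    table = {("advance", "weak"): 2, ("advance", "strong"): -3,
             ("hold", "weak"): 0, ("hold", "strong"): 1}
    return table[(action, state)]

def update_belief(percept_sequence):
    """Crude evidence rule: more enemy sightings shift belief toward a strong opponent."""
    sightings = percept_sequence.count("contact")
    p_strong = min(0.9, 0.2 + 0.2 * sightings)
    return {"strong": p_strong, "weak": 1 - p_strong}

def expected_performance(action, belief):
    """Expected value of an action under the agent's current belief state."""
    return sum(prob * outcome_value(action, state) for state, prob in belief.items())

def rational_agent(percept_sequence, actions=("advance", "hold")):
    belief = update_belief(percept_sequence)
    return max(actions, key=lambda a: expected_performance(a, belief))

print(rational_agent(["quiet", "contact", "contact"]))  # -> "hold"
```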

The Russell and Norvig characterization, however, immediately suggests limitations on the effectiveness of rational behavior that they both outline. A partially observable environment may impose limits on agent sensors. A non-deterministic environment is one in which an agent’s actions have probabilistic outcomes. More complications follow. Does an agent need to think ahead, or is its perception of the environment episodic? Does the environment change when the agent is deliberating? Are there a limited number of percepts and actions or is it continuous? Does it include other agents? Finally, what about the agent’s ability to compute actions and the costs of such computations? Another, more abstract, socially laden problem is simply that knowing what it means to compute a rational action may necessitate knowledge of Geertzian “thick” social context.

While the prior set of problems focuses on the issue of how an agent may make rational decisions in a tough environment, the elephant in the room (though not surprising to anyone who has programmed computers or made incorrect decisions after hitting a cognitive upper limit) is the issue of how costly computations for rational actions may be. If we assume that computing actions is costly in both computation and time and that agents find various simplifying mechanisms, this takes us away from the “substantive rationality” seen in typical formal models in the social sciences.

Simon assumes that agents behave in a manner that is as nearly optimal with respect to their goals as their resources will allow. Simon called for a science of “procedural rationality” focusing specifically on how agents perform complex tasks in the face of limitations on processing power, with particular attention to efficient knowledge representation and search in everything from organizational decision procedures to chess. Zilberstein’s work on bounded optimality proposes a set of algorithms to realize bounded rationality. Finally, theories of naturalistic decision making, the adaptive toolbox, evolutionary psychology’s concept of adapted minds (and its spinoffs in other evolutionary explanations for behavior), and the heuristics and biases literature all focus on variations of this basic problem.

Taking a step back, Gershman, Horvitz, and Tenenbaum argue that the sciences of artificial intelligence, cognitive science, and neuroscience are converging around a larger science of “computational rationality”: identifying decisions with the highest expected utility while taking into consideration the costs of computation in complex real-world problems in which most relevant calculations can only be approximated. Maximizing some measure of expected utility is a general-purpose ideal for decision-making under uncertainty. It is also nontrivial for most real-world problems, necessitating approximation. The choice of how best to approximate may itself be a decision subject to expected utility, as thinking is costly in time and other resources. Perhaps intelligence is knowing how best to allocate such resources. Gershman, Horvitz, and Tenenbaum note that the idea of computational rationality has played an important role in linking models of biological intelligence at the cognitive and neural levels by exploiting the idea of guiding actions by expected utility.
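
A toy metareasoning sketch along these lines, under an assumed diminishing-returns model of deliberation that is my own illustration: the agent keeps thinking only while the expected gain from another round of computation exceeds the cost of that round.

```python
# Stop deliberating when the marginal expected gain drops below the marginal cost.
def expected_gain(rounds_done):
    """Assumed diminishing returns from each additional round of deliberation."""
    return 1.0 / (1 + rounds_done) ** 2

def deliberate(cost_per_round=0.1, max_rounds=20):
    quality, rounds = 0.0, 0
    while rounds < max_rounds and expected_gain(rounds) > cost_per_round:
        quality += expected_gain(rounds)   # thinking improves the decision...
        rounds += 1                        # ...but each round consumes resources
    return rounds, round(quality, 3)

print(deliberate())        # stops early when thinking is expensive
print(deliberate(0.02))    # cheaper computation -> more rounds of deliberation
```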

This linkage across levels tracks with a broader push toward more sophisticated ways of explaining how a complex organism generates behavior. As Pfeifer and Scheier note, rational thought concerns the mechanisms within the agent, and rational behavior the agent’s interaction with the environment. Rational thought is not necessarily a prerequisite for rational behavior. While this may seem like a piddling distinction, it is in fact an enormous one. McFarland and Bosser observe that all complex organisms are mixtures of automaton-like and more deliberate action generation mechanisms.

McFarland and others argue that an agent capable of handling multi-task control problems will defy many shibboleths of traditional ideas in artificial intelligence and cognitive systems. Like others they are interested in rationality, but they view the problem as one of how a complex agent produces behavior. Behavioral outcomes are observable, but we have no idea whether the agent’s thoughts are rational. As Pfeifer and Scheier note, we can identify costs — the cost of being in a state, the cost of a particular behavior, and the cost of changing between behaviors. The agent has no information about the real cost involved. Nor does the decision mechanism have to be explicitly represented. Instead, motivational autonomy may help the agent decide between automaton-like and more costly cognitive processes.
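
A small sketch of that cost framing under stated assumptions: the agent compares its own estimates of the cost of staying with its current behavior, the cost of an alternative, and the cost of switching, without any access to the true costs. The numbers are purely illustrative.

```python
# Switch behaviors only when the estimated saving exceeds the estimated switching cost.
def should_switch(est_cost_stay, est_cost_alternative, est_switch_cost):
    """Decision rule over the agent's estimates, not the (unknown) real costs."""
    return (est_cost_stay - est_cost_alternative) > est_switch_cost

print(should_switch(est_cost_stay=5.0, est_cost_alternative=2.0, est_switch_cost=1.5))  # True
print(should_switch(est_cost_stay=5.0, est_cost_alternative=4.5, est_switch_cost=1.5))  # False
```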

Conclusion and TBD

Obviously the research question is still very vague, and when one looks at the motivators for the research question, the problem description, and the two applicable literature sections, we still have an enormous and intractable amount of material to work through before actually producing basic research. Part of the benefit of this exercise, however, is simply getting it all out here so I can narrow the scope of what I was writing earlier while still avoiding premature optimization given the enormous number of possibilities.

Thus in terms of what kind of specific questions to ask and experiments to design I still have a long way to go. I need to specify a much more narrow class of situations and problems that these overall questions pertain to. When I do that, more specific research questions will become clearer as well as the literature that is most relevant to the problem being analyzed. However, I do feel like this post is nonetheless a big achievement. I have, in my own various tinkering, projects, and experiments, generated research questions but I have never really been satisfied with their ability to characterize my interests.

Here, I finally have something basic and broad out that I’m comfortable with and can sharpen for the rest of my time as a researcher. For my doctorate I am obviously going to be much more specific, but drilling down from a general research question is much easier than not having one at all.
