The Problem With National Security Concepts

Adam Elkus
Rethinking Security
17 min read · Dec 30, 2015

This essay briefly examines some fundamental problems in defense and security analysis and conceptualization (henceforth referred to jointly as “national security”). In the past, national security thinkers have noted that the security community has systematically failed to produce clear, useful, and analytically defensible concepts. [1] However, the production of fads and buzzwords has yet to be fully explained. To understand and counter the production of problematic theories and concepts, it is important to explain the challenge of security analysis and the ways in which buzzwords and fads represent maladaptive responses to the task.

First, the challenge of security analysis is explained in detail. Security analysis is difficult for many reasons, the most prominent of which is the assessment of opponent goals and strategies from empirically observable behavior. Because only the enemy's behavior, not the strategy and higher goals behind it, is visible, the temptation to make fallacious inferences from this minuscule observation is great. Sound concepts and ideas can help mitigate this problem.

Second, the sources of conceptual error are broadly outlined. Two types of errors in particular are highlighted: instrumental justifications for flawed ideas and the failure to re-use sound theories and historical knowledge. Security thinkers often justify an idea because it is instrumentally useful for some other thing they seek to achieve even if they cannot otherwise defend it.

While this piece is related to arguments over the “gray zone” concept, it should be understood as a companion piece to my War on the Rocks essay on the subject.

The Challenge of Security Analysis and the Utility of Security Concepts

Security analysis often focuses on the decision-making of political communities, broadly construed. Intelligence analysts, for example, evaluate the capabilities and intentions of adversaries. In particular, the idea of strategy is a powerful explanatory device on which security analysis structurally relies. By making the strategies of conflict actors the unit of analysis, the analyst achieves a unique combination of parsimony and explanatory power in explaining complex behaviors. However, as will be explained, this parsimony comes at a significant cost.

The image below is a (purposefully absurd) caricature of how security thinkers often interpret the behavior of conflict actors. When they see a behavior of interest, they assume that the actor produced it as the result of a deliberate strategy.

How we think about strategy, pt. II

At its simplest, an explanation about strategy resembles the following: Actor A's behavior B is explained as the product of strategy S, which accomplishes goal G. If A fails to achieve G, it is because A could not ensure that B → G. In observing A's behavior without the benefit of historical hindsight, we can often only make crude guesses about the underlying strategy motivating it, the goal the strategy is trying to achieve, and any additional complicating factors that might explain how A conceptualizes G, S, B, and the linkages between them.

In short, the idea of strategy taking the form G → S → B → G provides simplicity but founders on the problem that when we observe a foreign opponent we are seeking to analyze, we are really only fully observing B. Everything else we approximate, infer, or otherwise guess at. Some of the most historically problematic assumptions are briefly enumerated:

  1. The observed behavior reflects a certain enemy goal.
  2. The observed behavior can accomplish the enemy goal.
  3. The enemy believes that the observed behavior can accomplish their goal.
How we think about strategy part II
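The underdetermination problem behind these three assumptions can be sketched in a few lines of code. In this toy example (every goal, strategy, and behavior label is invented for illustration and comes from nowhere in the essay), several mutually incompatible goal/strategy pairs generate the same observable behavior, so the observation alone cannot tell us which explanation is right.

```python
# Hypothetical (goal, strategy) -> behavior mappings, invented purely
# for illustration of the inference problem described in the text.
CANDIDATES = {
    ("deter a rival", "signal resolve"): "mobilize troops at the border",
    ("seize territory", "prepare an invasion"): "mobilize troops at the border",
    ("appease hardliners", "posture for a domestic audience"): "mobilize troops at the border",
}

def consistent_explanations(observed_behavior):
    """Return every (goal, strategy) pair that could have produced B."""
    return [gs for gs, b in CANDIDATES.items() if b == observed_behavior]

# Three incompatible explanations fit the single observation equally well.
print(len(consistent_explanations("mobilize troops at the border")))  # 3
```

Adding more candidate explanations only widens the gap between what we observe (B) and what we infer (G and S).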

To understand why all three assumptions can be misleading, consider that we once believed that angry Norse gods caused lightning strikes. [2] Human beings use causality to make sense of the world, and without such a capability we would likely be incapable of dealing with everyday life. [3] In strategy in particular, we often intuitively develop folk psychology models of our opponent's thinking and decision processes. [4] However, these internal models and attributions are often as misleading as the idea of angry Norse gods hurling lightning bolts. Much of what we now know about our Cold War adversaries suggests that what we believed about them at the time was blatantly wrong. [5]

A related problem is what statisticians call "overfitting" and "selection on the dependent variable." Imagine that you are trying to understand why civil war and insurgency happen in the fictional People's Republic of Zamboogistan. You have a small, though usable, data sample of all of the cases of civil war and insurgency in Zamboogistan. You create an explanation that includes every possibly relevant variable and considers only the cases in which civil war and insurgency occurred. What you have done wrong can be explained as the confluence of two basic errors.

  1. Your explanation only describes the data sample you’re considering, and may not have much power to explain other data or predict new Zamboogi cases of civil war and insurgency.
  2. Your explanation only explains why civil war and insurgency happened, but it cannot explain cases in which civil war and insurgency did not happen.
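The second error can be made concrete with a toy sketch (all data invented): if you study only the provinces where civil war broke out, a variable present in all of them looks like a sure-fire cause, even when it is equally common in the peaceful provinces you ignored.

```python
# Invented toy data: four Zamboogi provinces, two of which had civil wars.
cases = [
    {"civil_war": True,  "mountainous": True},
    {"civil_war": True,  "mountainous": True},
    {"civil_war": False, "mountainous": True},
    {"civil_war": False, "mountainous": True},
]

def share_mountainous(sample):
    """Fraction of cases in the sample with mountainous terrain."""
    return sum(c["mountainous"] for c in sample) / len(sample)

war_only = [c for c in cases if c["civil_war"]]        # the biased sample
peace_only = [c for c in cases if not c["civil_war"]]  # the cases ignored

print(share_mountainous(war_only))    # 1.0 -- looks like a perfect predictor
print(share_mountainous(peace_only))  # 1.0 -- but peace cases match it too
```

Because terrain is identical across both outcomes, it has zero discriminating power, something invisible to an analyst who never looks at the non-war cases.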

Security theory and concepts can help us avoid such errors by disciplining our thinking. [6] Bernard Brodie argued that disciplined thinking could help us avoid the resort to clichés and fallacies that arise from parochial biases of practitioners. [7] Richard K. Betts has also asserted that useful theory and concepts can help us understand the ways in which actors utilize force and coercion. [8] Finally, Colin S. Gray has persuasively stated that useful concepts and theories can help us understand “continuity in change and change in continuity.” [9] Gray’s argument itself deserves some sustained elaboration, and to explicate precisely what continuity in change and change in continuity means, the next section uses techniques drawn from software engineering to detail how security concepts can be understood as conceptual models.

The fields of knowledge representation and system modeling in computer science and informatics often utilize conceptual models to represent a domain of interest. For example, the Unified Modeling Language (UML) utilizes a model of a domain called a class hierarchy. The modeler divides the domain into objects with attributes and operations. A PhD student’s attributes, for instance, can be represented in terms of grade point average, completed academic requirements, and remaining academic requirements. Their operations could be modeled as the ability to take courses, publish papers, and consume junk food and caffeine.

The genius of the class hierarchy is that one can represent a domain in terms of composition and inheritance. A complex object can be represented as the composition of simpler objects. And, most critically, one can specify superclasses that, like parents, pass their characteristics on to subclasses. Both notions have some utility for military theory. Instead of reductively looking at the components of a security problem, we can look at how those components give rise to a composite form. And instead of perpetually reinventing the wheel conceptually when we are faced with a security problem, we can show how the problem extends something we are already familiar with.
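The PhD-student example above can be rendered as a minimal sketch of this UML-style modeling (the class names and attributes are illustrative inventions, not a standard model): attributes, operations, and a subclass that inherits from a superclass.

```python
# A rough sketch of attributes, operations, and inheritance, using
# invented classes that mirror the text's PhD-student example.

class Student:                          # superclass: shared characteristics
    def __init__(self, gpa):
        self.gpa = gpa                  # attribute

    def take_course(self, name):        # operation
        return f"enrolled in {name}"

class PhDStudent(Student):              # subclass inherits from Student
    def __init__(self, gpa, remaining_requirements):
        super().__init__(gpa)
        self.remaining_requirements = remaining_requirements

    def publish_paper(self, title):     # operation specific to the subclass
        return f"published {title}"

s = PhDStudent(gpa=3.8, remaining_requirements=["dissertation defense"])
print(s.take_course("Game Theory"))     # inherited operation still works
print(s.publish_paper("On Gray Zones"))
```

The inherited `take_course` operation never had to be restated in the subclass; that economy is the point the next paragraphs develop for military theory.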

To understand why inheritance is such a powerful concept, consider this crude diagram of Carl von Clausewitz's general theory of war. The solid arrows point from subclasses to a parent superclass. Each subclass inherits the parent's characteristics, such as the Clausewitzian notion of war as the violent expression of politics and its core attributes: the play of chance on the battlefield, the attempt to subordinate violence to policy, and the passion and enmity that often accompany political violence.

An inheritance hierarchy of war theory

We understand the fundamental characteristics of war to be a combination of the play of chance on the battlefield, the attempt to subordinate violence to policy, and the passion and enmity inherent in any armed struggle; we also know that war is a process that transforms politics into violence and involves the imposition of one's will on an opponent. We can therefore extend this understanding in multiple ways without confusing ourselves. Certainly theories such as Clausewitz's are not a panacea or a substitute for other forms of security and defense knowledge. However, they show the advantage of having a solid base of understanding to build on when analyzing highly specific military and security problems.

For example, naval warfare is described by domain-based theories of war such as those of Alfred Thayer Mahan and Julian Corbett. The unique characteristics of the sea may make for distinct dynamics when compared to operations on land, but naval warfare inherits the general attributes and functions of war as described by Clausewitz more abstractly. Additionally, today's warfare might involve information technology to a degree that might have surprised soldiers even decades ago, but that technology amplifies existing attributes of war such as passion and enmity by giving them a different type of outlet. Finally, we can uniquely describe the strategic culture of al-Qaeda while noting that it still fits the general outlines of what Clausewitz originally described. After all, it still involves the imposition of will on opponents, and it is a violent expression of (religious) politics.
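Following the essay's diagram, these examples can be sketched as an inheritance hierarchy (the class and attribute names are my own crude labels, not Clausewitz's): each subclass keeps the Clausewitzian core while adding domain-specific characteristics.

```python
# A crude sketch of the essay's inheritance hierarchy of war theory.
# Attribute labels paraphrase the Clausewitzian core named in the text.

class War:
    """Superclass: the core every form of war shares."""
    core_attributes = (
        "play of chance on the battlefield",
        "subordination of violence to policy",
        "passion and enmity",
    )

class NavalWar(War):          # Mahan and Corbett: distinct dynamics at sea
    domain = "sea"

class NetworkedWar(War):      # information technology amplifies the core
    domain = "information-enabled"

# Subclasses inherit the core attributes without restating them.
print(NavalWar.core_attributes == War.core_attributes)   # True
print(NetworkedWar.domain)
```

The design choice mirrors the essay's argument: what is distinct about a domain lives in the subclass, while the shared Clausewitzian base is defined once and never reinvented.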

Sometimes history and theory can be a straitjacket that prevents the analyst from recognizing qualitative change, and certainly many who say "___ warfare is not new" can be remarkably close-minded. But anyone seeking to understand the complex ways of war on today's battlefields can also see the necessity of not starting from scratch, and of building on something that is both internally consistent and historically validated, as Clausewitz's conception of war and politics is.

Why Security Concepts Go Wrong

Having explained the challenges of security analysis and the utility of conceptual models, it is now time to describe why security thinking often goes wrong in practice. In essence, the failure of security thinking can be summed up by two types of errors:

  1. Instrumental justification of flawed ideas.
  2. Failure to pay heed to existing knowledge and theory.

The first way that security thinkers often depart from the sound development and usage of concepts is to justify bad ideas purely instrumentally. Often, bad ideas are justified because they perform some exterior function of perceived value. For example, the theory of Fourth Generation Warfare (4GW) has often been attacked as historically illiterate and theoretically incoherent. Defenders of the concept responded by arguing that, while the theory may not be defensible on its face, it was nonetheless useful for triggering useful organizational, doctrinal, and strategic reforms in the US military. [10] The idea, in other words, is not intended for academics and theorists but is a practical vehicle for accomplishing some important goal.

On its face, this logic is hard to argue with. Obviously, the United States faces many security challenges, and ideas with practical value in combating them are the goal. Additionally, one cannot deny that the rigidity of existing security institutions and doctrines is a barrier to combating these threats. Defenders of concepts such as 4GW also often retort that they are doing creative and useful work and that critics are narrow-minded martinets out to stifle creativity, innovation, and intellectual synthesis. However, a look below the surface suggests that these justifications are both weak and dangerous. Certainly security thought cannot rise to the accuracy and precision expected of the sciences, and certainly there is value in conjectures, thought experiments, brainstorms, and other spurs to creative thinking. However, we must also remember that men's and women's lives are at stake.

If concepts are used as inputs to consequential decisions, they ought to be justified by something stronger than a vague argument that the concept is necessary to spur innovation and organizational change. A doctor who utilized a theory or idea known to be wrong because he believed it was necessary to shake up the rigid organizational culture at the hospital would almost surely incur severe consequences. An engineer who built a bridge according to a design known to be questionable because she thought it necessary to spur much-needed changes in civil engineering would, at the very minimum, never build another bridge. It is not clear why law enforcement, foreign intelligence operations, and military tactics and strategy are somehow exempt from such moral considerations.

Additionally, the danger of instrumental justifications is their subjectivity. If a theory is justified due to the subjective perception that it will lead to some organizational outcome of value, what is to stop theories from being cynically used to acquire bureaucratic advantage? What is to stop the analyst from saying “well, I may not be able to really justify this idea but I know it will get my organization funding and enhanced powers and authorities” or some variant of that sentence? Certainly all organizations must beg for their bread, and are entitled to use all manner of public and private persuasion to do so. However, when organizations develop flawed security concepts simply to get ahead of their bureaucratic rivals, they not only mislead their political masters and policy audiences, but also promulgate ideas that can only be dislodged after a laborious and painstaking effort.

The reason it is hard to get rid of bad ideas is that they often change form to protect themselves from criticism. Another feature of 4GW and related ideas was that they constantly shifted to accommodate new facts or deflect criticism. [11] When an intellectual regime's main focus is simply protecting itself from criticism or mindlessly altering the theory to accommodate new facts, it becomes "degenerative" in that it fails to yield new applications and discoveries. [12] Some of this can be avoided by formalizing the concept, making its interpretation as unambiguous as possible. For example, Clausewitzian theory is remarkably insistent that "absolute war" is an ideal-type: in the real world, there is always some constraint on how war is waged. As an implication of this concept, we should expect that the ability of an actor to achieve its strategic objectives is always constrained in some shape or form. [13] Because of this, we can connect the concept to real-world events that we can observe. Formalization, however, need not entail predictive quality. The benefits of formalizing concepts include the generation of useful new areas for research and analysis, the organization of empirically observed correlations and facts, the testing of often ambiguous ideas for hidden assumptions and problems, and other myriad achievements of value. [14]

Another source of error can be found in the failure to re-use old concepts and learn from history. Chasing after a new fad or buzzword when an old concept would serve perfectly well is a common and depressing tendency of the defense community. It also implies a larger failure to learn from history and to re-use useful ideas and understandings.

The so-called “new wars” theory that was briefly popular in international studies ignored older ideas with more explanatory coherence and power in the rush to declare that the “new” wars were qualitatively distinct. [15] When an old idea will do, is there a reason to invent a new one other than self-indulgence? Of course, we should not pretend that the respective analytical utility of the old and new ideas are easy to evaluate. But some evaluation ought to be conducted prior to tossing the old literature overboard to jump on the latest mil-intel bandwagon. What is deficient with the old idea? What does the new one add? Too often, these considerations are simply ignored. And this ought to be understood as detrimental to both policy concerns and intellectual progress.

In policy, the constant invention of new ideas impedes long-term strategy and planning. How could it be otherwise, in an environment where institutional intellectual memory is sparse to nonexistent? The American security community, like the hapless protagonist of the 2000 film Memento, is hobbled by a fundamental inability to store and re-use long-term memories. Short-term memory is sufficient for low-level tasks, but it cannot provide a foundation for any endeavor of nontrivial importance. And this merely scratches the surface of the entirely self-inflicted damage caused by a yen for constantly shifting fads and buzzwords.

Imagine that you invented an entirely new vocabulary and terminology for driving every time you stepped into the driver's seat. Chances are, you would find it hard to learn how to drive in the first place, pass a driver's exam, or get where you needed to go. Both declarative knowledge (facts about the world) and procedural knowledge (how to perform tasks) are cumulative in nature. And, most importantly, knowledge is also a network of linked abstractions. The philosopher W. V. Quine famously said that theory is founded in a "web of belief" about the world. [16] This makes the re-use of previously useful ideas essential where possible, and the ability to understand connections between components of belief non-negotiable, if the policymaker hopes to understand and control the use of violence and coercion.

Lastly, a climate of constant intellectual revolution impedes long-term intellectual progress. Despite advances in history and social science, we still know pathetically little about the world of conflict and competition. To borrow a famous phrase from the science popularizer Carl Sagan, many of the security problems we encounter occur against the backdrop of a "demon-haunted world" dominated by fear, conjecture, and superstition. By increasing our knowledge, we can light a progressively brighter candle to illuminate the darkness, render the illegible legible, and cast away the shadows on the walls. [17] The catch, however, is that this requires cumulative knowledge. Clausewitz depended on the existence of a prior intellectual tradition of some renown to develop his theories of war. [18] Fighter pilot and strategic theorist John Boyd famously and voraciously consumed work in everything from scientific epistemology to cutting-edge research in cognitive science and physics. [19] If the goal is truly innovative and disruptive thinking, we would do well to take seriously the idea that one of the core mechanisms of creativity is the recombination of prior knowledge and structures into useful new components. [20] This is impossible when ideas are constantly being generated and thrown overboard with little opportunity for them to sink in.

Take, for example, the concept of “gray zone” strategies. Michael J. Mazarr argues that actors pursue so-called “gray zone” strategies that sit somewhere in the middle between war and peace. Mazarr acknowledges that components of gray zone strategies are not new, but nonetheless argues that the gray zone warrior is distinct because they pursue integrated campaigns that rely on new, mostly nonmilitary, tools to achieve strategic objectives while remaining under the threshold of escalation through gradual campaigns.

Entering the gray zone

However, it is still not clear what value this conceptualization adds. After all, is not the use of "integrated campaigns" to achieve desired objectives the stuff of basic statecraft? An older term, "political warfare" (flawed in its own right), has at least been used since the Cold War to describe "gray zone" activity. [21] How, exactly, is the use of quasi-military and non-military tools novel enough to merit recognition as a distinct form of strategy? And the idea that gradualism merits its own unique terminology is simply farcical. Why is "gray zone" strategy a more useful concept than Fabian strategies, strategies of erosion, cumulative strategies, or any of the other numerous terms used to describe both historical and present efforts by actors to achieve goals gradually rather than all at once? [22]

It would be one thing if gray zone theorizing added a new way to look at all of these historically observed tendencies. But Mazarr instead asserts that gray zone strategies are unique and novel because they are more coherent and intentional than their past antecedents. It is not clear, however, that past interstate conflict and competition has lacked coherence or intentionality. The funny thing is that two of Mazarr's chief examples, China and Russia, have integrated varying types of operations into their efforts precisely because of their conception of fighting a totalized ideological conflict. The irony of Mazarr's insistence on the novelty of struggles waged in a hazy borderland between war and peace is that neither Moscow nor Beijing has ever really recognized such a qualitative distinction between the two. Most of what "gray wars" really tells us is that Communist and former Communist states unsurprisingly hew to Lenin's inversion of Clausewitz: politics is war by other means.

Of course, what Beijing and Moscow think about politics and war does not have to determine American decision-making. We could set the threshold for escalating in our own way at varying levels depending on our objectives and tolerance for risk. The real skill that gray warriors have is a very old one — knowing the right level of misbehavior that one can get away with without triggering costly punishment. But that’s also something most children learn how to do to their parents at an early age. It is hard to see how we can benefit from prior knowledge if we continuously invent new abstractions for things — such as the components of “gray zone” strategy — that we already understand reasonably well.

Conclusion: The Danger Zone

Without sound national security concepts, we fight blind and dumb. However, faddish, buzzword-like concepts that are inferior to older, better-established ideas are sometimes worse than nothing at all. Concepts such as 4GW and "gray zone" strategy add little to our collective knowledge and do not help us solve the problems we face.

Notes

[1] See, for example, Betz, D. J., & Stevens, T. (2013). Analogical reasoning and cyber security. Security Dialogue, 44(2), 147–164 and Owen, William F. “The war of new words: Why military history trumps buzzwords.” Armed Forces Journal 9 (2009).

[2] Schrodt, P. A. (2014). Seven deadly sins of contemporary quantitative political analysis. Journal of Peace Research, 51(2), 287–300.

[3] Sloman, S. (2009). Causal models: How people think about the world and its alternatives. Oxford University Press.

[4] Thagard, P. (1992). Adversarial problem solving: Modeling an opponent using explanatory coherence. Cognitive Science, 16(1), 123–149.

[5] See, for example, Stewart, G. C. (2014). Hanoi and the American War: Two International Histories. Cross-Currents: East Asian History and Culture Review, 3(3), 275–285, Trachtenberg, M. (1991). History and strategy. Princeton University Press, and Press, D. G. (2005). Calculating credibility: How leaders assess military threats. Cornell University Press.

[6] Baylis, J., Wirtz, J. J., & Gray, C. S. (2013). Strategy in the contemporary world. Oxford University Press.

[7] Brodie, B. (1949). Strategy as a Science. World Politics, 1(04), 467–488.

[8] Betts, R. K. (1997). Should strategic studies survive?. World Politics, 50(01), 7–33.

[9] Gray, C. S. (2010). War-continuity in change, and change in continuity. Parameters, 40(2), 5–13.

[10] Echevarria, A. J. (2005). Deconstructing the theory of fourth-generation war. Contemporary Security Policy, 26(2), 233–241.

[11] Reid, Darryn J., et al. “All that glisters: Is network-centric warfare really scientific?.” Defense & Security Analysis 21.4 (2005): 335–367.

[12] Lakatos, I. (1976). Falsification and the methodology of scientific research programmes (pp. 205–259). Springer Netherlands.

[13] For an overview of this, see Wagner, R. H. (2000). Bargaining and war. American Journal of Political Science, 469–484.

[14] See Epstein, J. M. (2008). Why model?. Journal of Artificial Societies and Social Simulation, 11(4), 12, and Clarke, K. A., & Primo, D. M. (2012). A model discipline: Political science and the logic of representations. Oxford University Press.

[15] Schuurman, B. (2010). Clausewitz and the “New Wars” scholars. Parameters, 40(1), 89–100.

[16] Quine, W. V. O., & Ullian, J. S. (1978). The web of belief (Vol. 2). R. M. Ohmann (Ed.). New York: Random House.

[17] Sagan, Carl. Demon-haunted world: science as a candle in the dark. Ballantine Books, 2011.

[18] Echevarria II, Antulio J. Clausewitz and contemporary war. Oxford University Press, 2007.

[19] Osinga, F. P. (2007). Science, strategy and war: The strategic theory of John Boyd. Routledge.

[20] Haas, M. R., & Ham, W. (2015). Microfoundations of Knowledge Recombination: Peripheral Knowledge and Breakthrough Innovation in Teams. In Cognition and Strategy (pp. 47–87). Emerald Group Publishing Limited.

[21] Radvanyi, J. (1990). Psychological operations and political warfare in long-term strategic planning. ABC-CLIO.

[22] See, for example, Erdkamp, P. (1992). Polybius, Livy and the “Fabian Strategy”. Ancient Society, 23, 127–47, Milevski, L. (2012). Revisiting J. C. Wylie’s Dichotomy of Strategy: The Effects of Sequential and Cumulative Patterns of Operations. Journal of Strategic Studies, 35(2), 223–242, Jones, A. (1996). Elements of Military Strategy: An Historical Approach. ABC-CLIO, Malkasian, C. (2002). A history of modern wars of attrition. Greenwood Publishing Group, and Hartigan, J. (2009). Why the weak win wars: A study of the factors that drive strategy in asymmetric conflict (Doctoral dissertation, Monterey, California. Naval Postgraduate School).
