Further Reflections on Terrorism and Risk

Adam Elkus
Rethinking Security
Nov 25, 2015

This is a small addendum to my earlier post on terrorism and risk communication.

First, Cheryl Rofer, a contact with a deep understanding of the risk communication literature and a great deal of practical experience with public health risks, reminded me that I was unintentionally paraphrasing the “information deficit” model of risk communication. As I want to give credit where credit is due, I should explain this model and why it has failed. The information deficit model, as its name implies, presumes that public ignorance of scientific facts stems from a lack of information; hence, the problem can be rectified by experts transferring information to non-experts. Unfortunately, there is little empirical evidence that this is a successful communication strategy, and it has largely been abandoned after two decades of attempts to empirically validate it. There is, of course, much less consensus on what should replace it. I myself am partial to the cultural cognition school, but there is really no consensus on how to communicate risk and expert knowledge. Moreover, as Lawrence Freedman has noted, there are special challenges in terrorism risk communication that do not exist elsewhere.

But that’s not really the point of this post. The larger issue is that, to some extent, I am wondering whether probability and risk is really the right frame for thinking about terrorism, period. Another obvious problem with Bob’s communication strategy, beyond the issues described in the previous post, is the way in which it frames the phenomenon of interest. Patrick Porter wrote a good article about strategy and discretion in the shadow of World War II. As it turns out, policymakers had far greater choice in WWII than popularly believed. Other good takes have been written on the way in which US Cold War strategy prioritized the liberal state even at the cost of greater risk. A particular case study involving the war on terror also illustrates this principle. In my prior post, I argued that tradeoffs and frank talk about costs and benefits may be useful. Some argued to me offline that this would be just as ineffective as talk about probabilities. There is, however, some evidence that such a strategy may have merit.

Kroenig and Stowsky, evaluating post-9/11 counterterrorism, argued that

The shock of war is thought to be closely associated with the growth of the state, in the United States and elsewhere. Yet each proposal to significantly expand state power in the United States since September 11 has been resisted, restrained, or even rejected outright. This outcome — theoretically unexpected and contrary to conventional wisdom — is the result of enduring aspects of America’s domestic political structure: the separation of powers at the federal level between three co-equal and overlapping branches, the relative ease with which interest groups access the policy-making process, and the intensity with which executive-branch bureaucracies guard their organizational turf. These persistent aspects of U.S. political life, designed by the nation’s founders to impede the concentration of state power, have substantially shaped the means by which contemporary guardians of the American state pursue “homeland security.” War does make the state, but not as it pleases. Theoretical approaches to state building should recognize that domestic political institutions mediate between the international shock of war and domestic state building.

I think the argument is made too strongly. I would be more comfortable viewing this in terms of a constraint on US state expansion rather than rejection per se. By comparison, Soviet planners sought to develop an optimal nuclear strategy, but Andrew Marshall noted that the optimality of this strategy was bounded by the constraint of bureaucratic politics. Still, the constraints on the growth of a homeland security state have been substantial. Americans, for example, will likely never create a domestic intelligence agency or adopt the kind of policing and domestic intelligence structures seen in Europe. Why?

The problem with the risk frame is that it is militantly anti-Clausewitzian. Decision-makers and the public weigh alternatives, however roughly, against a crude but nonetheless real set of preferences about what they want and what they are willing to pay to get it. This is not decision-making based on risk, though it almost certainly involves subjective perceptions of risk. It’s decision-making based mostly on value-rationality. This has long been understood by historians and scholars of national security, given that “security” itself is subjectively defined. So the public doesn’t see the problem in terms of risk per se. They see it in terms of a hostile force making war on them, from which they expect some form of protection. However, they are not so covetous of protection that they will assent to any form of it, as the Kroenig and Stowsky article indicates. They want decisions based on what they believe achieves their own weighting of costs and benefits, however fuzzy.

For the Bobs of the world, all of this is potentially helpful for getting their message across. Despite the lack of consensus on the best way to talk about security and risk, one thing we do know is that experts trying to forcibly “inject” knowledge into notional Joes does not work and represents a manifestation of an old and discredited communication strategy. If the public were ruled purely by fear of terrorism, Kroenig and Stowsky argue, it would have assented to an unbounded rise in the power of the state to protect it. But it has not, for varying reasons. Kroenig and Stowsky attribute this to the nature of the US political system and its enduring values and structures. Perhaps this lends some support to a cultural cognition view of terrorism risk communication. An example of this research can be seen here:

Why do members of the public disagree — sharply and persistently — about facts on which expert scientists largely agree? We designed a study to test a distinctive explanation: the cultural cognition of scientific consensus. The “cultural cognition of risk” refers to the tendency of individuals to form risk perceptions that are congenial to their values. The study, published in the Journal of Risk Research, presents both correlational and experimental evidence confirming that cultural cognition shapes individuals’ beliefs about the existence of scientific consensus, and the process by which they form such beliefs, relating to climate change, the disposal of nuclear wastes, and the effect of permitting concealed possession of handguns. The implications of this dynamic for science communication and public policy-making are discussed.

None of this should be construed to say that probabilities are not useful whatsoever. But we should not confuse the evidence people use to make claims, or the explanations underlying those claims, with statistical data or probabilistic claims more broadly. People make all kinds of decisions by fusing together multiple forms of evidence, from testimonial evidence to purely statistical evidence. There is a place for probabilities, certainly, but it’s not clear that they have to be represented numerically. People have qualitative, not quantitative, ideas of what they want to achieve and what they are not willing to bear. This is why intelligence analysis is often conducted using qualitative probabilities rather than quantitative ones: analysts speak of “high confidence” or “low confidence” instead of giving an estimate that satisfies the Kolmogorov conditions on a 0–1 continuum. Likewise, it’s more plausible that people see the things they want to achieve and want to prevent in terms of discrete or categorical representations. Numerical probabilities certainly have a place, but not as the dominant method of communication.
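As an aside for readers who don’t know the reference: the Kolmogorov conditions are the standard axioms that any numerical probability measure must satisfy. A minimal statement, in standard notation for a probability measure P defined over a sample space Ω:

\begin{align*}
&\text{Non-negativity:} && P(A) \ge 0 \quad \text{for every event } A \subseteq \Omega \\
&\text{Normalization:} && P(\Omega) = 1 \\
&\text{Countable additivity:} && P\Bigl(\bigcup\nolimits_{i=1}^{\infty} A_i\Bigr) = \sum_{i=1}^{\infty} P(A_i) \quad \text{for pairwise disjoint } A_i
\end{align*}

A verbal estimate like “high confidence” carries no such commitments, which is exactly the point: it communicates a categorical judgment rather than a number on the 0–1 scale.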
