The Science of Context

Understanding the real role of applied behavioral scientists

Jared Peterson
Behavioral Design Hub

--

I still cringe at my first attempts to design a behavior change intervention. It was as if I had skimmed a Behavioral Science textbook and thrown every idea at the problem, hoping something would stick. As a result, the solutions I developed were uninformed by the problem, the goal, and the constraints of the situation.

This is a common mistake for aspiring behavioral scientists, a group that, I find, often memorizes biases, heuristics, and nudges as if a collection of facts were what differentiates novices from experts. They approach the world of Behavioral Science thinking our primary role is nudge design, so when they get a client it is the first thing they do. Similarly, many clients think of Behavioral Science as "nudge."

Those in the field know this is inaccurate and a very narrow view of Behavioral Science, but we struggle to effectively communicate the true essence of what we do. I suggest an alternative way to view our field: Applied Behavioral Science is the science of context.

The Science of Context

Consider the following two critical pieces of evidence that have led me to this conclusion.

First, scientists model the objects that they study. Astronomers build models of planetary systems, microbiologists build models of cells, and sociologists build models of society. If the models are good enough, they tell us how the object will change over time or when manipulated. The question is, what do we build models of in Applied Behavioral Science?

I believe we build models of the context. We capture the specific context with high enough fidelity to reason about how things might change if we manipulate the context in minor ways. It is a microcosm of science as a whole — we want to find a model good enough to let us reason how small changes will affect behavior. This typically happens in our field through behavior maps or even just mental models rather than a formal theory or mathematical model. Still, we are reaching for the same thing.

This is not to say no one studies nudges. Some academic Behavioral Scientists design experiments that carefully control for context so that they might better understand the psychological mechanisms behind a nudge and build models of those mechanisms. Similarly, many study and build models of cognition and decision-making. But that is not the world of Applied Behavioral Science, where the key thing we are trying to understand is not a psychological mechanism but the context as a whole so that we can better reason about which type of intervention might be most effective.

Second, consider what implications we take away from a failed experiment. Imagine the Kenyan Public Health Agency tasked you with supporting efforts to increase child vaccination rates. You look to the literature, find the "authority bias," and decide to build an intervention around it. You film a video of doctors in white lab coats encouraging parents to vaccinate their children and air it on TV. Now let's imagine that the intervention fails. Would you then conclude that the authority bias doesn't exist? That every commercial of a doctor recommending medication is ineffective?

No. You do not conclude anything about that particular type of intervention. You know the intervention has worked in other situations, and your faith in "authority bias" isn’t undermined.

So if your experiment didn't falsify the intervention, what was it trying to falsify? I argue it falsified your understanding of the context. Your conclusion at the end of the experiment is that your model of this particular context was incomplete.

If we build models of the context and then run experiments that tell us about that context, then isn't our main object of study "context"?

Modeling Context

By recognizing that we are testing our context model, we can better understand and communicate our value. We don't just design nudges. Instead, our skill is recognizing and articulating context with enough clarity that it illuminates how small changes (e.g., nudges) will affect behavior.

A well-known finding is that a good model (or frame) helps us reason more clearly (Larkin & Simon, 1987). The Candle Problem, the Mutilated Chessboard, Number Scrabble, and other classic experiments show that a good frame makes solutions feel like common sense. Consider whether you would rather navigate with a list of GPS coordinates or with a map. Even if both representations contain the same information, the map is the better choice. Navigating with a list of coordinates is a puzzle or a game (see geocaching), whereas navigating with a map is simple (assuming you know how to read one). Once you have the right information represented in the right way, navigating a problem becomes simple. As Herbert Simon said, "solving a problem simply means representing it so as to make the solution transparent": the solution to every problem is already implicit in the premises; it is just a matter of finding the right way to represent what you already have (Simon, 1969).

It is the same with Behavioral Science. When we build models of the context (whether through behavioral maps or something else), we find a representation of the context that makes behavior change transparent. A colleague of mine once remarked that Behavioral Science often feels like common sense. But I would argue it is only obvious once you have begun to model (or frame) the context, which is what we start to do the moment we begin a project.

Returning to our Kenyan vaccination example: our model of the situation was wrong in some critical way that made a commonly used nudge underperform. But once we identify the relevant bit of context, the solution becomes more evident. We missed that in Kenya it is butchers, not doctors, who wear white coats (see Jang, Saldanha, Singh & Adhiambo, 2022). The problem isn't that Kenyans are unwilling to listen to doctors; rather, they were confused about why butchers were telling them to vaccinate their children. With that additional clarity about the context, we can design a better intervention using more culturally appropriate clothing and signals of (medical) authority.

Behavioral Science is not a collection of one-size-fits-all solutions; it is a continuous process of studying and adapting to unique contexts so that we can identify how a change to a context will lead to other changes. As the context becomes clear, so do possible nudges: they become more specific and better adapted to the situation. But when we fail to understand the context, we fail to understand how a change will affect the behavior of individuals.

As Cash et al. (2022) said of the Behavioral Science experts they interviewed:

There was a general consensus that the problematic behaviour was normally analysed in such depth that the intervention became obvious in principle yet required extensive creative design work and iteration in order to translate that principle into real interventions.

Applied Behavioral Scientists spend most of their time trying to understand and model a specific and unique context accurately and functionally (including the psychology, the behavioral journey, the constraints, the points of leverage, etc.). Once we find the right model and capture the relevant context, it is often quite apparent which interventions will work and which will not (unless we missed something), because our model allows us to reason about how small changes will affect the rest of the system (Klein et al., 2007).

When we test an intervention, it is more a test of our understanding of the context than a test of the nudge. If the intervention doesn't work, it is not because "nudges don't work." Instead, we have failed to build an accurate model of the context that lets us reason about how changes will affect the behavior of individuals in that context.

Conclusion

"Context" is a tricky word, and I have been playing coy by not defining it. By some definition of "context," all scientific disciplines study "context." the rate of neurons firing could be context, as is the day of the week and the position of the planets. So what exactly do I mean by "context"?

The simple answer is this: anything relevant to behavior change is context. The more complicated answer is that what counts as relevant is constantly changing. Sure, we have models that help direct us to what is relevant (e.g., COM-B, 10 Conditions for Change, Health Belief Model, etc.), but as Vervaeke et al. (2012) argue, there cannot be a grand theory of relevance, as it is impossible to posit a priori everything that could be relevant to behavior.

“[A]ny information that we find relevant does not form some stable or homogenous class. Things we find relevant one minute can be completely irrelevant the next. The classes of things we can find relevant are extremely heterogeneous. We can find things that happen on Tuesdays relevant or all white things relevant.”

Identifying which psychologies, constraints, sub-segments, cultural beliefs, behaviors, or other contextual factors will be relevant to a particular problem is hard, especially in foreign cultures and situations where we lack first-hand experience. This is why user research and testing are so central to Behavioral Science and always will be. We are always trying to falsify our current understanding of what is relevant in a particular situation, because what matters here may never have mattered anywhere else. So we cannot rely solely on experience, theory, or past empirical work. You can never fully predict when some Supposedly Irrelevant Factor (Thaler, 2015) will turn out to be a crucial piece of context driving behavior, such as the set of all Mondays or the color of a butcher's coat.

In sum, Applied Behavioral Science is the science of building models of a context with enough detail that they illuminate how small changes (e.g., nudges) will affect behavior. Our value lies less in our clever nudges and more in our ability to understand relevant contextual factors. We are scientists of the particulars, identifying small distinctions, micro-frictions, and causal relations in the complex web of a particular context. By renewing our understanding of our role as students of context, we can escape the misperception held by so many novices and clients that what we do is design nudges, put the emphasis back on the science that drives so much of what we do, and understand our true value.

Please clap 👏👏 if you find this post helpful. Thanks!

Jared Peterson is the founder of Behavioral Change Expert where he merges Behavioral Science with the study of expert intuition (Naturalistic Decision Making) to improve decisions and change behavior. He has a Master’s in Behavioral and Decision Sciences from the University of Pennsylvania as well as a Bachelor’s in Psychology from BYU-Hawaii. He also splits his time as a researcher at Shadowbox Training. You can find him at www.behaviorchange.expert

References

Cash, P., Vallès, X., Echstrøm, I., & Daalhuizen, J. (2022). Method use in behavioural design: What, how, and why? https://doi.org/10.57698/V16I1.01

Jang, C., Saldanha, N.A., Singh, A., & Adhiambo, J. (2022). Implementing Behavioral Science Insights with Low-Income Populations in the Global South. In Mazar, N., & Soman, D. (2022). Behavioral Science in the Wild. University of Toronto Press. (pp. 277–283)

Klein, G., Phillips, J. K., Rall, E. L., & Peluso, D. A. (2007). A data-frame theory of sensemaking. In Expertise out of context: Proceedings of the Sixth International Conference on Naturalistic Decision Making (pp. 113–155). Expertise: Research and Applications Series. Mahwah, NJ: Lawrence Erlbaum Associates.

Larkin, J. H., & Simon, H. A. (1987). Why a Diagram is (Sometimes) Worth Ten Thousand Words. Cognitive Science, 11(1), 65–100. https://doi.org/10.1111/j.1551-6708.1987.tb00863.x

Thaler, R. H. (2015). Misbehaving: The making of behavioral economics. W W Norton & Co.

Simon, H. A. (1969). The Sciences of the Artificial. Cambridge, MA: MIT Press.

Vervaeke, J., Lillicrap, T. P., & Richards, B. A. (2012). Relevance Realization and the Emerging Framework in Cognitive Science. Journal of Logic and Computation, 22(1), 79–99. https://doi.org/10.1093/logcom/exp067
