Seven Reflections on Behavioral Science for Social Change

I’m writing to share some of what I learned over the last few months about applied behavioral science (a.k.a. nudging, behavioral economics, or behavioral insights). In other words, how can we take what we know about human behavior and put that toward improving wellbeing?

Overall summary: I see the application of behavioral science as a promising tool in the toolbox of “what works” social policy. Behavioral science encourages government and social sector leaders to experiment and therefore learn more about what is or isn’t effective. Furthermore, it focuses on people’s actual behaviors, a level of analysis that is often overlooked in policy or program design. Of course, nudges are not a panacea that will solve all of our social ills. They are often limited in application, and they may only have short-term effects (an area for further study).

About my experience: In the fall and winter of 2015–16, I took an “immersive field class” on behavioral insights. Our class of roughly 50 students worked with government agencies in the UK and the Netherlands. For my team’s project, we consulted with the Netherlands’ tax collection agency, Belastingdienst, on nudging more taxpayers to use an online messaging service instead of paper mail. I am also drawing on my experience attending Behavioral Insights Study Group events at Harvard and discussing the ideas with classmates. In these meetings, I’ve seen presentations from researchers (such as Sendhil Mullainathan and Katy Milkman) and practitioners (such as staff from ideas42).

Seven reflections:

  • 1. Start by thinking behaviorally: We are learning to re-evaluate our thinking about human behavior. We often rely too heavily on the assumption that people are rational. Books like Nudge and Thinking, Fast and Slow have pushed the cultural conversation beyond that assumption, and we are beginning to see what this means for the design and implementation of social programs. I would argue that too often, the social sector misses opportunities to practice “thinking behaviorally,” failing to employ user testing or human-centered design as ways to learn what people really do. These practices are particularly important in government and nonprofits, where the feedback loops between program outcomes and program design are commonly broken.
  • 2. It’s all about the experiments. Behavioral science hinges on experimentation because it is born out of the claim that we often don’t know what people will really do. Therefore, whatever we suppose will work, we need to test to see whether what we predict is what actually happens. In working with the Netherlands tax agency, we advised them to test different approaches to increasing sign-ups through a randomized evaluation (a minimal sketch of how such an evaluation might be analyzed appears after this list). This reliance on experimentation is a good reason to promote behavioral science, as it can function as a wedge to encourage rigorous testing of social programs more broadly.
  • 3. Intermediaries, such as the Behavioral Insights Team and ideas42, are driving this work forward. The Behavioral Insights Team was incubated in the UK Cabinet Office and has recently spun out as its own for-profit social purpose company. Intermediaries run experiments to test new program design features, seeking to determine what will work best, and they do this for government and nonprofits alike. For example, ideas42 ran a project seeking to nudge more people to sign up for college, in partnership with the US Department of Education and a set of social sector partners. These intermediaries are more R&D-focused than typical social sector evaluators such as MDRC, and they are heavily field-focused, unlike much academic research.
  • 4. It is hard to run experiments in government. There are a number of reasons — none of them too surprising — that make it hard to persuade government decision-makers to experiment. The ones I have noticed are: lack of technical skills to set up an experiment; risk aversion (i.e., what if the program doesn’t work?); ethical concerns about delivering different services to different populations; and lastly a general status quo bias (i.e., why do things differently than we did yesterday?).
  • 5. Reactions to applied behavioral science can diverge strongly. Some people react by saying, “No way, that is mind control.” We met with a UK Member of Parliament who said as much, rejecting behavioral science on the philosophical grounds that it infringed on people’s liberty. Others react in the opposite direction, more along the lines of, “Yes, behavioral science will be our savior!” expecting it can make the impossible happen, such as making everyone wildly enthusiastic about retirement saving. The reality is somewhere in the middle. Once we explained to the MP what examples of nudges we were considering, he opened up to the idea. On the other side, people who expect nudges to lead to huge changes in outcomes will be disappointed.
  • 6. We still need a better understanding of behavioral science mechanics. We don’t necessarily know how a given nudge works. In other words, we don’t know what happens to cause a small percentage of people in the population to act differently; why does it work for those people and not others? Similarly, we need a better understanding of how long nudge effects persist. Many hypothesize that nudges will only work in the short term and that the effect will fade over time. If that’s the case, the field may hold less promise than it currently seems to.
  • 7. We also still need a better understanding of broader behavioral science implications. At the moment, much of the success of behavioral science is around the edges of social programs, for example, making marginal improvements in uptake rates. Some of what we know about the ways people are “predictably irrational” should likely also affect how we design and operate our social and political institutions. How should we think about jury composition, given that we know groups often make decisions in line with existing prejudices? Or how should we think about election cycles in a representative democracy, if people are likely to favor shorter-term concerns over longer-term planning?
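
As promised in point 2, here is a minimal sketch of how the data from a randomized evaluation of a nudge might be analyzed. Everything in it is a hypothetical illustration: the counts, the sample sizes, and the choice of statsmodels are my assumptions, not results or methods from the Belastingdienst project.

```python
# Hypothetical analysis of a randomized letter experiment: taxpayers are
# randomly assigned to receive either a redesigned "nudge" letter or the
# standard letter, and we compare online sign-up rates across the two arms.
# All numbers below are made up for illustration.
from statsmodels.stats.proportion import proportions_ztest

signups = [460, 380]       # taxpayers who registered for online messaging (nudge, standard)
recipients = [5000, 5000]  # letters sent in each arm

# Two-proportion z-test: is the difference in sign-up rates larger than
# what chance alone would plausibly produce?
z_stat, p_value = proportions_ztest(count=signups, nobs=recipients)

nudge_rate = signups[0] / recipients[0]
standard_rate = signups[1] / recipients[1]
print(f"Nudge letter sign-up rate:    {nudge_rate:.1%}")
print(f"Standard letter sign-up rate: {standard_rate:.1%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```

In practice, a team would also pre-specify the outcome, check that randomization produced balanced groups, and plan a follow-up measurement to see whether any effect persists, which is exactly the open question raised in point 6.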

This isn’t meant to sum up the entire field of behavioral science. It’s instead a way for me to capture and share a few lessons from my experience here and to offer some guidance for people thinking about how the field will play out going forward. Thanks to Phil Ames and Andrew Levine for helpful comments.