
Is Applied Behavioural Science reaching a Local Maximum?

David Perrott
May 8

This essay is part one of a series on the future of applied behavioural science. You can find part two here.

In this first piece, I’ll unpack the assumptions and reasoning that underlie my current view of applied behavioural science. In the second piece, I build on this view by proposing a set of paths to consider going forward.

The Shape of Progress

Applied behavioural science is the use of evidence-based insights, tools and techniques to solve real-world behavioural challenges. The field has seen enormous success in recent times, evident in the rate of global adoption over the past five years. Whether it’s public policy units, large corporations, consultancies or technology start-ups, teams are opening their doors to the behavioural sciences and taking the insights they have to offer very seriously. This is a huge and well-deserved victory. After all, for the past decade, the field has been optimising for broad-scale buy-in, adoption and usage. But has all the attention to adoption come at a cost?

Unbalanced Drivers of Growth

The sustainable growth of a field requires horizontal adoption and exploitation of the validated technologies — in the case of applied behavioural science, this means frameworks, techniques, processes, theoretical models, methodologies and databases. Check! We’re doing well there. However, continued growth also requires a fruitful vertical exploration of new and innovative technologies, allowing for continued expansion over the long run.


My sense is that this ‘horizontal’ growth has been heavily relied on to fuel the field’s progress narrative, while far less attention has been paid to the innovation dimension. You can see this narrative frame clearly in the way we talk about progress (the number of RCTs a team has completed, the number of behavioural units or capability-building programmes set up around the world, the geographic diversity of attendees at conferences, etc.). I don’t mean to downplay the importance of horizontal growth. It is incredibly necessary. It just isn’t sufficient on its own. Without healthy movement upward, we will inevitably start closing in on a saturation point.

Incremental optimisation isn’t Innovation

To be clear, there have been many developments over the past ten years that have moved the needle on vertical growth. However, for the most part, these improvements have taken the form of refinement and incremental optimisation of what has worked in the past: figuring out failure points, tinkering where there is room for iteration and patching up the holes as they appear. We’ve made our existing tools sharper, without adding new tools to the toolkit.

To provide a concrete sense of what I mean by this, consider progress along the technical dimensions of the applied behavioural science toolkit. Here are some examples everyone should be familiar with:*

Let’s start with methodologies. Many practitioners still seem to be leaning on some variant of Ideas42’s behavioural design methodology or the RCT protocol template that J-PAL built out years before that. Thaler and Benartzi’s Save More Tomorrow, BIT’s tax payment messages, the Obama campaign’s implementation planning prompts and Opower’s social norming nudges are still the go-to case studies when you want to talk about the impact of behavioural science. Thinking, Fast and Slow, Risk Savvy, Scarcity, Predictably Irrational and Nudge are still the most frequent book suggestions for people who are interested in learning about the field. MINDSPACE (EAST’s predecessor), COM-B, the Behaviour Change Wheel and Cialdini’s Principles are still the frameworks practitioners lean on most heavily when making sense of potential interventions.

These early innovations have been incredible success stories for the field, and their continued legacy is testament to the impact they have had. What may be cause for concern, however, is that they were almost all created more than a decade ago. Why is that the case?

It is not as though there hasn’t been talk of big innovation. There have been animated and exciting flirtations with neighbouring fields such as machine learning, systems thinking and the complexity sciences. Yet these sorts of collaborations seem to be perpetually more present in ‘future of the field’ slide decks than in the real world. This may be less surprising once the limiting forces at play are understood.

*Disclaimer: I acknowledge that my selection of examples here may be some first-class cherry-picking. In my defence, it is a tricky position to validate empirically. I’d be curious to hear from anyone who can point out more recent innovations in the field that come remotely close to rivalling the impact of the ones mentioned. The developments that came the closest for me were: pre-registrations, boosts, personality-tailored interventions, mobile tracking tools in East Asia, Living Labs, NLP-driven knowledge systems, the new science of emotion, the early work on nudging organisations to shape better UK citizen behaviour, and the organisational setup and structure of behavioural teams.

What is holding Vertical Progress Back?

Like many others, I’ve been surfing the early innovation waves created by the field’s founders. For the past seven years, I’ve been tinkering here and there, generating buy-in and building capabilities and competencies in younger practitioners. Yet in recent years, I’ve become slightly sceptical, unsure that the momentum created by those early innovations, ongoing refinements and incremental optimisations, together with wide-scale horizontal adoption, is enough to keep the field expanding outward to reach its potential over the long run.

My sense is that a substantial set of new innovation waves are needed for applied behavioural science to continue thriving. So, I decided to investigate — to understand where the challenges are and explore potential ways to solve them. In doing so, I collated a set of technical and ethical limitations that are getting in the way of vertical progress. What I’ve found is that almost all of these limitations are well studied and understood, and many of the proposed solutions have been around for quite some time.

So, why hasn’t there been significant innovation?

As I began to step back and look at these limitations together, I started to see the broad system of related moving parts, and the answer to this question became more apparent. The issue isn’t in overcoming each particular limitation directly. Rather, it is how the suggested resolutions influence and constrain one another. In certain cases, the paths put forward to overcome the technical limitations were good ones, but because they don’t operate in isolation, they are heavily constrained by other limitations, especially those of a more ethical nature. The further we move down the technical progress paths, the heavier the ethical headwinds will become, making progress slow, effortful and costly, if possible at all.

To explain how I reached the position mentioned above, I’ll unpack each of the technical and ethical limitations identified and discuss the currently suggested solutions. I’ll then step back and discuss how these suggested solutions come into conflict with one another, restricting vertical progress and creating a local maximum in the process. I’ll also explore the implications of reaching a local maximum (there’s an interesting set of costs and benefits), and finally, I’ll share a set of potential paths forward worth betting on that may get around some of the sticky points discussed. Exploring these paths may be costly in the short run, yet prove fruitful over the long run.

The Technical Limitations

1) Replicability

Firstly, and perhaps the most familiar limitation to those working in the behavioural space, is the barrage of research findings that have failed to replicate in recent years. This crisis in reproducibility emerged because of a number of factors, but the primary culprits were small sample sizes, outdated protocol structures and questionable research practices (p-hacking/cherry-picking, etc.). The incentive setup and social pressures created by the structure of the academic system didn’t help either — novel, counter-intuitive and sexy findings hold more currency than trivial, ‘nothing-new-here’ sorts of studies that may have more rigour to them.

Issues with replicability have important implications for behavioural practitioners, as the evidence-based nature of their diagnoses and interventions is one of the cornerstones that make the whole effort valuable. This value gets diminished if the evidence underlying the initial assumptions is shaky. It is easy to see how a faulty foundation here does more harm than good. Practitioners may be discouraged from using behavioural science literature out of fear that it may fail to replicate in the future, discrediting any work that they build on top of it. Even if the academic research is just a jumping-off point, the time, effort and costs saved by starting with a narrow, more concise search space can be hugely beneficial.

Fortunately, there have been some great initiatives in recent times, led by rockstar researchers like Brian Nosek, Sanjay Srivastava, John List, Simine Vazire and many others. The open science movement and the introduction of experimentation protocol features like pre-registration are good examples of this. In addition, there have been efforts to create better incentive structures for replication studies and fieldwork with larger samples. Collaborations with government and private-sector institutions have also helped provide researchers with access to the numbers needed to reach the required statistical power. A nice example of this is the ongoing collaboration between Harvard Business School and Commonwealth Bank.
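To give a hedged sense of why access to larger samples matters so much here, the back-of-envelope arithmetic is easy to sketch. The Python snippet below uses a standard normal-approximation formula for a two-group comparison; the effect sizes are illustrative, not taken from any particular study:

```python
from statistics import NormalDist

def required_n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-sample comparison
    (normal approximation), given a standardised effect size (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# A 'sexy' lab-sized effect of d = 0.6 needs roughly 44 people per group...
print(round(required_n_per_group(0.6)))   # 44
# ...but a realistic field effect of d = 0.1 needs roughly 1570 per group.
print(round(required_n_per_group(0.1)))   # 1570
```

The required sample grows with the inverse square of the effect size, which is why small-sample studies of modest real-world effects so often fail to replicate: they were never powered to detect them in the first place.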

2) Unknown Boundary Conditions

Behaviourally-informed interventions have proven to be more or less effective given the presence or absence of certain conditions. Therefore, gaining a better understanding of the context in which an intervention achieves a particular behavioural outcome is important for practitioners. A good example of this is the famous energy conservation intervention popularised by Opower. Building on earlier research by Cialdini and others, they found that showing citizens how their electricity consumption compared to that of their neighbours (the social benchmark) led to a marked reduction in energy usage. What is often not discussed about this study is that there was also a boomerang effect: participants who were doing better than the social benchmark actually regressed, slacking off on their energy savings after seeing it. With hindsight, this makes sense: social benchmarks are performance agnostic. The mechanism drives regression to the norm, not energy conservation per se. Understanding that the social benchmarking intervention is effective, but may have adverse effects for those who are ‘better than the benchmark’, is an important boundary condition, which could help practitioners to spot boomerangs and ‘big mistakes’ and to mitigate their effects.
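The boomerang logic can be made concrete with a toy model. Nothing below is an empirical estimate; the benchmark, usage figures and pull parameter are all invented for illustration. The point is only that a performance-agnostic benchmark pulls behaviour toward the norm from both directions:

```python
# Toy model of the 'boomerang' effect: a social benchmark pulls households
# toward the neighbourhood norm from BOTH directions.
def post_intervention_usage(usage, benchmark, pull=0.3):
    """Each household closes a fraction of the gap to the benchmark.
    'pull' is an illustrative parameter, not an empirical estimate."""
    return usage - pull * (usage - benchmark)

benchmark = 100  # neighbourhood average consumption (kWh), illustrative
heavy_user, light_user = 140, 70

print(post_intervention_usage(heavy_user, benchmark))  # heavy user conserves
print(post_intervention_usage(light_user, benchmark))  # light user regresses upward
```

In the published follow-up work on this effect, pairing the descriptive benchmark with an injunctive signal (e.g. a smiley face for below-average users) dampened the boomerang; in this toy model, that would amount to applying the pull only when usage sits above the benchmark.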

So, we know that these boundary conditions are there, and we know that they are important. The problem is that they aren’t well understood for the majority of existing interventions, even in the contexts where they are commonly used. That said, there has been much more discussion of boundary conditions lately. These discussions, in combination with independent variables being studied across a variety of contexts, more field studies, bigger samples allowing for sub-group analysis and more meta-analyses, should lead to exciting developments in the years to come.

3) Combinatory Effects

In an ideal situation, practitioners would set up isolated experiments to test the effects of different interventions and compare those to the effects of the interventions when combined. This is rare. In practice, a combination of interventions will commonly be tested all at once. The problem is that although there may be strong evidence for particular interventions (e.g. leveraging temporal landmarks or setting implementation intentions), the combinatory effects of two or more interventions are often not as well understood (e.g. a programme that uses temporal landmarks in combination with implementation prompts to get people exercising). This isn’t often discussed as a concern because, intuitively, it seems to make sense that piling interventions on top of each other increases the chances of solving the behavioural problem. This is a dangerous assumption, as counter-intuitive crowding-out effects may exist. More might be better, but it also may not be.
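The ‘more is better’ assumption can be made concrete with a toy 2×2 factorial model. Every number below is invented for illustration; the point is that a negative interaction term makes the combined lift smaller than the sum of the individual lifts:

```python
# Hypothetical 2x2 factorial: temporal landmark (A) x implementation
# intention prompt (B). All rates are invented for illustration.
BASELINE = 0.20      # sign-up rate with neither intervention
EFFECT_A = 0.08      # lift from the temporal landmark alone
EFFECT_B = 0.10      # lift from the implementation prompt alone
INTERACTION = -0.07  # crowding out: the combined lift is sub-additive

def signup_rate(a, b):
    """Expected sign-up rate for one cell of the 2x2 design (a, b in {0, 1})."""
    return BASELINE + a * EFFECT_A + b * EFFECT_B + a * b * INTERACTION

naive_expectation = BASELINE + EFFECT_A + EFFECT_B  # what 'piling on' predicts
actual_combined = signup_rate(1, 1)                 # what crowding out delivers
print(naive_expectation, actual_combined)
```

Note that only a full factorial (testing A alone, B alone, and A+B against the baseline) can estimate the interaction term; testing just the bundle, as is common in practice, leaves it completely confounded.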

Studying the combinatory effects of interventions isn’t a common research area. Perhaps this is because the underlying psychological mechanisms are more interesting to academic behavioural scientists, and this sort of research becomes much harder to do when there are more moving parts. However, with intervention combination being less the exception and more the rule in practice these days, it is a valuable area for more rigorous investigation. Gabriele Oettingen’s WOOP framework is a useful paragon for researchers and practitioners to look to if we want to make progress here.

4) Cultural Variation

This one is close to home for me. I work across Africa, yet lean on research that was conducted in the United Kingdom, continental Europe or the United States. The problem with this sort of extrapolation comes in two forms.

Firstly, it may be the case that the direction and magnitude of an intervention’s effects were the result of local norms that are unique to narrowly sampled WEIRD contexts, rather than some universal mechanism that is broadly generalisable. Even if the researchers have a large sample and a robust research protocol, over-generalisation is possible.

Secondly, there may be cultural beliefs, norms and expectations that crowd out the effects of particular interventions. For example, there is a lot of research on the effectiveness of lotteries as an incentive structure for increasing uptake and initial usage of services. However, in Accra, Ghana, strong religious-cultural norms and pyramid-scheme-type scams have led lottery-type incentive structures to be viewed with distaste. Organisations and institutions that use lottery-style incentives are therefore likely to suffer backfire effects and, worse, damaging erosions of their reputation, credibility and trustworthiness.

Whether it’s over-generalisation or cultural crowding-out effects, the consequences are the same: evidence-informed interventions that fail to have the expected impact on a particular behavioural problem.

There is cause for optimism though. With greater access to the internet and the growth of virtual research labs, cross-cultural research studies are more common these days. Global partnerships between research institutions and organisations are also starting to come online. Research consultancies like Busara and Ideas42 are doing some really great work in this space. My sense, however, is that to unlock real value here, behavioural researcher and practitioner competencies have to be built within local organisations. This will enable teams to conduct research and run experiments locally. Why is this important? Because it enables an important shift in attitudes towards academic research findings: rather than relying on the conclusions made by researchers halfway across the world, the findings can be treated as informed hypotheses that need to be validated locally. This creates an appetite for local experimentation, and more effective, culturally-specific behavioural outcomes emerge as a result.

5) Within- and Between-subject Idiosyncrasies

This is where things start getting really tricky.

For unknown reasons, individuals within a seemingly similar context will be more or less responsive to different interventions. To make things more complicated, an individual’s responsiveness to a particular intervention can change over time or as a result of changing contextual features. For example, one set of individuals may be responsive to a social benchmarking intervention, while another segment of the same population may not respond at all, yet be highly responsive to scarcity-orientated messaging or communications that take advantage of authority bias.

The suggested paths forward here all seem to centre on getting access to individual-level data, to better understand the target population’s heterogeneity, and using that data in combination with machine learning to deploy sharply personalised interventions that are uniquely tailored to each individual. These tailored interventions are deployed in an automated manner using individual-level psychological, cognitive and situational profiles, which dynamically evolve over time. Yeung’s ‘Hypernudges’, Thaler’s ‘Choice Engines’, Mills’ thinking around ‘Personalised Nudging’ and Fogg’s ‘Persuasion Profiling’ ideas are all good examples of interesting work being done in this area.
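As a hedged sketch of what the automated, dynamically updating end of this spectrum looks like mechanically, here is a minimal Thompson-sampling bandit in Python. The segments, message variants and response rates are all invented; real hypernudge-style systems would use far richer individual profiles, but the learn-while-deploying loop is the same in spirit:

```python
import random

# Minimal Thompson-sampling sketch: learn which message variant works best
# for each segment while deploying. Segments, variants and response rates
# are illustrative assumptions, not empirical estimates.
TRUE_RESPONSE = {  # hidden per-segment response rates (unknown in practice)
    "segment_1": {"social": 0.12, "scarcity": 0.05, "authority": 0.06},
    "segment_2": {"social": 0.04, "scarcity": 0.11, "authority": 0.07},
}
# Beta(1, 1) prior per (segment, variant), stored as [alpha, beta]
beliefs = {s: {v: [1, 1] for v in vs} for s, vs in TRUE_RESPONSE.items()}

def choose_variant(segment):
    """Sample a plausible response rate for each variant; send the best draw."""
    draws = {v: random.betavariate(a, b) for v, (a, b) in beliefs[segment].items()}
    return max(draws, key=draws.get)

random.seed(0)
for _ in range(5000):
    seg = random.choice(list(TRUE_RESPONSE))
    variant = choose_variant(seg)
    responded = random.random() < TRUE_RESPONSE[seg][variant]  # simulated user
    beliefs[seg][variant][0 if responded else 1] += 1  # update the posterior

# Posterior means drift toward each segment's best variant over time.
for seg, arms in beliefs.items():
    means = {v: a / (a + b) for v, (a, b) in arms.items()}
    print(seg, max(means, key=means.get))
```

The mechanics are simple; as the essay goes on to argue, the hard part is everything around them: the data access, preference inference and ethical constraints the loop quietly assumes away.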

Personally, I find the whole space fascinating. At the same time, I’m sceptical that real progress can be made here without resolving some of the other core technical and ethical issues that exist. I’m also not very optimistic that those issues can be resolved, at least not quickly. Progress in the personalisation space, or the lack thereof, will play a significant role in the shape that the field starts taking over the next few years. I also wouldn’t be surprised if we see rapid progress here in certain countries (especially those with high-trust institutions and collectivist cultures), while at the same time very limited progress in others. Whatever the outcome, it will be exciting to watch.

A final thought on the problem of individual differences: an alternative solution to heterogeneity is customisation. With this approach, impersonal interventions are deployed (e.g. defaults) and individuals can then adapt particular settings to suit their personal preferences (e.g. the Apple iPhone’s Screen Time feature). This gets around some of the ethical concerns that arise with a purely personalised approach, yet may be limited to a narrow set of contexts. Individuals may also simply decide to go with the status quo (or do so unintentionally), even if more suitable options exist. This, in my opinion, is an interesting and underexplored space for growth.

6) Channel & Data Access

The effectiveness of interventions can be constrained by the channels and data available to the practitioner. Public sector practitioners are largely limited to interventions in public spaces, or via relatively inflexible communication channels. They also often need to rely on proxy data measures, with assumed links to target behaviours, rather than measures that relate to the target behaviours directly. In the developed world, limited access is a headache that needs to be resolved, while in many developing nations, access to good behavioural data is such a distant pipedream that it feels weird to complain about it.

Private sector organisations, on the other hand, can access behavioural data via the product, service and communication touchpoints that their employees and customers engage with. There is a much greater opportunity here, especially given the intimate nature of particular technology products (e.g. the Apple Watch, Google’s search tool or browser, and social media platforms like Facebook, Twitter or Instagram).

If we think of data and channel access as instrumental to positive behaviour change, it is intriguing that there isn’t more interest in influencing positive behavioural outcomes via private sector vehicles. There is just vastly more potential there, in comparison to influencing citizen behaviour using the largely static data that is available via public sector channels. There are exceptions to this (China and Singapore are the obvious ones), but these public sector institutions exist within very different political and cultural environments to most other governments, especially those in the West.

Generally speaking, then, public sector institutions may not have the data and channel access to solve certain important behavioural challenges. However, a promising avenue for growth here may be to solve these problems indirectly, by focusing on audits, interventions and policy changes that shape the way companies interface and engage with citizens. Sugar taxes, GDPR, sludge audits and dark pattern penalties are all interesting examples of what is happening in this space.

7) Implementation & Scaling Issues

Even if a robust behavioural research finding is trialled and proven to be effective within a particular local context, it can still fail when implemented at scale. The reasons for failure vary: sometimes the service landscape changes; sometimes key stakeholder buy-in diminishes; other times there are reprioritisations on the product, programme or policy roadmap. Failure to implement at scale can also be due to resource constraints, inadequate infrastructure or ROI miscalculations. There may also be emergent effects at scale that simply weren’t possible to foresee during the field trials.

These factors are all difficult to control. Practitioners can lower the downside risks by identifying, discussing, accepting and managing all the potential failure points. There are also certain implementation risks that can be mitigated upfront. For one, get key stakeholders to pre-commit to scaling, given certain outcomes are achieved upon completion of the field trial. Mapping out the scaling scenario upfront and utilising planning tools such as backcasting and premortems helps too.

8) Unforeseeable Second-Order Effects

Interventions don’t influence behaviour in a vacuum. There can be important long-term effects, immediate side effects and wider unintended externalities. These effects are tricky to foresee, difficult to map accurately, incredibly hard to model and nearly impossible to measure perfectly. As applied behavioural science starts moving away from the low-hanging fruit and attempts to solve more complex (or wicked) problems that involve dynamic behaviour (change over time) and non-linear outcomes, the field is going to need to start grappling with these components.

Here, closer collaboration with systems thinkers, complexity theorists and network scientists is a useful starting point. I’m not even sure that’s enough though, especially in the case of centralised teams (large organisations, public institutions) trying to solve dynamic behavioural problems. These tend to operate in complex local contexts, with important yet hard-to-measure knock-on effects on unique individuals, their communities and the environment around them. This is one of the technical limitations that seems incredibly difficult to overcome, especially at scale. I expect contextually intimate and localised intervention deployments may bear more fruit here, but how you do that sort of thing in practice is tricky.


The Ethical Limitations

1) Preference Ambiguity

In some situations, it may seem relatively easy to infer what an individual’s preferences are. There are many scenarios, however, where this is not so clear. In these cases, the practitioner needs to go about gathering information about individuals’ preferences. This moves practitioners into the messy realm of epistemology. What kind of information can be said to provide a reliable indication of an individual’s preference? Are long-term preferences really more important than those that prioritise immediate gratification? Is relying on experts enough? Can we just ask people or observe their behaviour and infer preferences from that?

It’s a complex problem, but for the sake of discussion, let’s just say that it is solvable. Even if practitioners know what kind of information would be necessary to understand an individual’s preferences, actually collecting that information is a whole other challenge, especially if preferences are expected to 1) be heterogeneous within the target population, 2) be contextually dependent and 3) change over time. Additionally, there are also situations where the preferences of individuals are known, or the information required for them to be known can be accessed, yet individuals explicitly do not want third parties to make decisions for them (e.g. medical treatments and end-of-life decisions).

One final issue with preferences is intention attribution. Even if practitioners have a clear understanding of an individual’s goals or values (high-level preferences), they still need to understand whether the target behaviour of interest is something that the individual supports. The problem is that an intention to perform a particular behaviour can be misattributed to an individual because the behaviour is seen as instrumental in achieving a particular personal or professional goal. Matt Wallaert has a neat example of this in his book Start at the End: say it is known that an individual wants to preserve their good health, yet is strongly opposed to getting vaccines. From the practitioner’s perspective, going to get a vaccine is a behaviour that is instrumental in achieving good health outcomes, yet they would perhaps be wrong to intervene here, given the individual’s opposition to the specific activity. There is an important misalignment between the high-level preference (health) and the preferred behaviour (not getting a vaccine) that practitioners need to appreciate.

It’s a tricky yet important problem set, and the paths to resolution aren’t obvious at all.

2) Lack of Transparency

There is good evidence that behaviourally-informed interventions can remain effective even when the target populations are made aware of them. That said, there are still certain kinds of interventions where disclosure is likely to have an effect. This creates a problem. Even if the target population’s preferences are well understood and the target behaviour supported, a lack of transparency can be seen as undermining an individual’s autonomy, and is therefore worth ethical consideration.

This is further complicated by the idea that there is ‘no neutral choice architecture’. It isn’t as though a choice environment plays a passive role in an individual’s decision-making processes and then, all of a sudden, becomes active and influential once a practitioner intervenes. No, the environment is always influencing, shifting and shaping our behaviour. As a simple example of this, you don’t have to look much further than the ordering of commonly used information. There is a lot of evidence that ordering effects have a strong influence on our behaviour (top-ranked search results, survey answers and food items get selected more often). There is no neutral way to order these items. Ordering is a trivial case, but the principle applies very broadly. Influence is always at work. We are creatures of context, not creatures of ‘only psychologically informed and intentionally designed’ context. To accept this argument is to accept that the ethical discussion is more about the intentionality of the practitioners and the magnitude of influence, rather than whether a particular context or choice environment is inherently ethical or unethical.

Another consideration when thinking about transparency is mechanism disclosure. It is one thing to be transparent about ‘what’ the intervention is, but what about ‘how’ the intervention works? Take, for example, a recycling campaign that includes information about dynamic norms (an upward trend in the amount of recycling being collected in your neighbourhood). The intervention may be disclosed here, but what about how dynamic norms operate psychologically? Should this be disclosed? What effect might that have on the effectiveness of the intervention? Sometimes the mechanisms aren’t well understood; what then? What about when interventions are used in combination with one another? And at the risk of being captured by infinite regress, what about the psychological effects of the mechanism disclosure itself: should these be disclosed? Even if an intervention’s mechanisms should be disclosed, this is often very difficult to achieve in practice, let alone getting individuals to actually engage with the disclosure.

A final issue relating to transparency is disclosure around experimentation. These days, A/B tests are commonplace throughout the corporate world, and RCTs are quickly becoming a familiar instrument in the policymaker’s toolkit. However, there are still important ethical implications to consider, as Jon Jachimowicz and others have discussed in detail. There are also citizen sensitivities that need to be accommodated, as Facebook learned in 2012.

How to resolve these issues isn’t obvious, but their implications aren’t trivial either. We can’t simply ignore them because of their complexity; they are another hard puzzle for the smartest amongst us to try to solve.

3) Pseudo-Reversibility

The ability to reverse the behavioural effects of an intervention can be important. Although reversibility may seem like a strength of the softer, libertarian-paternalistic nudging approach so commonly used in applied behavioural science, especially in comparison to harder policy instruments (or customer incentives and employee penalties in organisations), this may not always be the case. The concern is that because some behaviourally-informed interventions operate outside of conscious awareness, even when optionality is available, the ability to reverse a choice may not always be possible, practically speaking. This is what Johnson, Goldstein and, more recently, Reijula and Hertwig refer to as a gap between nominal reversibility and actual reversibility. It is difficult to opt out of something you aren’t consciously aware you are opting into.

As a result, and perhaps paradoxically, the more explicit, harder policies, penalties, restrictions and punishments may in some interesting sense provide more reversibility, as they can be more easily identified, discussed and challenged. This sort of citizen activism is difficult if you don’t know whether an intervention is present or how it operates.

Again, the lack of choice architecture neutrality is an important principle to appreciate here.

4) Unknown Externalities

Let’s imagine the primary effects of a behaviourally-informed intervention are known. Additionally, let’s imagine these effects align with the target population’s preferences and that the target behaviour is supported. Let us also imagine the intervention is transparent and its operating mechanisms are disclosed. In this scenario, deployment of the intervention may still be ethically problematic due to its wider rippling effects. These knock-on effects could be intended or unintended, known or unknown. The effects could be personal (for example, getting people to exercise may have pendulum effects or moral licensing consequences in the short run, and wider physical health outcomes over the long run). The effects could also be social (for example, the primary effects of the intervention lead to broader effects on social group cohesiveness) or environmental (the primary effects lead to changes in the environment surrounding the individual).

A narrow focus on the target behaviour and related outcome measures is a feature of the behavioural approach (‘get uncomfortably specific’, as Kristen Berman of Irrational Labs likes to say). However, without appreciating the wider ripples created by the intervention, this feature can end up being a bug.

Widening the boundaries to the system within which the intervention operates seems to be an important and ethically necessary exercise, but one that isn't often practised or discussed amongst those in the field. Of course, where the system boundaries should be drawn is complicated and, in some sense, an impossible question to answer perfectly. Just moving one order out seems like a good starting point.

5) Data Privacy and Tracking Concerns

As discussed previously, given the limitations of assuming an individual's preferences and their responsiveness to particular interventions, gathering behavioural, situational and psychographic data can be incredibly helpful. Personalised option sets, tailored interventions and dynamic updating are all much more difficult (if not impossible) without detailed and reliable individual-level data. The concern is that collecting and utilising this data brings up a set of ethical issues that seem to magnify disproportionately with every additional step practitioners take down this path. The core ethical considerations here are transparency of data collection (does the individual know their data is being collected, and can they opt out?), transparency of data use (does the individual know how their data is being used to influence their behaviour, and can they opt out?) and accountability for data privacy (who is responsible for the individual's data privacy, and what are the consequences if this data is leaked?).

Even if these ethical issues are resolved, there is also an important psychological dimension to individual-level data collection, usage and privacy. Individuals seem more comfortable with, and open to, tracking and data-driven interventions when the activities are perceived as norms and are seen as coming from benevolent and highly trustworthy institutions. The problem is that norms, benevolence and trustworthiness are all dynamic and unstable psychological percepts. They move up and down with time. Innovating along a path that is so susceptible to social and psychological backlash is a risky endeavour, especially as the sensitivity seems to increase with every step forward.

For example, compare the Obama election campaigns (shaped by VAN) to those deployed by Brexit and Trump (shaped by Cambridge Analytica). Arguably, the data collection capabilities and micro-targeting tools utilised in 2016/2017 were just the next generation of the Obama campaign toolkit, yet the societal reaction was an order of magnitude worse. Of course, some of this reaction was driven by CA's malicious manner of data collection, but individual-level psychographic profiling and finely tuned micro-targeted messaging (often referred to as a psychological warfare tool) were also serious concerns at the time. Yet slightly blunter versions of these tools are now used as consulting case studies.

The general sentiment towards personal tracking, data collection, data-driven micro-targeting and personalised interventions seems slightly less apprehensive these days. However, it wouldn't be wise to forget that the landscape is still a minefield. That is to say, it is an extremely high-risk environment with catastrophe just waiting to happen again. How do practitioners continue down the path of ever more finely tailored interventions without risking a debilitating CA-type event? Especially knowing that the societal reaction could significantly damage the long-run reputation of the field and put a stop to progress entirely. Perhaps I'm overly precautionary in my thinking here. It is also likely that risk redundancies will emerge from familiarity and normalisation, similar to what seems to be happening with the social credit systems being so eagerly adopted in the East.

6) Broader Psychological Implications

The position presented by Applied Behavioural Science can often be perceived as leaning more towards the paternal than the liberal. As the field continues to expand horizontally, the implications of this belief set shouldn't be overlooked, especially given the research on the psychological consequences of learned helplessness and a low sense of agency (e.g. stress, passivity, more frequent depressive episodes, etc.). Even if 'the world is hard', Applied Behavioural Science needs to be careful not to lobby for ideologies that lead to a self-fulfilling, prophetic over-reliance on third parties to get through life.

It is important to acknowledge the cognitive distortions that plague our minds and the motivational mishaps that lead to self-control failures. There are also, however, downsides to pushing narratives that dispel a sense of self-determination, self-reliance and self-confidence. The balancing act requires accepting the mind's palaeolithic predispositions while still instilling a sense of agency. We need to be telling better stories of how these two realities can operate together. The world might be hard, but that's okay, because there's a MacGyver in you (or perhaps a 'Matt Damon from The Martian' for younger folk).


The Bigger Picture

The collection of technical and ethical limitations is difficult enough to overcome in isolation. My sense, however, is that the real challenge is the constraints that the limitations put on one another. Overcoming the difficulties associated with population heterogeneity, preference ambiguity, longitudinal effects and second-order externalities is limited by individual-level data collection and usage concerns. Moving forward with combinatorial approaches to behaviour change is limited by the need for intervention disclosure, mechanism transparency and narrow channel access. Culturally specific interventions that use experimentation at a local level dial up the importance of understanding the ethics surrounding field experimentation. Attempting to make significant progress down any one path (e.g. personalisation) results in tension in other parts of the system (e.g. data usage concerns).


What are the implications of this?

The best way I have found to make sense of the position applied behavioural science is moving towards is by leaning on a concept often used in mathematics and evolutionary biology — namely, a local maximum**.

**I will digress briefly to unpack what a local maximum is. This should assist with understanding the various conclusions I discuss and the suggested paths I put forward. The concept isn't necessary to grasp in all its detail, though, so feel free to skip to the next segment if you aren't interested or are already familiar with the idea.


Local Maximums — A Brief Digression

To help make sense of evolutionary processes, biologists often use a visualisation tool called a fitness landscape. The tool demonstrates the relationship between genotypes (an organism's genetic makeup) and reproductive success, as a way to understand paths of evolutionary optimisation within a particular search space.

What does that mean?

In short, when organisms, let's say Darwin's famous finches, for example, reproduce, new gene combinations and slight mutations inevitably emerge. Biologically, the baby finches are subtly different from their parents. If the baby finches manage to survive childhood and go on to reproduce, this slight shift will likely happen again. And again. And again…

This process of reproduction and random mutation is how evolution moves an organism around the genetic search space. Random mutations that increase the fitness of the next generation are more likely to be reproduced and passed on, while mutations that lower an organism's fitness are less likely to be.

Back to the fitness landscape

You can imagine the evolutionary process randomly moving around a three-dimensional landscape. The horizontal plane is a collection of possible genotypes, a two-dimensional search space that evolution moves through. At every point in the search space there is a vertical fitness level, which you can think of as how successful a given genotype is at reproducing. So evolution slowly (the sort of slow we cannot even comprehend) moves through the search space at random, and when it hits on a mutation that improves fitness (reproductive success), it moves upward, gradually climbing a hill. Through this process, evolution iterates on what is working (throwing away what is not), gradually optimising an organism for a particular environment. In this way, evolution works a bit like a blind man with a stick walking up a hill. The blind man uses his cane to feel the space in front of him, and if the ground is slightly higher, he moves toward it, and so upward. He then repeats the process, gradually making his way to the top of a hill.

This is the core operating algorithm of the evolutionary machine. You and I are the results of this elegant and beautifully simple optimisation algorithm. We wouldn’t be here without this. Neither would any living organism.

In saying that, evolution's core strategy isn't perfect. Just like the blind man with his stick, evolution cannot see. All it can do is feel around for ground that is slightly higher than the turf it is currently on. As with the blind man, blindness becomes a bit of an issue when an organism gets to the top of the hill. At this point, there is no higher ground, no way to make any further progress upward. The blind man has reached the peak of his hill.

At this stage, our blind man metaphor starts to lose its grip.

Our blind man's purpose isn't perpetual hill climbing, so he can grab some cold water, eat the sandwich he packed in his bag, take a deep breath and start making his way down the hill. No harm done. Evolution, on the other hand, is an optimisation algorithm whose core function is getting to the highest point possible. This is fine if the process just so happens to climb the highest hill in the fitness landscape (the global maximum). The problem is that there are many hills and mountains and only one Mount Everest, so the chances of that happening can be quite low, depending on the landscape. It is perhaps more likely that the algorithm has reached one of the smaller peaks: a local maximum, rather than the global one.

A downside of narrow optimisation functions

The problem with evolution reaching a local peak is that it doesn't have a mechanism for descending into the valley to try other hills that may have higher peaks. All it can do is use its cane to feel around for higher ground in the search space around it. And because the surrounding spaces are either flat or lower than its current position, it gets stuck. Evolution, unfortunately, just doesn't have the ability to lower an organism's fitness in order to head down into the valley in search of higher peaks. It just keeps perpetually searching the space around it, at least until the landscape changes shape. Until that happens, the blind man stays stuck on his hill.
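The blind-man's-stick process above can be sketched in a few lines of Python. This is a toy illustration, not a model from the essay: the one-dimensional landscape, step size and starting point are all invented for the example.

```python
import random

# A made-up one-dimensional fitness landscape with two hills:
# a smaller peak near x = 2 (height 5) and a taller one near x = 8 (height 9).
def fitness(x):
    return max(0.0, 5 - (x - 2) ** 2) + max(0.0, 9 - (x - 8) ** 2)

def hill_climb(start, step=0.1, iterations=1000):
    """Blind hill climbing: probe one step to either side and only ever
    move to strictly higher ground, like the blind man's cane."""
    x = start
    for _ in range(iterations):
        candidate = x + random.choice([-step, step])
        if fitness(candidate) > fitness(x):
            x = candidate  # higher ground found, so step upward
    return x

random.seed(42)
# Starting on the slope of the smaller hill, the climber gets stuck at
# its peak, even though a taller peak exists at x = 8: a local maximum.
print(round(hill_climb(start=1.0), 1))  # → 2.0
```

Because the rule only accepts strictly uphill moves, no sequence of single steps can cross the valley between the two hills, which is exactly the trap the digression describes.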


So Are We Reaching a Local Maximum?

My sense is that Applied Behavioural Science is closing in on the peak of a local maximum. There are a variety of factors that give me this sense, but the primary one is the diminishing rate of new innovation, driven largely, in my view, by a set of inter-related technical and ethical limitations that are constraining one another.

There are several implications here worth briefly discussing:

  1. Reaching a local maximum isn't inherently a bad thing. Over the next few years, we will continue to gain clarity on how to utilise the current toolkit most effectively. For example, I think we'll start getting a clearer sense of where centralised public sector nudging is effective and where it isn't. Greater distillation, refinement and incremental optimisation are still progress. So is wide-scale adoption.
  2. A local maximum isn't a global maximum, meaning there is still greater value to unlock on the Applied Behavioural Science landscape. I firmly believe this. Fortunately, unlike evolution's fitness function, practitioners can override the optimisation algorithm and head down, off the peak and into the valleys below, in search of paths to other peaks.
  3. Not all practitioners need to venture down into the valleys, at least not all of the time. If Applied Behavioural Science is to survive over the long run, it is important that many practitioners continue to leverage what works, expand adoption and build capabilities in teams, organisations and institutions around the world. In saying this, I would encourage practitioners to start spending some of their time in the valleys, investigating potential paths to new innovations. The future of the field depends on exploring promising new avenues today.

Potential Paths Worth Exploring

I’ve identified a set of alternative innovation paths, two of which I’m starting to take very seriously. I’ll share my thinking on these paths in the second part of this series.

In the meantime, if you would like to discuss the conclusions I've reached in this essay, where I'm wrong (I would love to update and refine my view on this subject) or potential paths forward, please feel free to get in contact. Twitter or LinkedIn DMs are probably easiest, or just comment below.

I'll also be speaking on the topic at the first virtual version of the Sydney Behavioural Economics & Behavioural Science Meetup, next Wednesday (13/05/20). You can grab a spot at the virtual meetup here.
