The Nagging Issues of Nudging

Habitry
Practical Motivation Science
Oct 23, 2017 · 61 min read


By Steven M. Ledbetter & Omar Ganai of Habitry

“The first principle is that you must not fool yourself — and you are the easiest person to fool.” — Richard Feynman

In the 12th episode of The Simpsons’ 4th season, the citizens of Springfield get a cash windfall when Mr. Burns pays a $3 million fine for dumping nuclear waste in the city park. The citizens have a town meeting to decide what to do with the money. Maude Flanders suggests, “finally putting out that blaze on the East Side of Town,” but that’s rejected by the mob as, “boring.” Apu suggests more police officers since he has been shot 8 times this year and, as a result, has almost missed work. Chief Wiggum dismisses him as a crybaby. Finally, Marge presents a well-reasoned and impassioned appeal to fix the potholes on Main St. The mob loves it (after misattributing the suggestion to an older white male), and chants, “Main Street! Main Street!”

But then, appearing in the corner behind the chanting mob, a handsome (halo effect) man in a fancy suit (authority bias) cracks a joke about a mule with a spinning wheel that insults the intelligence of people in other small towns (illusory superiority); tells them he has the perfect idea for their money, but starts walking away saying, “it’s really more of a Shelbyville idea” (loss aversion). They demand the stranger tell them what the idea is (reciprocity tendency). He excitedly unveils a model of a Monorail in their town (endowment effect), and tells them he’s “built Monorails in Ogdenville, Brockway, and North Haverbrook (social proof), and by gum it’s put them on the map” (halo effect). The stranger then breaks into a call-and-response song (IKEA Effect), getting the townspeople to repeat (mere-exposure effect) a very simple word (Parkinson’s Law of Triviality) over and over again in a group chorus (bandwagon effect) — “Monorail! Monorail! MONORAIL!”

“But Main Street’s still all cracked and broken,” Marge retorts.

It was too late; the townspeople had already awarded their money to the man who had given them a sexy answer to a nonexistent problem with the potential to cause a lot of damage.

“Sorry Marge, the mob has spoken.”

On October 9th, 2017, another group of people with a million dollars in their pocket decided to whom they would award it. The Royal Swedish Academy of Sciences announced they had chosen Richard H. Thaler as the recipient of the 2017 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel.

In their press release, they noted that “his empirical findings and theoretical insights have been instrumental in creating the new and rapidly expanding field of behavioural economics, which has had a profound impact on many areas of economic research and policy.” Research, absolutely. As with Kahneman, who shared the prize in 2002, it’s hard to deny the impact Richard Thaler has had on our understanding of cognitive psychology and economics.

But policy?

“In his applied work,” the committee goes on, “Thaler demonstrated how nudging — a term he coined — may help people exercise better self-control when saving for a pension, as well as in other contexts.”

Monorails aren’t new. They were patented in 1821, and years of experimentation have revealed they have limited applications outside Disneyland, Seattle, and airport parking structures. 196 years from now, we’ll realize it’s the same story with nudges — another well-dressed stranger selling us a sexy answer to a nonexistent problem with the potential to cause a lot of damage.

But the real mystery of this story is why this con keeps working.

The Con is On

It’s apocrypha, as you say, sir. For how can that be trustworthy that teaches distrust? — Herman Melville, The Confidence-Man: His Masquerade (1857)

So what the hell is a “nudge?”

Richard Thaler and Cass Sunstein define nudges in their 2008 book Nudge as, “any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives” (Thaler and Sunstein, 2008; pg. 6).

So presenting options to a chooser in a way that reliably gets that chooser to pick a specific one without outright banning any options or offering the chooser money to pick that specific one.

Like a magician asking a member of the audience to “pick any card,” while subconsciously “nudging” them to choose the specific one that makes the trick work (except in magic this technique is called, “forcing”).

When is it OK to use nudges?

When they are, “aimed at overcoming the unavoidable cognitive biases and decisional inadequacies of an individual by exploiting them in such a way as to influence her decisions (in an easily reversible manner)” (Rebonato, 2014; pg. 4).

[ed note: This was harder to answer concisely in Thaler and Sunstein’s own words, so we’ve turned to the summary of their position by Riccardo Rebonato (2014). We feel justified in doing so because this definition is used frequently as a concise summary of Thaler and Sunstein’s position (Gigerenzer, 2015), and Rebonato (2014) was shared by Mr. Sunstein via Twitter]

According to Thaler and Sunstein, our brains have built-in cognitive biases that prevent us from making the right decisions. They’re broken. And the easiest way to consistently correct these decisions is for a benevolent Nudger to exploit those same biases to influence you — without removing any choices — back to the correct decision you would have made if it wasn’t for those biases.

So it’s OK to exploit a cognitive bias when it’s “undoing” that cognitive bias.

And what should we nudge them to do?

“Towards choices that she herself would make if she had at her disposal unlimited time and information, and the analytic abilities of a rational decision-maker” (Rebonato, 2014; pg. 4).

According to Thaler and Sunstein, the benevolent Nudger should nudge you towards the decisions you’d make if you had all the time in the world and knew all the consequences of your choices and if you weren’t ignorant of how vulnerable you are to cognitive biases.

Like how it’s OK for a magician to exploit your subconscious and force you to pick the 3 of Clubs in a magic show because if you’d seen the whole show, you’d know that he needs the 3 of Clubs to make the trick work. And if you had the time to sit down and think about it analytically, you’d really rather just pick the 3 of Clubs and have the show go well than be an asshole and ruin everyone’s fun by picking the 9 of Diamonds.

And what is the ethical justification to nudge people’s decisions?

Thaler and Sunstein call their methodology for when and where to deploy nudges “libertarian paternalism” (Thaler & Sunstein, 2008; pg. 5) and offer an ethical justification based on the following argument:

  1. Human beings display systematic deviations from 100% rational decision making. “[O]ur basic source of information here is the emerging science of choice… research has raised serious questions about the rationality of many judgments and decisions that people make” (Thaler & Sunstein, 2008; pg. 7).
  2. These deviations prevent you from making decisions in your best interest, or even decisions better than others would make for you. “The false assumption is that almost all people, almost all of the time, make choices that are in their best interest or at the very least are better than the choices that would be made by someone else” (Thaler & Sunstein, 2008; pg. 9).
  3. Because of these “hardwired” biases, you cannot “easily” learn how to make rational decisions. “Our ability to de-bias people is quite limited” (Sunstein as reported by Bond, 2009; pg. 1191).
  4. And your choices are going to be influenced by your environment anyway. “The first misconception is that it is possible to avoid influencing people’s choices” (Thaler & Sunstein, 2008; pg. 10).
  5. Nudging, as outlined by Thaler and Sunstein, is used only to influence you to make the decisions you would actually want to make… “as judged by their own preferences, not those of some bureaucrat” (Thaler & Sunstein, 2008; pg. 10).
  6. And in the direction of objectively improving human welfare. “By properly deploying both incentives and nudges, we can improve our ability to improve people’s lives, and help solve many of society’s major problems” (Thaler & Sunstein, 2008; pg. 8).
  7. While preserving freedom of choice. “Since no coercion is involved, we think that [nudges] should be acceptable even to those who most embrace freedom of choice” (Thaler & Sunstein, 2008; pg. 11).
  8. Dignity (by retaining freedom of choice per Position #7): “To the extent that good choice architecture allows people to be agents, and to express their will, it promotes dignity” (Sunstein, 2015; pg. 16).
  9. And autonomy by fostering informed choices. “Some nudges actually promote autonomy by ensuring that choices are informed” (Sunstein, 2015; pg. 16).
  10. To recap: Since you are mentally incapable (per Position #1) of making rational decisions (per Position #2), and it’s difficult to teach you how to do it (Position #3), and your choices are going to be influenced by your environment anyway (Position #4), and nudging can only be used to influence you to make the decisions you actually want to make (Position #5), and objectively improve human welfare (Position #6) while preserving your freedom of choice (Position #7), your dignity (Position #8), and your autonomy (Position #9), nudging is ethical because not nudging would be unethical. “If welfare is our guide, much nudging is actually required on ethical grounds…. A failure to nudge might be ethically problematic and indeed abhorrent” (Sunstein, 2015; pg. 16).

OK, we know that was a lot. So let’s sum it up with Thaler and Sunstein’s favorite example of a justified place to nudge: cafeterias.

Let’s say you get your dream job at a big tech company like Google. These companies feed their employees in giant cafeterias and provide excellent health benefits including gyms and doctors on site. Now every day you get to go to a Google cafeteria, grab any Google food you want, and eat it with any Googler you want.

But every day, you have to walk by the Google dessert station… and you love cake. No but like, you love cake. And unfortunately after your last visit with your Google Doctor, you’re trying to lose a little weight and watch your sugar intake. But you keep having to walk by that dessert station with the frosted sheet cake.

And no matter how much you don’t want to, and no matter how much you tell yourself, “it’s not even a good cake; it’s not worth the calories…” by the time you’ve made it through the line, you find yourself sitting down with the other Googlers, lazily stabbing a fork into a slice of frosted shame-cake.

If only there were a way to help you make the “right” decisions about whether or not to eat frosted shame-cake, but that didn’t deprive you of the option to get it. Then you’d eat less frosted shame-cake, but never feel like you had lost any freedom of choice.

Google could do nothing and hope you don’t get diabetes. Or they could:

  1. Hide the cake around the corner, making it harder to find.
  2. Attach a sign to the cake that says, “most Googlers don’t eat cake.”
  3. Use tracking data to inform you, “you’ve gone 12 days without cake!”
  4. Put the cake in the middle of the cafeteria so everyone is watching when you have to make the decision — definitely making it a frosted shame-cake.

These are all ways of using nudges to stack the deck against choosing cake, without taking away cake as an option. And everyone wins! You eat less cake, Google saves money on health care costs, we all make more money and no one loses any freedom! America!!

And this is why Thaler and Sunstein think we all need a good nudge.

You’re Only Fooling Yourself

Political language…is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind. — George Orwell, Politics and the English Language (1946)

Before we start pulling apart this magic trick, we want to start out by saying Richard Thaler and Cass Sunstein are good academics. Almost all the academic criticisms of nudges that we cite here, we found via Thaler and Sunstein’s own Twitter feeds. They are also nice people. We’ve heard many private stories about their generosity and kindness. We also totally bet they pay their taxes and help old ladies across the street. And finally, Richard Thaler’s contributions to economics are no doubt worthy of award. The con is not Professor Thaler’s experiments.

As for the argument they present for nudging — libertarian paternalism — we agree with Professors Thaler and Sunstein that there is a need for technological and social interventions designed to objectively promote human well-being. We also agree with them that those interventions should promote autonomy and dignity. And the means of these interventions should preserve freedom of choice when appropriately balanced against the social consequences of those choices. So let’s be very clear — we are not libertarians. We are not anarchists. But we are, at our core, concerned with the practical application of Motivation Science, via the difficult conversations that come up when one is engaged with exercising power over other people’s decisions. And we think these are the hard conversations that Professors Thaler and Sunstein are trying to avoid by advocating utopian technological solutions to what are really difficult social problems.

Many authors have drawn attention to the bountiful flaws of nudging and libertarian paternalism, but we’ve chosen to focus on a few of the ones raised by Riccardo Rebonato (2014) and Gerd Gigerenzer (2015). If you want a more thorough academic treatment with less cursing, we recommend checking them out (but if you want a more thorough takedown with cursing, we recommend reading the Appendixes).

To begin, let’s explore the Google shame-cake example. Every nudge needs a designer and an intent: who’s nudging, and in what direction are they nudging? Libertarian paternalism needs the intent to be to nudge you towards the choice you “would want to make”: don’t eat shame-cake. But how, according to Thaler and Sunstein, is it possible for a designer to know what anyone else really wants to do? According to Position #5 above, this must be the choice “as judged by [your] own preferences, not those of some bureaucrat” (Thaler & Sunstein, 2008; pg. 10). This choice — according to Thaler and Sunstein — is the choice you would make if you thought about it as a perfectly rational actor with complete knowledge of the consequences of your actions, disembodied from the context and the emotions of the moment. Why is that the choice you would prefer? Because, they say, that’s the choice that rational people make when they have complete knowledge of the consequences of their actions, disembodied from the context and the emotions of the moment.

It’s too bad that Professors Thaler and Sunstein have not had the opportunity to walk across campus and present that argument to any of the excellent philosophy Professors at the University of Chicago, or even any undergraduates enrolled in PHIL 20100/30000 “Elementary Logic.” Because those kids could have saved us all some time and pointed out that this argument is completely circular. It’s “A, therefore A.”

Let us prove it by stating the opposite case, “B, therefore B” which would be:

What you really want is the choice you make immediately, before you have any time to reflect. Why is that the choice you would prefer? Because, that’s the choice that people make immediately, before they have any time to reflect.

Cue sad trombone music.

What we’re left with — according to what’s left of Professors Thaler and Sunstein’s justification for nudging in the Google scenario — is designers nudging you towards what the designers are arbitrarily guessing you “would want,” but preserving freedom of choice. Doesn’t sound too bad, right? You can technically still choose cake, if you want, in all four of those nudging scenarios.

So go ahead…

  1. Run around the whole damn cafeteria feeling like an idiot while you desperately try to find what you need to get your frosted shame-cake fix.
  2. Reach over that sign and say, “I guess I’m a weirdo who wants cake.”
  3. Break your streak of making “good” choices.
  4. Or walk right into the middle of the cafeteria in front of your boss, your boss’s boss, your Doctor, your friends, and that cute person in accounting, and get a slice of pointless stupid sweetly flavorless frosted shame-cake.

Yeah. Go ahead. It’s totally your choice. Don’t you feel so free to make that choice? So autonomous? So free from other people’s manipulation or control?

Fuck no, of course you don’t. Because you’re not a goddamn idiot. And let’s be real about what nudges really are — nudges are nagging. Even the word “nudge” comes from the Yiddish word “noodge,” which means nag! You know where we learned that? It’s the first fucking footnote in Nudge.

And sadly we’re going to have to set aside the obvious questions like:

  • Are choices still “free” when the playing field for making them is tilted? We guess parents who are poor are technically free to choose to move to better school districts, but would we say that they have “freedom of choice?”
  • How can we guarantee the designers doing the nudging are free from the cognitive bias boogeymen at the heart of the justification for nudging? If de-biasing the public is so hard, then wouldn’t de-biasing the designers be just as hard? Then who nudges the Nudgers?
  • If a little nudge saves you a few calories and Google a little money… wouldn’t a big nudge like an employee badge that flashed “CAKE EATER” after you ate cake work even better while still not technically taking away the choice to eat it?
  • If these nudges are so effective, what consent do the Googlers have in their application and could we really trust that it was consent since Google clearly has a vested interest and the power to fire anyone who raises questions or doesn’t “consent?”

We’ll be real. We have lots of problems with the arguments that Thaler and Sunstein have put forth for nudging in the last 9 years. So many that listing them out in this essay would have bored you to tears (but don’t worry, we’ve debunked as many as we could, as snarkily as we could, in the Appendixes). Instead we want to focus on the thing that really scares us about nudging:

Why do we human beings keep falling for people selling us easy answers to problems we know are hard?

Professors Thaler and Sunstein’s argument for nudging is nothing new. It’s a new act in the millennia-old saga of arguing that some people are not fit to make decisions for themselves and that the only way to help them is to take away their autonomy to make those decisions. It’s Plato’s argument to replace democracy with Philosopher Kings, dressed up with some new Orwellian doublespeak. And we’re not content to blame the Monorail on the con artist in the fancy suit and call it a day. Because we don’t think Thaler and Sunstein are the con artists. We think they’re the well-meaning citizens of Springfield who invited the well-dressed Monorail salesman to the town meeting and then got conned along with us.

Because we believe good design will always be hard. We believe the goal of designing contexts should be to help people make informed decisions. We believe there are always trade-offs between personal freedoms and social outcomes. And we believe in talking about power and how to use it responsibly. We just wish Professors Thaler and Sunstein would have the courage to make that argument as a justification for design choices instead of pretending that nudges are immune from the moral consequences of using power to control context.

So let’s look at how this whole story unfolded. At the way that Thaler and Sunstein learned their assumptions, then wound up fanning common anxieties with straw man arguments obscured with political language.

The Real Story

If we are uncritical we shall always find what we want: we shall look for, and find, confirmations, and we shall look away from, and not see, whatever might be dangerous to our pet theories. — Karl Popper, The Poverty of Historicism (1957)

This is the story of a con that started long before Professors Thaler and Sunstein. It’s a story with origins in a universal human experience that we call “the Conflict of the Two Selves.” This is the fight you feel like you are having with yourself when you are torn between “what you want” — like frosted shame-cake — and what you think you “should want” — like whatever Dwayne Johnson or Kate Hudson eats. And people have been arguing over how to solve that conflict since long before Dwayne Johnson was “The Rock.”

We’ll show you how even though we all know The Conflict cannot be solved, it drives us to look for Straw Men — fictions that tell us how we’re “supposed” to think. These fictions about how people “should” make decisions are reinforced when science is misinterpreted into a worldview that paints people as hopelessly broken. And this worldview emboldens policy makers and designers to think about making oversimplified technology products — the app, the law, the nudge — that trick people into behaving, instead of using their power and knowledge to develop design solutions that might actually facilitate the motivation to change.

We’ll show you how even smart, well-meaning people like Professors Thaler and Sunstein were fooled by a new Straw Man into thinking people’s brains are broken. And that they could help people with a new technology — how to control people’s choices by controlling the context in which they make them. This Straw Man tricked Thaler and Sunstein into believing that it is possible to know what choices people “really want” to make, and that using this technology wasn’t really controlling people, it was actually setting them free.

Libertarian paternalism, as the authors and the name suggest, tries to pull the same magic trick that every powerful person since Gilgamesh and Plato has tried in one way or another — getting us to believe that “other people” don’t deserve autonomy because they just end up making the “wrong” decisions. And that technology that tricks people into making the “right” choices isn’t really taking away those people’s choices; it’s helping them make the ones that they would “really” want to make.

Thaler and Sunstein are good men who have fallen for the oldest con in policy design: They’ve talked themselves into believing nudges have no downsides. This is misleading at best, and dangerous at worst. And if you believe there are any design or policy decisions without moral trade-offs, we have a Monorail we’d like to sell you.

But if you are interested in learning how to hopefully avoid getting conned again, let’s look at the story of why this keeps happening.

The Conflict of the Two Selves: The Origins of Nudging

Reason in man is like God in the World. — Thomas Aquinas, Opuscule II (1294)

One of the fundamental experiences of adulting is knowing that we should do something, but being unable to make ourselves do it. Eat better. Get that colonoscopy. Save for retirement. If you stop and think about it, this is a really weird problem. How can you know what you should do… but still not do it?

You can’t say that about a computer. An automaton can’t “know what to do and still not do it.” Robots just do things. There is no conflict. In humans, however, there is clearly conflict between “want” and “should.” Holding a credit card in our hand as we decide between buying something or paying down debt. Lying awake thinking about mounting to-do lists. Mental debates the next morning between sleeping in and going to the gym. But what is actually in conflict? And if we can “want” one thing and “want to want” another thing, which thing do we “really want?” If we resolve this conflict, will it mean we have become automatons, or will it mean we have become fully realized humans finally able to unlock our true potential?

Obviously, these questions are old. They’re the stuff of Homer’s epics and Blade Runner. And human beings have been writing about this conflict ever since we figured out how to write stuff down. It takes many forms, but the conflict of “what I want to do” versus “what I think I should do” in literature — and in science — usually takes the form of a metaphor of two systems in conflict. Or two characters in conflict.

In The Epic of Gilgamesh, a 5,000-year-old Akkadian poem that is the oldest written story we’ve discovered, Gilgamesh the king debates Enkidu the wild animal-man about how to act in the world. In 360 BCE, Plato writes, “of the two souls of man, one governed by reason or the spirit and the other by the affections of the flesh,” both directing our behavior in the world. Saints Augustine and Thomas Aquinas spend the Middle Ages writing about the Holy versus the Corporeal battling for our souls. And in 1977, George Lucas writes about the Jedi and the Sith battling for the destiny of a young Luke Skywalker. The characters and symbols of virtues and sins shift, but the dance keeps playing out.

This is “The Conflict of the Two Selves.” The battle between what we feel we want, and what we think we should want. Philosophers have long talked about it as a “mind-body problem,” the idea that we have both a “soul” and a “body.” Or a “mind” and a “brain.” And the march of science has not resolved this philosophical puzzle. We still use the “Two Selves” metaphor for talking about our conflicting desires. Did your “reptile brain” make you lose your temper in traffic against the better intentions of your “evolved brain?” New words. New symbols. Same conflict of “I want” and “I should,” battling to become what we think of as our True Self.

Pre-Enlightenment, this conflict in the Western heart was cosmic. Little devils and angels on shoulders. The Holy Spirit against the sins of the flesh. This is the battle of “want” versus “should” that Augustine of Hippo recounted in his Confessions in 397: “as a youth I prayed, ‘Give me chastity and continence, but not right now.’”

Look, Lord, I know I really should eat a salad… I just really want this frosted shame-cake right now.

Post-Enlightenment, the battleground changes but the conflict doesn’t. After reading a bunch of Plato stuffed away in Muslim libraries, some smart Europeans get reacquainted with logic and math. René Descartes declares in 1637 that he exists because he can reason his way into it, not because God did it, and that idea catches on with a few European dudes. They invent a new form of epistemology called science, which moves the battleground for the True Self from the cosmos to the mind. We are now responsible for our own thoughts, and the Enlightenment is not shy about telling us which thoughts are good and which ones are bad. “God” versus the “Devil” turns into the “rational” — logic, reason, math — versus the “irrational” — feelings, desires, and all those other urges that get in the way of doing math.

Not content to keep the battleground contained to thoughts and deeds (and because science is all about observing reality), René Descartes also takes the Conflict of the Two Selves on its first step into the literal. In 1641 he claims the mind — pure, rational, and disembodied from context — resides in the pineal gland, while all emotion and irrational thoughts arise from the brain. So according to Descartes, the problem of getting cake when you should want salad is caused by the “pineal gland system” being overpowered by the bodily passions of the “brain system.”

Since the Enlightenment, to be a good person we must think and make decisions reasonably, scientifically, objectively, consistently, and completely unencumbered by context. And bad people are the ones who think irrationally and impulsively, governed by the sways of emotion and impervious to the appeals of reason. Of course, it’s probably no accident that all those “good qualities” were the ones that the men writing everything in the Enlightenment thought they had naturally. And all the “bad qualities” were the ones they thought were inherent to the minds of women and Africans (Malik, 2009). And in the modern West, we’ve internalized these new definitions of virtues and sins regardless of where we are in the power structure. Now we judge — others and ourselves — against this Scorecard.

Now we lie awake thinking, “am I doing what I know I should do? Am I making rational decisions? Am I being consistent? And what forces, in myself and in the world, are causing me to fail in that endeavor to consistently make the ‘right’ decisions. To be a ‘good person?’”

But that voice in our heads telling us what we “should do” is not magic. It’s not a disembodied, omniscient arbiter of what is “good” and what is “bad.” It’s just what we believe others think we should do. In 1759’s The Theory of Moral Sentiments, Adam Smith called that voice the “Impartial Spectator.”

When I endeavour to examine my own conduct, when I endeavour to pass sentence upon it, and either to approve or condemn it, it is evident that, in all such cases, I divide myself, as it were, into two persons; and that I, the examiner and judge, represent a different character from that other I, the person whose conduct is examined into and judged of (Smith, III, 1.6).

This impartial spectator “is the anthropomorphization of the calm and disinterested self that can be recovered with self control and self reflection” (Weinstein, 2017). But it is not supernatural, it is not God, and it is not always “right.” Not even Smith thought that in 1759. This voice in our head is just our imagination; a character we have created to represent what we think other people think of us. It is not the “right answer,” and it is not “what we really want.” The conversation that Smith is talking about is just our Two Conflicted Selves hashing it out about what the hell we should do. The ongoing, internal debate between what we “want” and what we think we “should want to want.”

And thinking that voice is always the “right answer” is the Straw Man. And no matter how many times science kills him, he just keeps coming back.

Death of The Straw Man: Psychology v. Economics

A taste is almost defined as a preference about which you do not argue — de gustibus non est disputandum. A taste about which you argue, with others or yourself, ceases ipso facto being a taste — it turns into a value. — Albert O. Hirschman, Against Parsimony (1992)

A few weeks ago, Stevo was passing through a small town in the deserts of Southern California. Stevo’s favorite thing to do when he’s in new places is watch people and think about why they’re doing what they’re doing. What decisions and motivations are driving their behavior? By coincidence (it was a small hotel), Stevo got the chance to watch one particular person make the same decision to purchase beer for breakfast multiple times. In fact, usually multiple times each breakfast.

Why did this man choose to drink beer for breakfast? Will he do it again in the future? Will he regret these decisions? Are they the decisions he, “really wants” to be making? Are they the decisions he “should” be making?

Nearly four centuries after Descartes, we still use the same Scorecard. Not just to judge ourselves, but to judge others. This Scorecard, with its roots in “good v. evil,” is so deeply integrated into the way we think about The Conflict of the Two Selves that it even influences the way that scientists make sense of experimental data. We conceptualize using the metaphors we know, until a new metaphor explains more and predicts things more accurately and blows up the old metaphor. Thomas Kuhn, the great philosopher of science, called these “paradigm shifts.” And there was a really big one in the 1980s that set the stage for the old idea of paternalism to become justified again.

Since Descartes, science only got more obsessed with the conflict between the irrational choices people make and the rational ones they should be making according to the Scorecard we learned in the Enlightenment. And two fields with two paradigms emerged to try and resolve this conflict — psychology and economics.

Psychology is the study of “why do individuals do stuff (especially when it makes no damn sense)?” and economics is the study of “can we predict what stuff most people will do in the future (especially with their money)?”

Contrary to popular belief, modern economists don’t assume that people behave rationally or selfishly (Binmore, 2009). Because modern economists honestly don’t care why people do stuff, they just want to know if they can predict how much stuff people will do in the future.

A psychologist wants to know why that guy at the hotel is ordering beers for breakfast. But an economist doesn’t really care about why, she just wants to know how many beers people are gonna order for breakfast in the future and how much they’ll pay for them. But clearly, these two questions are related. If you can figure out the mental processes — or “Value Math” — behind how individuals make the decision to order beers for breakfast, you can predict how many people will want beer for breakfast at a future date. But that gets messy, because even if you know the Value Math of this guy at the hotel with Stevo, it doesn’t mean you know the Value Math of the person ordering after him.

“Economists… became increasingly uncomfortable with all attempts to base economics on psychological foundations” (Binmore, 2009; pg. 8). That’s because people’s motives — the “whys” at the center of psychology — are just too fickle to rely on when making predictions about what people will or won’t do in the future. As Allen Sanderson told Stevo on day 1 of ECON 19800, “there’s no accounting for taste.”

So economists needed a justifiable way to assume that whatever mysterious and unknown Value Math people were using to make purchases was at least consistent and reliable. If they could assume the Value Math taking place in people’s brains produced reliable and consistent decisions, then they could conclude that those decisions revealed what those people actually wanted. Economists didn’t care what the actual Value Math was, they just needed to know it was consistent (Binmore, 2009). They needed a way to say, “if that guy likes beer enough to buy it now, he’ll probably like beer enough to buy it later under the same conditions.”

And their prayers were answered in Paul Samuelson’s Theory of Revealed Preferences.

[The Theory of Revealed Preferences] succeeds in accommodating the infinite variety of the human race within a single theory simply by denying itself the luxury of speculating about what’s going on in someone’s head. Instead, it pays attention only to what people do (Binmore, 2009; pg. 8).

Revealed Preferences states that “assuming certain conditions are true, the preferences of consumers can be revealed by their purchasing habits.” It’s been a cornerstone of modern economics and the Nobel Committee awarded Paul Samuelson the 1970 prize as a “thank you” for doing the fancy math that finally proved it. Economists all breathed a sigh of relief because now instead of having to say, “That guy bought a beer for breakfast because…reasons?,” they could say, “we know that guy has reasons to buy beer for breakfast because he bought it.”
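For the formally inclined, the consistency requirement at the heart of the theory fits in one line. Here is the standard textbook statement of the Weak Axiom of Revealed Preference — our notation, not Samuelson’s original 1938 formulation:

```latex
% Weak Axiom of Revealed Preference (WARP), textbook form.
% x^1 is the bundle bought at prices p^1; x^2 is the bundle bought at prices p^2.
% If x^2 was affordable when x^1 was chosen, the consumer has "revealed"
% that she prefers x^1 -- so she may never choose x^2 in a situation
% where x^1 is also affordable.
\[
\big( p^1 \cdot x^2 \le p^1 \cdot x^1 \ \text{ and }\ x^1 \ne x^2 \big)
\implies
p^2 \cdot x^1 > p^2 \cdot x^2
\]
```

Note what the axiom does and doesn’t say: it never asks why the beer was chosen; it only demands that the choosing stay consistent. That is the entire “Value Math” economists get to rely on.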

“So now,” said economists everywhere, “even if we know people aren’t robots, we can pretend they are!”

However, revealed preferences rests on a lot of assumptions and limitations. An economist using “revealed preferences” to make predictions about what people are going to do in the future has to assume 1) that people will use the same Value Math in the future, 2) that the Value Math is free from influence by the environment (“the context”), and 3) that the decision will still matter to them. These assumptions made the field really limited. “Our reward is that we end up with a theory that is hard to criticize because it has little substantive content” (Binmore, 2009; pg. 20).

Revealed Preferences makes modern economics a really unsatisfying place to look for cognitive science. Because no matter how much fancy math you do, we just really want to know why people do stuff. We aren’t content to have economists pretend we’re robots, wave their hands and say, “Value Math!” Dammit, we want to know what that Value Math is! What makes that guy choose beers for breakfast?! Why do I keep eating frosted shame-cake?! Tell me! Tell me how I’m supposed to think and why I can’t think that way all the time!

And in addition to the Theory of Revealed Preferences being totally unsatisfying, there’s a problem with assuming that in the same conditions, people will make the same decisions. As psychologists Daniel Kahneman and Amos Tversky pointed out in their work during the 1980s, that assumption is clearly stupid. People are not consistent, we are always in changing conditions, and there’s no point in pretending we’re robots.

Professors Daniel Kahneman and Amos Tversky made this obvious by conducting hundreds of laboratory experiments in the 1970s and 1980s. These experiments provided empirical evidence that tiny changes to the environment in which we are making decisions can create huge changes to the outcome of those decisions. And not only that, manipulating the environment reliably “changes our minds” from the decisions we made before. And we don’t even notice it. Something in the way our brains work makes us think we’re being hella consistent, even when we are not.

We are so inconsistent that Professors Kahneman and Tversky mocked economists for assuming otherwise. They said that economists had developed models that only predicted what a fictional species of purely rational, robot-like beings — homo economicus, or “Econs” for short — would do. The Econs, they argued, were a Straw Man. A relic of the Enlightenment, when economists were forced by the Scorecards of Enlightenment virtues to assume that people were generally rational and self-interested, or at least acted like they should be.

This insight was so shocking to economists that in 2002, the Nobel Committee awarded Daniel Kahneman a Nobel Memorial Prize in Economics, despite the fact that he had never even taken an economics class. The paradigm of economists shifted away from Revealed Preferences to “behavioral economics.”

And with that, the Econ as Straw Man was dead.

Kahneman & Tversky had bridged the gap between economics and psychology by defeating the Straw Man of the Econs. The problem was, Kahneman & Tversky had shown we can’t rely on Revealed Preferences but they hadn’t replaced it with anything. And in doing so, they had also accidentally discovered something else. While they were doing all their really cool experiments that showed how inconsistent people’s decision-making abilities were, they learned that by manipulating the context in which people make decisions, they could “nudge” people to predictable behavior outcomes. And in most cases, the people in the studies didn’t notice. They perceived they were making perfectly natural, autonomous choices. Kahneman & Tversky had accidentally documented a powerful new tool — the nudge.

And as we all know, nature abhors a vacuum, unused power, and small towns with money to burn.

To fill that vacuum, Professors Thaler and Sunstein would end up accidentally inventing a new Straw Man — oddly, by making the same mistakes that economists made inventing the Econ. In fact, Thaler and Sunstein would make all the assumptions they’d mocked economists for making — just using different words for them — then conveniently misinterpreting Professors Kahneman and Tversky’s work with those assumptions. And we say “conveniently” because this Straw Man conveniently gave Thaler and Sunstein the worldview they needed to justify using the powerful tool they’d found.

A New Straw Man is Born: System 2 v. System 1

To confuse our own constructions and inventions with eternal laws or divine decrees is one of the most fatal delusions of men. — Isaiah Berlin, Essays in Honour of E. H. Carr (1974)

In his excellent book, Thinking, Fast and Slow, Daniel Kahneman recounts the experiments he did over his Nobel Laureate career that revealed the limitations of modern economics and gave the world a whole new paradigm for understanding the way humans make decisions. His experiments led to a model featuring two cognitive systems in the brain: Systems 1 and 2. Each of these systems has evolved to do different things, but they interact. And each has a “role” in decision making. And because the metaphors we have for communicating complex, abstract ideas have not really changed since Gilgamesh, in Thinking, Fast and Slow Kahneman uses the metaphor of two characters with different views of the world in conflict.

System 1 is the fast, irrational one responsible for experiencing, and System 2 is the slow, rational one responsible for memory. Now Kahneman is very clear: “Systems 1 and 2 are not systems in the standard sense of entities with interacting aspects or parts. And there is no one part of the brain that either of the systems would call home” (pg. 68). They’re just metaphors. So we might have come a little way scientifically since 1641 and Descartes’ “Brain v. Pineal gland,” but the metaphors we use have not. And it’s really important that we keep that in mind — that these are metaphors, not real characters — as we uncover what these systems do.

This System 1 & 2 metaphor also looks a lot like the Conflict of the Two Selves: “want” and “should.” So it’s natural to ask: which System is our “True Self?” But again, things aren’t that simple. There isn’t a “want” and a “should” system. These are just systems that do different things. And our True Self is both System 1 and System 2, according to Kahneman. Because although “[w]hen we think of ourselves, we identify with System 2” (pg. 47), things are a lot more complicated than that.

Systems 1 and 2 are both active whenever we are awake. System 1 runs automatically and System 2 is normally in a comfortable low-effort mode, in which only a fraction of its capacity is engaged. System 1 continuously generates suggestions for System 2: impressions, intuitions, intentions, and feelings. If endorsed by System 2, impressions and intuitions turn into beliefs, and impulses turn into voluntary actions. When all goes smoothly, which is most of the time, System 2 adopts the suggestions of System 1 with little or no modification. You generally believe your impressions and act on your desires, and that is fine — usually (pg. 55–56).

According to Kahneman, you are both systems. “The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps” (pg. 47). And not only are you both systems, your systems can even “want” different things. According to Kahneman’s experiments, System 1 — the experiencing self — seeks to maximize experienced utility: maximize the experience of pleasure and minimize the experience of pain, akin to classical utilitarianism (Bentham, 1789).

Meanwhile, System 2 — the remembering self — seeks to maximize decision utility: how to maximize and minimize the experience of pleasure and pain over time. Decision utility is more aligned with the economists’ idea of “maximizing personal utility,” but it is not the exact same thing because System 2 is not perfectly rational and can be wrong (Kahneman, 2011; pg. 876). So both systems are working — together and at odds — to make a decision about what to do in any situation. You are not one or the other; you’re both.

I find it helpful to think of this dilemma as a conflict of interests between two selves… The experiencing self is the one that answers the question: “Does it hurt now?” The remembering self is the one that answers the question: “How was it, on the whole?” Memories are all we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self (Kahneman, 2011; pg. 875).

Yes, your brain can want different things simultaneously. Meaning you can want two things simultaneously. The conflict is real and our “wants” are very dependent on the environment in which we are making decisions. And this means that with this model there’s no way to know what you “really want.”
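To make “wanting two things simultaneously” concrete, here is the arithmetic behind that split, in our notation — a schematic of Kahneman’s peak-end findings, not a formula he writes down in the book:

```latex
% What the experiencing self accumulates over an episode lasting T:
% the running total of moment-by-moment feeling u(t).
\[
U_{\text{experienced}} = \int_0^T u(t)\,dt
\]
% What the remembering self keeps, per Kahneman's peak-end findings:
% roughly the average of the most intense moment and the final moment,
% with the episode's length largely ignored ("duration neglect").
\[
U_{\text{remembered}} \approx \frac{\max_t\, u(t) + u(T)}{2}
\]
```

Because the two selves score the same episode differently, they can rank two options in opposite orders: a longer medical procedure with a gentler ending can be remembered as better than a shorter, objectively less painful one — a result Kahneman reports from his own colonoscopy studies.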

But what about what we “should” want? Surely System 2, the one we identify with, the one that is trying to maximize our long-term utility, is the one we should listen to, right? It’s the one that thinks like we’re “supposed to” based on the Enlightenment Scorecard! The system that is slow, methodical, and organized. The one that thinks like a robot! Well… no. But clearly this is what Professors Thaler and Sunstein have in mind when they say people’s true preference is the choice they would make, “if they had paid full attention and possessed complete information, unlimited cognitive abilities, and complete self-control” (Thaler & Sunstein, 2008; pg. 5). Those are certainly compelling words to someone holding an Enlightenment Scorecard, but that’s also echoing the exact Straw Man Kahneman and Tversky showed doesn’t exist: the disembodied, consistent decision-maker that they called the Econ. So with no empirical justification, Thaler & Sunstein are saying we should want what System 2 wants. Or what an Econ would want.

Faced with the task of guessing what an individual ‘really wants,’ the libertarian paternalists implicitly assume that, given enough time and information, all rational individuals would reach the same rational, instrumentally-optimal conclusions: the choices of Homo Economicus (Rebonato, 2014; pg. 12).

Which is how Professors Thaler and Sunstein replaced the old Straw Man of the Econ with the new Straw Man — System 2.

It’s also worth noting that their analysis of how these systems work is not just wrong, it’s conveniently wrong.

Kahneman’s model demonstrates no objective way to say what you should want based on measures of well-being, because System 1 and System 2 are also sated by different things. System 1 loves it some feelings and System 2 loves it some goals and planning. Neither takes preference over the other; we need both. And we’re not even sure which is “better” in the long run, experiencing feelings or achieving goals, because we would need both to even feel happy about achieving our goals! And you don’t have to take our word for it. Here’s Professor Kahneman:

In part because of these findings I have changed my mind about the definition of well-being. The goals that people set for themselves are so important to what they do and how they feel about it that an exclusive focus on experienced well-being is not tenable… On the other hand, it is also true that a concept of well-being that ignores how people feel as they live and focuses only on how they feel when they think about their life is also untenable. We must accept the complexities of a hybrid view, in which the well-being of both selves is considered (pg. 927).

So why is it convenient that Professors Thaler and Sunstein are so sure that they know what you “really want?” Because in a world without Econs, souls in the pineal gland, or God, how do you make sense of:

  • Daniel Kahneman and Amos Tversky’s work that showed human beings are really bad at statistics? We don’t apply statistical thinking to everyday problems of living. Instead we rely on rules of thumb. For example, people “guestimate” the probability of an event happening based on how easily we can imagine it, remember it, and how badly we want it. This is the availability heuristic (Tversky & Kahneman, 1973). We use feelings as a way to do some quick and dirty “math”. However, when we “forget” that we’re using these rules of thumb, in the same way we might forget we have sunglasses on, this can lead to severe errors in the accuracy of our predictions. For example, when was the last time a major project at your work was done on time and under budget?
  • Richard Thaler’s (2000) own experiments suggest that we have limited willpower. We care more about the present than the future, and this can lead us to make decisions against our future interest, like not saving for retirement. Thanks to this bias, we can stay stuck in a cycle of non-rational choices without realizing it, like a duller version of Bill Murray in Groundhog Day.
  • Other social scientists have shown we lie to ourselves (Trivers, 2000).
  • We create stories after the fact to explain away the true causes of our choices, apparently divorced from reality (Wegner, 2004).
  • We follow others without thinking. Who can forget the classic Stanley Milgram experiments that showed how readily people will torture others, all out of a sense of obedience to authority (Milgram & Gudehus, 1978)? And yes, it’s been replicated (Burger, 2009).
  • We’re so narrow-minded we miss obvious clues in our environment, like a gorilla thumping its chest in the middle of a basketball game (Simons and Chabris, 1999).
  • We evaluate evidence in self-serving ways (Dawson, Gilovich, & Regan, 2002). When we want to believe an argument, we ask, “Can I believe this?” When we don’t want to believe an argument, we ask, “Must I believe this?”, which is a much higher demand to meet.

That all sounds bad, right? It’s quite the vulnerable picture of human nature. When you read that, it’s hard not to think that people are doomed to lifetimes of terrible decisions peppered with the occasional lucky cognitive break.

Then when you read about System 1 and System 2, it makes perfect sense to see System 2 as the victim. The slow, methodical thinker who’s always getting interrupted by the loud and aggressive System 1. But remember, as we warned you before: these are not real people! These are just characters in a story; metaphors that describe a dual process model of cognition, not a literal moral drama unfolding in your brain. But if you think — like Thaler and Sunstein have — of System 2 as the victim, then you’re going to fall for the Straw Man. You’re going to start believing that System 2 is what we “should want” because it’s what we “would want” if we could just get System 1 to shut up for a second. And that sets a dangerous stage.

A stage ripe for a well-dressed man selling a fancy new technology. With a song and dance that’s only going to break your heart.

The Powerful v. Powerless

“Humans, more than Econs, also need protection from others who deliberately exploit their weaknesses.” — Daniel Kahneman, Thinking, Fast and Slow (2011)

When we read Professors Thaler and Sunstein’s words, it’s hard not to hear their worldview: “decisional inadequacies;” “bad decisions;” “predictably err.”

“Hundreds of studies confirm that human forecasts are flawed and biased. Human decision making is not so great either” (Thaler & Sunstein, 2008; pg. 7). This sounds compelling. But remember, they are comparing human decision making here to the fictional idealized Econ and falsely equating the Econ with System 2. Absent any empirical justification (which they don’t have), the justification to call decisions “good” or “bad” would have to be a moral one. But Thaler and Sunstein avoid that discussion by obfuscating “rational” and “good” with their language. It’s a Straw Man. But even with the Straw Man, going from “human decision making is not so great” to “we should decide for them” still takes a bit of a step. And clearly, Professors Thaler and Sunstein have taken that step because, “so long as people are not choosing perfectly, some changes … could make their lives go better” (Thaler & Sunstein, 2008; pg. 10).

And it’s on this foundation that things get most troubling. Thaler and Sunstein are saying we’re so crippled by all these cognitive biases that we can’t see them influencing us. We’re inhibited by them; an inherent flaw so deep and persistent in our brains that errors in thinking are always popping up and preventing us from thinking rationally and getting what we “really want.”

Put another way by a prominent behavioral economist, “the seething spring of sin is so deep and abundant that vices are always bubbling up from it to bespatter and stain what is otherwise pure.”

Oh wait, a behavioral economist didn’t say that. It was John Calvin, the 16th Century asshole who formed a theocracy in Geneva based on the argument that the citizens were too flawed with sin to really know what was best for them (which is also why he burned Michael Servetus at the stake in 1553).

But damn, it sure sounds like the same logic, doesn’t it? We’re born with all these innate flaws that prevent us from making decisions that lead to us getting what we “really should want.” If only we weren’t burdened with all this “wrong thinking,” we would be truly free…

This worldview is old. In fact, we keep falling into it whenever powerful people consider less powerful people beyond hope and an idea pops up that lets the powerful people think they can help the broken people by making their decisions for them. Gilgamesh threatens violence. Plato suggests the course of men’s lives be dictated by a small caste of Philosopher Kings. Authoritarian theocracies in 16th Century Geneva burn heretics to set an example. B.F. Skinner writes Walden Two. “Sure we have power,” they say, “but we’re using it to help people!” And it’s not just sexy to the people with power. We’re all seduced by it.

Let’s think back to Google and the frosted shame-cake. You didn’t want that cake (except you also kinda wanted that cake). That conflict sucks. And spending the energy to “make yourself” resist something (that you also kinda wanna wanna do) is really hard. So what if you didn’t have to “make yourself do stuff” any more? What if you could finally solve that eternal conflict between what you “want to do” and what you “should do” by waving a magical tech wand? What if there was an app that could finally free you from all the consequences of all the stupid decisions you made when you were tired, angry, drunk, or horny? Who wouldn’t want a little “nudge app” to be a better person?

So yeah, it’s sexy for both the designer and the user. It’s sexy because life is so damn confusing and uncertain. But do you really think an app is going to solve a riddle that’s been beguiling poets’ hearts for 5,000 years? Do you really think you’d be better off if you were a little less human?

And even though people screaming, “save us from ourselves!” might make it more convincing, it doesn’t make it more right.

Stop Nagging and Start Motivating

A practical technology that gets people to do stuff is nothing new. Humans have developed plenty of technologies over the years that can be used to get people to do stuff. Gilgamesh was using laws, rewards, and punishments 5,000 years ago, and the Romans were “nudging” people to engage in trade by building roads all over their empire. Even Don Norman was writing about how to create subconsciously awesome design using cognitive scientist J. J. Gibson’s concept of affordances in his 1988 classic, The Design of Everyday Things.

And as we’ve shown, behavioral scientists like Kahneman might have demonstrated that “nudging” technology works, but nothing in the System 1 & 2 cognitive model tells designers objectively in what direction people should be nudged. It’s like inventing an atomic bomb, with no scientific way of knowing if, when, or on whom you should use it.

And as the 1965 Nobel Physics Laureate Richard Feynman (who worked on the Manhattan Project) said, “scientific knowledge is an enabling power to do either good or bad — but it does not carry instructions on how to use it.”

No, the instructions have to be written by people. All of us messy, imperfect people who — yes — are riddled with “cognitive biases.” But where Professors Thaler and Sunstein are wrong is that humans are not just powerless saps doomed to lifetimes of terrible decisions peppered with the occasional lucky cognitive break. Thaler and Sunstein’s worldview is only half right, and it’s not even the interesting half. Because the mysterious human features that make us fret about cake, dance when we could be praying, drink beer for breakfast, and lie awake at night wondering if we’re good people are the same features that make us pissed when we find out our employer is trying to manipulate our decisions in the cafeteria to save a buck.

Because even though it might be easier for economists and policymakers if we were, we’re just not ever gonna be goddamn robots.

We’re human beings. Out here trying to learn how to be the best “us” we can be. For ourselves, our families, communities, and humanity as a whole. And even though we’re conflicted and don’t always live up to our own standards, we’re trying to learn how to live with integrity. And we don’t want to be nudged, prodded, and coerced unless you give us a damn good reason why it’s necessary.

And we've been saying all this loud and clear for at least 5,000 years. In the first story we wrote down, the gods didn't send Gilgamesh, the perfect King, to free his people by making them do stuff. They sent Enkidu, the Wild Man, to convince Gilgamesh to knock it the fuck off and lead his people by example. To teach them how to be better people, not nudge them into it.

What an Honest Design Strategy Looks Like

The designer simply cannot predict the problems people will have, the misinterpretations that will arise, and the errors that will get made. — Donald Norman, The Design of Everyday Things (1988)

Stewart Butterfield is the co-founder and CEO of Slack. And like Stevo, he's a philosophy major. In a July 31st, 2013 Medium post addressed to his employees titled "We Don't Sell Saddles Here," Mr. Butterfield says, "The best — maybe the only? — real, direct measure of 'innovation' is change in human behaviour."

“The software,” says Mr. Butterfield, “just happens to be the part we’re able to build & ship… We will be successful to the extent that we create better teams.” And Mr. Butterfield knows that you cannot trick people into being “better teams.”

"All products," he says, "are asking things of their customers: to do things in a certain way, to think of themselves in a certain way — and usually that means changing what one does or how one does it; it often means changing how one thinks of oneself… We are asking a lot from our customers."

And to help them get there, to help them become better teams, “we need to make them understand what’s at the end of the rainbow if they go with Slack, and then we have to work our asses off in order to ensure they get there.”

This is not the description of a product. It's the description of a process. A process of showing people what "being better" looks like, then slowly learning how to meet the needs of the people who want to make that journey, with tools that support them along the way.

And no, it’s not sexy. It’s learning and teaching. It’s being so interested in the experience of users that they become interested in their own experience. That they become self-determined to be the better people they hoped they could be.

The Place for Nudging

If, as I believe, the ends of men are many, and not all of them are in principle compatible with each other, then the possibility of conflict — and of tragedy — can never wholly be eliminated from human life, either personal or social. — Isaiah Berlin, Two Concepts of Liberty (1958)

We've talked a lot of shit about nudging, but that doesn't mean we think interventions designed to work subconsciously on people have no place in a designer's toolbox. The ethics of any design come down to:

Do the ends (as best as we can reasonably predict) justify the consequences of the means (as best as we can reasonably predict)?

In that regard nudges are no different than laws, rewards, badges, juicy feedback, and everything else on the UCL Behavior Change Wheel. And the techniques of nudging have their moral place (in our minds, trading a little personal freedom to prevent otherwise catastrophic consequences like Climate Change seems like an ethical no-brainer). Our problem is when they are pitched as moral-free magic to powerful institutions with mixed motives and little transparency. Kahneman (2011) notes that "[a]n unscrupulous firm that designs contracts that customers will routinely sign without reading has considerable legal leeway in hiding important information in plain sight" (pg. 956). But that's his justification to pass laws against the nudge, not his excuse to teach nudging to every tech company, government, and telecom. In a world where power is so unevenly distributed, giving everyone nudge-knowledge is not even a fight. It's individuals against billion-dollar expert armies.

And the conversation about “why do we want to encourage this behavior and at what cost?” can’t be skipped because a man in a suit led us in a song.

Not Robots, Even When We Wish We Were

Nothing is so difficult as not deceiving oneself. — Ludwig Wittgenstein, Culture and Value (1980)

What drives us to seek change — in our behavior and in the world — is our need to make sense of ourselves in the world. “Self-determination, as it turns out, is ultimately a problem of integration” (Ryan & Deci, 2017; pg. 650). For motivation designers, the job of helping people becomes not rescuing them from poor decisions, but facilitating the drive to continually make better ones.

It’s not sexy to treat users like human beings. It takes empathy and research and awkward conversations about “feelings” and “values.” It’s far sexier to think of users like the products we make for them: Programmable. Coercible. Nudgeable.

But we hope by now we've shown that people are not robots, even though we're damn good at conning ourselves into thinking that everything might be better if we were. Eventually we remember that humans are messy, that hard things are hard, and that the Conflict of the Two Selves will always keep us up at night. Therefore there will always be more Straw Men to trick designers into thinking they know what's best for people. And Monorail salesmen selling the same basic technology, even when they're calling it something else.

Sometimes these cons are hard to spot (especially when there are Nobel Prizes involved), but we think these bullshit behavior change technologies almost always have the feature of thinking human beings would just "be happier" if we were more predictable. Had less autonomy. Were more like robots. Were easier to nudge.

“It is not only possible to design choice architecture to make people better off; in many cases it is easy to do so” (Thaler & Sunstein, 2008; pg. 10).

And so maybe it's useful to hear how John Haugeland — the late philosopher of mind who coined the term GOFAI (Good Old Fashioned Artificial Intelligence) — described the one feature of human beings that has driven people crazy since the dawn of time and that, he predicted, can never be overcome by technology:

It matters to us what happens in the world. It matters to us what happens to us. It matters to us what happens to our friends. It matters to us the progress of science and philosophy. All of those are desiderata, things to build a life on that one can summarize in the phrase “giving a damn.”

The trouble with computers is that they don’t give a damn.

Although identifying bullshit behavior change technology might be even simpler than that. It might be as simple as the advice Professor Haugeland gave when he was the head of the University of Chicago Philosophy Department to a young Steven M. Ledbetter: “never trust an academic who claims something is easy.”

Steven M. Ledbetter is the CEO of Habitry. Omar Ganai is the Head of Design and Research. Habitry teaches companies how to make their products more engaging with the practical application of motivation science.

References

Arkes, H. R. (1991). Costs and benefits of judgment errors: Implications for debiasing. Psychological Bulletin, 110(3), 486.

Bentham, J. (1789). A utilitarian view.

Binmore, K. (2008). Rational Decisions. Princeton University Press.

Bond, M. (2009). Risk school. Nature, 461, 1189–1192.

Burger, J. M. (2009). Replicating Milgram: Would people still obey today? American Psychologist, 64, 1–11.

Dawson, E., Gilovich, T., & Regan, D. T. (2002). Motivated reasoning and performance on the Wason selection task. Personality and Social Psychology Bulletin, 28, 1379–1387.

Dennett, D. (2014). Can Rationality Be Taught? [Video]. YouTube. Retrieved from https://www.youtube.com/watch?v=XCSLWlttG9Q

Gigerenzer, G. (2014). Risk Savvy: How to Make Good Decisions. Penguin.

Gigerenzer, G. (2015). On the supposed evidence for libertarian paternalism. Review of Philosophy and Psychology, 6, 361–383.

Kahneman, D. (2011). Thinking, Fast and Slow. Macmillan (iBooks edition).

Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237–251.

Larrick, R. P. (2004). Debiasing. In Blackwell Handbook of Judgment and Decision Making (pp. 317–337).

Malik, K. (2009). Strange Fruit: Why Both Sides Are Wrong in the Race Debate. Oneworld Publications.

Milgram, S., & Gudehus, C. (1978). Obedience to Authority. Ziff-Davis Publishing Company.

Popper, K. (2002). The Logic of Scientific Discovery (2nd ed., Routledge Classics). Routledge.

Rebonato, R. (2014). A critical assessment of libertarian paternalism. Journal of Consumer Policy, 37, 357–396.

Ryan, R. M., & Deci, E. L. (2004). Autonomy is no illusion. In J. Greenberg, S. L. Koole, & T. A. Pyszczynski (Eds.), Handbook of Experimental Existential Psychology (pp. 455–485). Guilford Publications.

Ryan, R. M., & Deci, E. L. (2017). Self-Determination Theory: Basic Psychological Needs in Motivation, Development, and Wellness. Guilford Publications.

Simons, D. J., & Chabris, C. F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception, 28, 1059–1074.

Smith, A. (1759). The Theory of Moral Sentiments.

Stanovich, K. E., Toplak, M. E., & West, R. F. (2008). The development of rational thought: A taxonomy of heuristics and biases. Advances in Child Development and Behavior, 36, 251–285.

Sunstein, C. R. (2017a). Introduction: Agency and control. In C. R. Sunstein (Ed.), Human Agency and Behavioral Economics (pp. 1–16). Cham: Springer International Publishing.

Sunstein, C. R. (2017b). Misconceptions about nudges. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3033101

Thaler, R. H. (2000). From homo economicus to homo sapiens. Journal of Economic Perspectives, 14, 133–141.

Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press.

Trivers, R. (2000). The elements of a scientific theory of self-deception. Annals of the New York Academy of Sciences, 907, 114–131.

Wegner, D. M. (2004). Précis of The Illusion of Conscious Will. Behavioral and Brain Sciences, 27, 649–659.

Weinstein, J. R. (2017). Adam Smith. Internet Encyclopedia of Philosophy. Retrieved October 21, 2017, from http://www.iep.utm.edu/smith/

Wittgenstein, L. (1969). On Certainty (G. E. M. Anscombe & G. H. von Wright, Eds., D. Paul, Trans.). Harper & Row.

Appendix

How We Could Be Wrong

Those among us who are unwilling to expose their ideas to the hazard of refutation do not take part in the scientific game. — Karl Popper, The Logic of Scientific Discovery (1934)

Libertarian Paternalism would be a kickass philosophy if these two conditions were true:

  • Nudges get people what they "really want" (this would have to be shown empirically and within their own theory, not by an appeal to common sense).
  • Nudges are easy to reverse in practice (not just in theory).

We think what's missing from the argument for nudging is a coherent theory about when framing effects matter. Since Nudgers can rely neither on the Theory of Revealed Preferences nor on asking nudgees for their preferences, there's currently no way to discern when people are making "real" choices and when their choices are simply a result of framing. That gap blocks a direct approach to improving decision-making, one that fosters individual freedom of choice rather than having a central planner make choices for people.

So if Nudgers can show us when framing matters and when it doesn’t, we feel confident that these two conditions will be met and we’ll eat crow.

The Mistakes Nudgers Keep Making

We really have to think of reasoning the way we think of romance, it takes two to tango. There has to be a communication. — Daniel Dennett, Can Rationality Be Taught? (2014)

In writing this article, we have endeavored to be fair to the arguments and their authors. We think these interventions could have potential, and we think Professors Thaler and Sunstein are nice, smart people. But as we were doing the research, we just kept noticing the same rhetorical mistakes. Over. And over. And over. Future nudge defenders, please take note: we really need to get past these recurring rhetorical face-palms if we want to have a meaningful discussion about the place of nudging in design.

Appealing to the Stone — Saying that not nudging would be “abhorrent” but offering no proof. [“In many cases, some kind of nudge is inevitable, and so it is pointless to ask government simply to stand aside. Choice architects, whether private or public, must do something” (Thaler and Sunstein, 2008; pg. 238).]

Argument from incredulity — Claiming nudging is good because how could it not be? [“If our proposals help people save more, eat better, invest more wisely, and choose better insurance plans and credit cards — in each case only when they want to — isn’t that a good thing?” (Thaler and Sunstein, 2008; pg. 238) ]

Argument to moderation — Claiming something is true because it is the “third way” between two extremes. [Title of Chapter 18: “The Real Third Way” (Thaler and Sunstein, 2008; pg. 252)]

Begging the Question — An argument whose premises assume the truth of its own conclusion. ["Choice architects can preserve freedom of choice while also nudging people in directions that will improve their lives" (Thaler and Sunstein, 2008; pg. 252).]

Equivocation — Misleading by using terms with ambiguous meanings. ["nudge," "decisional inadequacy," "libertarian paternalism," etc.]

Fallacy of composition — Implying that specific examples of the application of nudging prove that all nudges are good. ["This example illustrates some basic principles of good choice architecture" (Thaler and Sunstein, 2008; pg. 13).]

False dilemma — Implying that there are only two options: nudging and total chaos. ["Choice architecture, both good and bad, is pervasive and unavoidable, and it greatly affects our decisions" (Thaler and Sunstein, 2008; pg. 252).]

Nirvana fallacy — Confusing System 2 with a perfect, disembodied Econ, or assuming that System 2 is what people "would" or "should" want.

Define. The. Fucking. Terms. — 99% of the ink spilled over nudging could have been saved if Thaler and Sunstein had given "nudge" a more discrete definition. Defining a nudge as "any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives" (Thaler and Sunstein, 2008; pg. 6) is saying they're "anything that works that's not laws or (significant) money." That might be a simple definition but it's not parsimonious. It's also rhetorically useless. Suspiciously so. It's like a politician saying he "supports the troops." It doesn't mean anything by design and it keeps driving everyone insane. And we know it's driving Nudgers nuts, too. While attempting to draw the metaphor that nudges are like GPS systems because they "make life simpler to navigate" (and implying that nudges simply inform decision-making), Sunstein (2017b) says, "I wish that Nudge had made this point clearer, and connected nudging to the central idea [of] navigability… At the same time, it is true that nudges counteract behavioral biases, and that some nudges work because of behavioral biases" (pg. 6).

THOSE ARE THE OPPOSITE THINGS. Those two thoughts, "nudges work because they counteract biases" and "nudges work because of behavioral biases," are OPPOSITE THINGS. And they're in the SAME SENTENCE. In the same paragraph in which Sunstein complains that anti-Nudgers are "misleading…to suggest that nudges 'exploit' such biases."

THAT’S WHAT YOU SAID THEY DO. IN NUDGE.

“[C]hoice architects can exploit this fact to move people in better directions” (Thaler and Sunstein, 2008; pg. 59).

See? These fuzzy-ass definitions have their own authors confused and they have us using caps lock and drinking scotch instead of having fruitful discussions about the role of nudging in design. For this conversation to go anywhere, Nudgers are going to have to define what the hell a nudge is in words that promote communication, not obfuscation.

We all need this because, "knowledge is in the end based on acknowledgement" (Wittgenstein, 1969; §378).

Snarky Counter-Arguments

I.

The first misconception is that it is possible to avoid influencing people’s choices (Thaler and Sunstein, 2008; pg. 10).

This is confusing context with intent. Just because the environment has an impact on our decision-making doesn't mean that influence is always negative, nor is it a justification to intervene. The statement just asserts a proposition and then implies an intervention is necessary.

It’s like saying, “the first misconception is that it is possible to avoid microwaves… so wear this tinfoil hat forever.”

II.

The second misconception is that paternalism always involves coercion (Thaler and Sunstein, 2008; pg. 10).

It does, despite the equivocation. The authors have just conveniently changed the definition of coercion to "precluding choice" rather than the more common sense of "influencing choice." It is very possible to coerce people's choices while not technically removing any options. Your employer can say, "I mean, I guess you could come in to the office after 9am…" which technically leaves every start time on the table… but we'd guess you'd still feel pretty coerced. Coercion doesn't have to be overt to be effective.

III.

Nudges do not impact freedom of choice because they are “easily reversible.” If you don’t like the default option, you can pick the one you want.

Riccardo Rebonato (2014) has already done an amazing job of taking this argument out behind the woodshed, so we'll just recap it in an entertaining way.

Nudgers are constantly getting their boxers twisted up over the fact that framing makes people pick the "wrong choices." That we usually just pick whatever option is at the top of a multiple choice ballot or go with the default option. Clearly, they say, this is proof that we are incapable of making uninfluenced choices. Therefore, what's the harm in nudging us to make the "right choice"? Especially since people can easily reverse the default and exercise their freedom of choice.

The problem with this totally innocent-sounding argument is that it's yet another example (oh God, there are so many) of Nudgers invoking an equivocation. Nudgers are confounding nominal choice — choice in name only — with effective choice — an actual choice. This matters for two reasons. 1) When you look at the research on autonomy and well-being, it's not about nominal choice; it's about effective choice. And effective choice is pretty damn important: "Literally hundreds of experiments…have examined the import of relative autonomy on human functioning" (Ryan and Deci, 2004). So saying, "we've protected autonomy, dignity, and freedom by preserving choice" is bullshit, because the research on "autonomy, dignity, and freedom" is research on effective choice, and the freedom of choice they're preserving is only nominal choice. Equivocating the terms makes it look like they're trying to have their cake and eat it too. 2) This one is more subtle but important to grok: the more potent the nudge, the less effective choice is actually present, even though the nominal choice is constant. So equivocating them is really dangerous.

Just as elections with 99.9% acceptance for a candidate tell us more about the quality of the democratic process in place than about the virtues of the elected candidate or party, so choices accepted by ‘almost 100 per cent’ of the population tell us more about the skill and ingenuity of the choice architect than about the true preferences of the decision-maker (Rebonato, 2014; pg. 43).

When Saddam Hussein was re-elected President of Iraq in 1995 with 99.96% of the vote and 99.47% turnout, no one was like, "boy howdy, Iraq sure is chock full of freedom." It was obvious that the "nominal choice" to vote for someone else was not an "effective choice" and that something fishy was going on. So it's pretty damn weird when Nudgers point out that "almost 100% of Austrians have agreed to being organ donors, but only 12% of Germans have" as proof that "easily reversible" nudging is so effective at preserving freedom of choice (Rebonato, 2014; pg. 5). It makes you think something's fishy. That maybe, even though there's nominal choice, there's no effective choice. And what's the actual value of the "easy reversibility" of a nudge if no one ever takes you up on it?

IV.

“To the extent that good choice architecture allows people to be agents, and to express their will, it promotes dignity” (Sunstein, 2015; pg. 16).

How are they measuring "express their will"? As noted in the main article, Nudgers don't have the Theory of Revealed Preferences to fall back on, so they can't really say what anyone's "will" actually is. And as noted above, the more effective the nudge, the harder it is to tell what people's "will" is. Like, who did Iraqis actually want to be President in 1995? Nominal choice without effective choice doesn't seem very dignified.

V.

"Nudges actually promote autonomy by ensuring that choices are informed" (Sunstein, 2015; pg. 16).

This is muddying the definition of their own intervention. The authors said nudges were "any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives" (Thaler and Sunstein, 2008; pg. 6). We're not reading anything about "informing choices" in that definition, and in fact it seems to run counter to their own definition. Plus, informing people about the choices they can make already has a name: "informing people."

We might put stock in this justification if Nudgers were only advocating for nudges that affect System 2, thereby improving epistemic rationality. It's too bad they aren't. Oh, why aren't they, you ask? Because it's easier to nudge System 1.

In the abstract, Waldron’s wish is an honorable one, and some nudges are meant to fulfill it. But as a matter of principle, the challenge arises when it is costly, intrusive, and difficult to make people better choosers in one or another domain — and when the net benefits of a System 1 nudge are far higher than the net benefits of a System 2 nudge. Often the best way to help people is to use some kind of social architecture. System 1 nudges, such as automatic enrollment, make life much simpler and better, and that is no small gain. (Sunstein, 2017a; pg. 7).

Once again, the argument for nudging has slid from a scientific one to an ethical one right before our eyes. And if they want to have an ethical argument, they have to account for both the pros and cons of the means and the pros and cons of the ends. They can't just wave their hands screaming, "NUDGE!" while constantly redefining what a nudge is to mean whatever they need it to mean to win the argument they're in at the moment.

VI.

"Our ability to de-bias people is quite limited" (Sunstein as reported by Bond, 2009; pg. 1191).

This seems like a pretty important thing to gloss over, because it would appear that you need "de-biasing" for the whole "libertarian paternalism" thing to work. Because Nudge designers need "de-biasing" too (unless the plan is to nudge the Nudgers… but then who nudges the Nudger Nudgers?!). And they need to show how hard it is to de-bias people in order to weigh the pros and cons of nudging against the design alternatives.

But since they ain't doing that homework, we guess we'll have to. Which wasn't hard, because Richard P. Larrick wrote a whole book chapter conveniently called "Debiasing" in 2004.

Turns out, the vast majority of biased decisions are caused by just 3 common errors (Arkes, 1991):

  • Psychophysically-based error (System 1): Shitty guesses about the rates of stimuli.
  • Association-based error (System 1): Shitty recall of information in memory.
  • Strategy-based error (System 2): Shitty heuristics or “thinking tools.”

And there's hella research on how to de-bias people making these errors, including strategies like "considering the opposite" and "training in rules." And they work (Larrick, 2004)! Gerd Gigerenzer has made a whole career investigating how to de-bias System 2 and even wrote a kick-ass book about it called Risk Savvy: How to Make Good Decisions (2014). The problem is, "[r]esearch on debiasing tends to be overshadowed by research demonstrating biases: It is more newsworthy to show that something is broken than to show how to fix it" (Larrick, 2004; pg. 334).

Whose fault might that be, Nobel Prize Committee?
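To make "training in rules" concrete, here's a minimal sketch of the kind of de-biasing Gigerenzer teaches: restating a Bayesian word problem in natural frequencies so System 1 can actually handle it. The numbers below are the standard textbook mammography example, not figures from Risk Savvy itself.

```python
# "Training in rules," Gigerenzer-style: restate conditional probabilities
# as natural frequencies. Illustrative numbers (the standard textbook
# mammography example, not figures from Risk Savvy itself):
# 1% of women have the disease; the test catches 90% of true cases and
# false-alarms on 9% of healthy ones. What's P(disease | positive test)?

population = 1000                 # imagine 1,000 concrete people
sick = population * 0.01          # 10 actually have the disease
healthy = population - sick       # 990 don't

true_positives = sick * 0.90      # 9 sick people test positive
false_positives = healthy * 0.09  # ~89 healthy people also test positive

# The natural-frequency question: of everyone who tests positive,
# how many are actually sick?
p = true_positives / (true_positives + false_positives)
print(f"P(disease | positive) ≈ {p:.0%}")  # ≈ 9%, not the ~90% many people guess
```

Framed as probabilities, most people (doctors included) guess something near 90%; framed as counts of people, the right answer gets much easier to see. That's de-biasing without anyone setting a default for you.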

The thing that's missing — and that libertarian paternalism can't seem to figure out (because its whole justification for existing is screaming, "OMG OUR BRAINS ARE BROKEN" over and over) — is a coherent theory about when framing effects matter. This prevents everyone, Nudgers and anti-Nudgers alike, from being able to discuss when nudging is appropriate and when to take a direct approach to improving decision-making, one that fosters individual freedom of choice rather than having a central planner make choices for people.

VII.

But isn’t rational just better?!??!

In a word… no. Because when one asks that, they are usually conflating "rational" with "good" thanks to the Enlightenment Scorecard, and/or creating a false dilemma by implying that the only alternative to "rational" is "bat-shit crazy."

To lend some weight to that “no,” let’s take a look at what the fuck “rational” even means.

According to Stanovich et al. (2008) (it was Stanovich and West who coined the terms "System 1" and "System 2"), cognitive scientists have identified two kinds of rational thought processes. Instrumental rationality is "[b]ehaving in the world so that you get exactly what you most want, given the resources (physical and mental) available to you" and epistemic rationality is "how well beliefs map onto the actual structure of the world" (Stanovich, et al., 2008; pg. 252). Put together, rational decisions can be conceived of scientifically as something like "decisions that maximize personal utility via perfect inductive observations of reality and perfect deductions of logic."

In other words, thinking like a robot. And deviations from this standard of perfectly Bayesian calculation machines would be “non-rational” thinking.
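If you want the robot standard written down, here's one conventional way to do it (our notation, a sketch of the standard Bayesian decision rule, not a formula from Stanovich et al.):

$$a^{*} = \arg\max_{a \in A} \sum_{s \in S} P(s \mid \text{evidence})\, U(a, s)$$

Epistemic rationality is how well P maps onto the world; instrumental rationality is actually computing that argmax over your own utility function U. Every "bias" in the literature is a measured deviation from one of those two ideals.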

So that’s two kinds of rationality. And we’re willing to give you that good epistemic rationality is better than bad epistemic rationality. Life would be way better if our “beliefs map[ped] onto the actual structure of the world” and we could all do accurate math in our heads (but that alone is not a justification for nudging because epistemic rationality can be improved without nudging (Larrick, 2004; Gigerenzer, 2014)). The problem is with the other kind of rationality: instrumental rationality.

Instrumental rationality includes the assumption that people do consistent Value Math divorced from outside influence. This assumption leads one to assume that people already know what they want before they engage in the rational thinking necessary to get it. Which we know is just wrong, because as we've seen from Kahneman and Tversky, there's no way to know what people actually want. So you can't test for how instrumentally rational people are without major confounds from framing.

Take the example of voting in the Iraqi elections of 1995. How instrumentally rational was a voter in that election? You can’t know because the deck was so stacked. Without a Theory of Revealed Preferences to say, “what people did is what they wanted” and without a coherent theory about when framing effects matter, all you can really say is that Iraqi voters were probably making the decision that was most likely to result in them not getting tortured to death. But clearly, there’s more to Value Math than “valuing decisions that result in the least amount of torture.” The “exactly what you most want” in the definition of instrumental rationality has to mean more than just self-interest, but without revealed preferences or a coherent theory of when framing effects matter, self-interest is the only Value Math that’s left for “exactly what you most want.”

And it gets worse. Because even if you want to say that “self-interest” and “rational” are the same thing, rational thinking isn’t even better for maximizing self-interest.

Consider selfish individuals engaging in a Prisoner’s dilemma. Here rationality doesn’t save them from, but condemns them to, sub-optimal outcomes. And this sub-optimality applies both to society, and to the choosing individual. So, even if each individual had purely self-regarding preferences, appeal to pure rationality can be self-defeating, in the sense that listening to the System-II self would not bring about outcomes that a self-regarding System-II would be happy with: rationality is the problem, not the solution (Rebonato, 2014).
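To see Rebonato's point in action, here's a minimal sketch with standard illustrative payoffs (our numbers, not his):

```python
# A minimal Prisoner's dilemma sketch of Rebonato's point, using standard
# illustrative payoffs (our numbers, not his). Higher is better (years of
# freedom, say). Each entry maps (my_move, their_move) -> my payoff.
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

MOVES = ("cooperate", "defect")

def best_response(their_move):
    """The 'instrumentally rational' reply to a fixed opponent move."""
    return max(MOVES, key=lambda my_move: PAYOFF[(my_move, their_move)])

# Defecting is the best response no matter what the other player does...
assert all(best_response(m) == "defect" for m in MOVES)

# ...so two purely 'rational' players land on (defect, defect) and get 1
# each, even though (cooperate, cooperate) would have paid 3 each.
print(f"Rational equilibrium pays {PAYOFF[('defect', 'defect')]} each; "
      f"mutual cooperation would pay {PAYOFF[('cooperate', 'cooperate')]} each.")
```

Each player's System 2 does the Value Math flawlessly, and both end up with the outcome neither wanted. Perfect rationality produced the worst collective result on the board.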

The root problem of assuming that “rational = good” is that you have to assume that people have consistent and reliable Value Math independent of context. And until we have a coherent theory about when framing effects matter, saying people are “irrational” doesn’t mean anything.

Snarky Counter-Counter-Counter Arguments

I.

In this context, ethical abstractions (about, e.g., autonomy, dignity, legitimacy, and manipulation) can create serious confusion. We need to bring those abstractions into contact with concrete practices. Nudging and choice architecture take many diverse forms, and the force of an ethical objection depends on the specific form. (Sunstein, 2015; pg. 16).

So wait. They wrote a whole book collecting a group of interventions with common abstract features, created a new word (nudge) to define this group of interventions based on those common abstract features, then spent hundreds of pages talking about the unique benefits the interventions have because of these common abstract features.

But we’re not allowed to talk about the unique downsides presented by these common abstract features? Instead we can only talk about the unique downsides of specific examples of interventions with those common abstract features?

That makes no goddamn sense. This is the equivalent of saying:

“See these objects on this shelf? These are called books. Books are stacks of rectangular paper covered on three sides by cardboard and bound on a single side with glue. They’re better than anything in the world for storing and then presenting words to a reader.”

Then we say, "books seem heavy and like it'd suck to get them wet."

And they’ve responded to that observation with, “that can only be determined on a book by book basis.”

Nope. Wet books suck. And nudges are stupid.

II.

[G]overnments…have to provide starting points of one or another kind. This is not avoidable. As we shall emphasize, they do so every day through the rules they set, in ways that inevitably affect some choices and outcomes. In this respect, the antinudge position is unhelpful — a literal non-starter (Thaler and Sunstein, 2008; pg. 10).

We think they mean it's "a figurative non-starter." Let us check… yep, we're starting, so I guess it can't be a literal non-starter.

Just because a choice has to be made doesn’t prove their choice is better, especially since the only alternative intervention they’ve given is no intervention. That’s like saying, “you have to marry someone, so it might as well be me.”

III.

And by insisting that choices remain unrestricted, [anti-Nudgers] think that the risks of inept or even corrupt designs are reduced (Thaler and Sunstein, 2008; pg. 10).

No, we don't. Again, they're using the straw man of "doing nothing" and implying that nudging is the only option to reduce risk.

IV.

Libertarian paternalists would like to set the default by asking what reflective employees in Janet’s position would actually want. Although this principle may not always lead to a clear choice, it is certainly better than choosing the default at random, or making either “status quo” or “back to zero” the default for everything (Thaler and Sunstein, 2008; pg. 11).

Now they've introduced "random" as a new counter-option, but offered no proof that it's worse than nudging. They just said it like we all just know random is a bad thing. Like when Omar's Mom says, "I mean you could just pick a random girl off the street to date…" Sometimes, hell, a lot of the time, random is better than expert intervention. Like in picking stocks.

Also, did they just admit that "this principle may not always lead to a clear choice"? Wait, what the actual fuck? If the choices aren't clear, why the hell are we nudging people toward them?! Did they just change their standard from the choice of a perfect, Econ-like System 2 who would "think like Albert Einstein, store as much memory as IBM's Big Blue, and exercise the willpower of Mahatma Gandhi" (pg. 5) to the choice a few of our coworkers would guess is best for us to pick?

We wouldn’t use that standard to pick a default office pizza, let alone a default retirement savings account.

V.

A central reason is that many of those policies cost little or nothing; they impose no burden on taxpayers at all (Thaler and Sunstein, 2008; pg. 12).

They seem to be forgetting that the taxpayers are the ones they are nudging. So it's not "no burden" unless one believes that being constantly nagged by the government doesn't count. Oh, and all that friction introduced when one does have a preference doesn't count as a burden either? There's also the problem that the more effective the nudge, the truer this statement is. A perfectly effective nudge costs zero dollars because it offers zero effective choice. We haven't looked it up, but we're pretty sure Saddam Hussein's re-election campaign in 1995 was really cheap.

VI.

Libertarian paternalism, we think, is a promising foundation for bipartisanship…If incentives and nudges replace requirements and bans, government will be both smaller and more modest. So, to be clear: we are not for bigger government, just for better governance (Thaler and Sunstein, 2008; pg. 14).

OK, a few things. 1) How are they getting to the conclusion that a caste of professional government nudge designers would somehow be "smaller and more modest" than the current caste of professional government bureaucrats involved in requirements and bans, unless they are assuming that the nudges will be 100% effective and 0% controversial? And 2) since when have both parties ever come together and agreed on the proper use of any government intervention? Why the fuck would we assume they'd do that with a new tool that can subconsciously influence the minds of voters?

Scientist: "we made a ray gun that makes people vote for whoever shoots them with it."

Republicans: “oh well, we better make sure only the good people get their hands on this.”

Democrats: “oh well, we better make sure only the good people get their hands on this.”

VII.

If welfare is our guide, much nudging is actually required on ethical grounds, even if it comes from the government. A failure to nudge might be ethically problematic and indeed abhorrent (Sunstein, 2015; pg. 16).

OK, ignoring for a moment that they've never given an objective definition of "welfare," the argument that "things will be worse if we do nothing" requires proof that the intervention would be better than nothing, and proof that the intervention would be better than other alternatives like de-biasing or randomness. And those double-blind randomized controlled trials could only test individual nudges in specific populations and contexts. So all told, this is sorry-ass proof for the "much nudging" they're calling for.

And seeing as how they get pretty pissed when other people make general claims about nudging because nudges "take many diverse forms, and the force of an ethical objection depends on the specific form," maybe they should be careful about invoking the fallacy of composition to convince us that nudges are moral-free magic tech wands.

VIII.

You’re not offering an alternative!

First of all, we don’t have to do that to prove nudging is stupid. Secondly, yeah we are. We just didn’t get to all of it. The alternative is a design strategy built around supporting people’s Basic Psychological Needs. It’s based on Self-Determination Theory — a practical, multimethod empirical theory with RCT-proven interventions in lots of domains. And you’ll just have to subscribe to this publication to learn more about how to use it because holy shit, this post was already 15,000+ words and we need a damn nap.
