
The Complete Moral Bankruptcy of Manipulating Human Psychology To Turn Users Into Addicts

The “Morality Of Manipulation” Only Exists On A Spectrum From “Highly Immoral” to “Absolutely, Relentlessly Evil.”

“It is difficult to get a man to understand something when his salary depends on his not understanding it.” — Upton Sinclair

I’m sorry, Nir.

Given that some (all?) of your income comes from teaching software makers how to leverage BJ Fogg’s discoveries in behavioral psychology for fun and profit, you must surely be one of the least qualified people to define the moral guidelines around digital psychological manipulation.

Your essay on The Morality of Manipulation is so profoundly detached from even a basic understanding of human nature and the reality of self-interest that I’m still not sure whether or not you are trolling us.

It would be one thing if your behavior-design consultancy had a rigorous selection process (vetted by neutral third parties) that worked in the best possible faith to ensure that every client you took on had a firm moral grounding and a strong appreciation of the history of totalitarian systems.

But alas, it does not.

And while there are a few hints at moral unease in your recent essays, it’s clear that you have not yet fully faced the near-complete moral bankruptcy involved here.

More concretely, do you know of any moral frameworks outside of nihilism that allow for indiscriminately teaching entrepreneurs and product designers how to manipulate the weakest links in human cognition to make money by turning their users into addicts?

I don’t.

In the hands of self-interested parties, your Manipulation Matrix makes zero sense.

It seems that you created this simple 2x2 matrix to help entrepreneurs, employees, and investors decide whether it is moral to engineer their software to trigger frequent but unpredictable dopamine releases in their users’ brains.

As you write:

To use the Manipulation Matrix, the maker needs to ask two questions. First, “Will I use the product myself?” and second, “Will the product help users materially improve their lives?”

According to you, if the person or team who will make money by turning their users into addicts concludes that they themselves would use their own software AND that the software “improves its users’ lives,” then the person or team in question should feel free to turn their users into addicts.

As you say:

While I don’t know Mark Zuckerberg or the Twitter founders personally, I believe from their well-documented stories that they would see themselves as making products in this quadrant.

And if the makers will use their own products BUT can’t reasonably claim that these products improve users’ lives, then the makers are entertainers. And they, too, should feel free to turn their users into addicts.

After all, you argue, entertainment is a form of art:

Entertainment is art and is important for its own sake. Art provides joy, helps us see the world differently, and connects us with the human condition.

What’s more, even if the entertainment offered by an addicting product is so shallow that it borders on non-entertainment, that’s ok, too, because the fate of any “hyper-addictive” piece of entertainment is to flame out and die.

You point to FarmVille, Angry Birds, Pac-Man, and Tetris to make that point, adding:

Art is about creating continuous novelty and building an enterprise on ephemeral desires is a constantly running treadmill.

To sum up your argument as fairly as I can:

Anyone who plans to use her own products AND believes that these products provide meaningful value OR entertainment to their users should have no qualms about manipulating their users’ psychology without disclosing to those users exactly what’s being done to their brains.

While I don’t know you personally, Nir, I believe from the reports of professional acquaintances and various things I’ve read that you are a genuinely kind, well-intentioned man who sees himself as offering services in the “facilitator” quadrant.

But with all due respect to your reputed kindness and good intentions, the core arguments in your essay and the moral logic of your Manipulation Matrix are complete and utter nonsense.

The true nature of the engagement >> revenue >> addictive products loop (and its utterly immoral consequences)

Here’s the hard truth: any business whose revenue model is built on a direct correlation between “engagement” and revenue has every incentive to find new, faster, and more efficient ways to make their products addicting.

This is true for cigarette makers and alcohol companies. It’s true for junk food brands. It’s true for slot machine manufacturers and the casinos who buy from them.

It’s true for opioid manufacturers and street-level meth and heroin dealers.

And it’s equally true for the founders, executives, and employees of “free-to-play” games that monetize with in-app micro-transactions and of online social networks that monetize with advertising.

Yes, while the harms inflicted by cigarettes, booze, meth, and junk food on their most “engaged” users are both well-documented and blatantly visible, the harms inflicted by slot machines, free-to-play games, and addicting social software on theirs are much less obvious.

Specifically:

  • Cigarette addicts tend to end up with terrible skin, yellowed teeth, and diseased lungs.
  • Alcoholics tend to ruin their livers and often destroy their lives and the psychological well-being of the people who love them.
  • Junk food addicts tend to end up with failing pancreases and fat bodies full of clogged arteries.
  • Meth addicts become toothless trainwrecks, and heroin addicts end up dead of an overdose.

Gambling addicts, on the other hand, tend to suffer less visible, quieter devastations.

Sure, they’ll occasionally destroy their finances, lose their jobs, and alienate their loved ones. But the visible damage to their bodies is a second-order effect of their gambling addictions: the ravages of chronic stress.

Likewise, people hooked on addicting games like Clash of Clans or addicting social apps like Facebook and Instagram don’t tend to suddenly explode in weight or lose half their teeth in one go.

Sure, you’ll hear occasional stories of new parents literally letting their babies starve to death while raising a virtual child in a game, but those are the wild and crazy exceptions.

The damage experienced by mobile gaming and social networking addicts tends to be much more subtle:

  • Feelings of anxiety while away from one’s phone
  • A greater propensity to procrastinate
  • More difficulty following through on challenging but essential tasks
  • Recurring failures to be mentally present while spending time with family and friends

Yes, those harms are FAR LESS SEVERE to an individual than, say, lung cancer, cirrhosis, meth mouth, or chronic obesity, but they are still harms.

And unlike the harms from cigarettes, liquor, junk food, and meth, the harms from addictive technology have not yet been researched thoroughly enough to be proven.

But as the weaponization of Facebook, Instagram, YouTube, and Twitter by enemies of the United States’ democratic system makes clear, optimizing for engagement at internet scale can have profoundly harmful near-term consequences for civil society.

As for the long-term consequences of creating dopamine-fueled filter bubbles, backed by business models that generate billions of dollars a year in profit on the back of emotionally gratifying clickbait?

It’s too soon to tell, but it ain’t looking so pretty from here.

Trusting the creators of any product to make accurate moral judgments when those judgments impact their profits is completely unrealistic (at best).

Likewise, expecting the managers, employees, and investors of any business to make clear-headed, objective evaluations of the “value” or “entertainment” provided by their creations is so naive that it’s hard for me to believe you thought carefully about what you are proposing.

We all wish to believe that we are good people, doing the best we can with the circumstances we’ve got.

Indeed, the human propensity and desire to believe in one’s own innate decency is so strong that:

  • Tobacco companies have no problem staffing themselves with employees who spend their days figuring out how to grow demand for carcinogenic tobacco products and fight anti-tobacco regulations in developing countries around the world…
  • Dozens of competent people with high degrees of self-regard can spend months working to undermine taxes on sugary drinks around the world, despite abundant evidence of sugar’s toxic effects…
  • Technology entrepreneurs, software developers, product managers, marketers, and executives at multi-billion-dollar technology companies can devote tremendous resources to finding new and better ways to hook new users and keep their existing users addicted and engaged, despite increasing evidence of the political polarization and mental-health downsides that result…

And still wake up every day and tell themselves they are doing nothing wrong.

Near the beginning of your essay, you write:

Yet, manipulation can’t be all bad. If it were, what explains the numerous multi-billion dollar industries that rely heavily on users willfully submitting to manipulation?

There are plenty of explanations for the creation and persistence of the many multi-billion dollar industries that rely on users submitting to manipulations that turn them into addicts.

Almost none of those explanations start with happy, balanced, well-functioning, and fully informed humans consciously deciding to hand over their willpower, control of their own attention, and the contents of their wallets in exchange for whatever.

So Nir, I’m afraid that when it comes to people and companies who profit from turning their users into addicts, there is no morality of manipulation.

There is only a spectrum of immorality, and it goes from “Highly Immoral” to “Absolutely, Relentlessly Evil.”