We are all fools in one way or another. Fools for love, fools for vanity, fools for greed and arrogance, laziness and envy.
In the fourth century AD, the Egyptian hermit Evagrius the Solitary classified human failings into eight major groups. In 590, Pope Gregory I revised Evagrius’ list to create the canonical seven deadly sins of the Catholic Church: lust, gluttony, greed, sloth, wrath, envy, and pride, which were to be defeated by the cultivation of the corresponding virtues of chastity, temperance, generosity (charity), diligence, patience, gratitude, and humility.
The struggle against vice and for virtue — what Aristotle called “the control of the appetites by reason” — is a constant. Laws punish, or at least threaten, our most egregious failings, and the cultivation of virtue of one kind or another thrives in venues as distinct as churches, diet and addiction programs, meditation studios, and CrossFit gyms. There are entire industries of self-improvement designed to help us overcome our real or perceived failings.
The study and classification of human failing also continues. Behavioral psychologists have identified at least 188 distinct cognitive biases that cause people to make bad choices, and economists have urged the design of systems that “nudge” people to make better ones.
But there are other industries, far larger and more pervasive, that nudge us to indulge our failings: restaurant portions too large, unhealthy snack foods lab-tested to make us crave more, advertising that persuades us to buy things we don’t need, financial firms that reap outsized gains while betting against their customers, politics microtargeted to appeal to voters’ prejudices rather than serve the public good, and, of course, “news” headlines designed to make us outraged or titillated rather than informed.
This is the world explored by Nobel Prize–winning economists George Akerlof and Robert Shiller in their book Phishing for Phools: The Economics of Manipulation and Deception. Since 1996, phishing has, of course, referred to the use of spoofed webpages, email addresses, or other forms of internet communication to lure unsuspecting users into giving up their secure credentials so they can be scammed. But Akerlof and Shiller use the term far more broadly, pointing out that our businesses, our politics, and our society use pervasive phishing-style deception to prey on human cognitive biases.
Economists have long based their theories on a world where rationality is triumphant, where the “invisible hand” described by Adam Smith masterminds an efficient market in which people, in the aggregate, make good choices. You can see the invisible hand for yourself at the supermarket checkout line, Akerlof and Shiller observe, where each customer joins whichever line is shortest and all the lines end up being about the same length. Departures from efficient markets are referred to as “market failures,” and economists have gone to great lengths to categorize them, much as the fathers of the Church categorized the vices and psychologists categorize cognitive biases.
Akerlof and Shiller propose a different theory, one with that marvelous quality of the best insights, which change the way you see, so the world never looks quite the same way again: Fraud and abuse are not market failures. There is an efficient market for everything, including manipulation, fraud, and abuse. For every phish, there is at least one “phool.” We each line up for the phishes that best match our own flawed estimate of our supposedly rational choices, and the phishermen efficiently learn to cast their lures where they can catch their self-selected prey.
Akerlof and Shiller published their book in 2015. If they had written it today, they would surely have included more than a passing mention of social media to join their accounts of the role of phishing in financial crises, advertising, “phood” and pharma, tobacco and alcohol, politics, junk bonds, and corporate looting for short-term profit.
In 2016, we saw a large-scale test of efficient market theory. Social media platforms are economies — marketplaces of content and feeling in which each of us shares what we like and others choose to view it. Our social media feeds are the equivalent of that checkout line. We click on the posts we want to see, but just as Akerlof and Shiller predict, what we choose is not what is best for us — the shortest line, the best price — but a choice that is shaped for us by invisible nudges that may not have our best interests at heart.
Those nudges come from two sources: the social media algorithms themselves, which rank and organize what we see, and those who have learned to phish those algorithms.
When the Facebook News Feed team designed their algorithms, they believed that by showing us more of what we liked, shared, and commented on, they would bring us closer to our friends, give us news and information that those friends collectively thought was worth reading, and create networked communities of interest, and by doing so, create a great advertising-based business for themselves. They didn’t account for the phishermen.
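The engagement-driven ranking described above can be sketched as a toy scorer. Everything here is an illustrative assumption, not Facebook's actual system: the field names, the weights, and the linear scoring function are invented for the sketch, but any "show us more of what we liked, shared, and commented on" feed has roughly this shape.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    shares: int
    comments: int

def engagement_score(post, w_like=1.0, w_share=3.0, w_comment=2.0):
    """Weighted sum of engagement signals. The weights are hypothetical."""
    return w_like * post.likes + w_share * post.shares + w_comment * post.comments

def rank_feed(posts):
    """Order the feed by engagement, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("friend", likes=10, shares=0, comments=2),
    Post("phisher", likes=500, shares=120, comments=300),  # outrage bait
])
# The engagement-maximizing post rises to the top regardless of its
# truth value — which is exactly the surface the phishermen target.
```

The vulnerability is visible in the sketch itself: the scorer has no term for accuracy or for the reader's well-being, so anyone who can manufacture engagement can buy ranking.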
The 2016 U.S. presidential election demonstrated phishing on a grand scale. Fake news for profit, fake news as foreign intervention, fake people with fake social media profiles posting fake stories, A/B tested to see which ones would be reshared by the largest number of phools. And traditional media, long the gatekeepers of trust, piled right on, because the efficient market told them to abandon that role and instead find the headlines and stories that would garner the most attention and thereby the most advertising revenue. Social media platforms were completely unprepared to respond to the way in which not only their users but also their algorithms were being phished.
Most dismaying of all, we have begun to realize that the social media platforms themselves rely on a gigantic phish of their users. “The thought process that went into building these applications, Facebook being the first of them,…was all about: ‘How do we consume as much of your time and conscious attention as possible?’” Sean Parker, Facebook’s founding president, said in a recent interview with Axios. “And that means that we need to give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever.…It’s a social-validation feedback loop…exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”
In the end, Parker came to the same conclusion as Akerlof and Shiller: Manipulation of human cognitive bias is not a market failure. It’s intrinsic to the design of social media and, as it turns out, to all advertising-based services.
How do we get out of this phish? A renewed emphasis on subscription business models has brought at least some publications back to focus on truth in journalism, but they are now a tiny part of the media ecosystem. And Facebook, so far, has been too slow to respond.
“I believe we must be extremely cautious about becoming arbiters of truth,” Mark Zuckerberg wrote in response to the first reports that fake news on Facebook had perhaps shaped the outcome of the presidential election. He is right to be cautious, but Facebook must step up to the challenge. Once Facebook began curating the news feed to show some postings in preference to others based on which ones get the most enthusiastic response, it made itself the arbiter of attention, with attention made proxy for truth. There’s no way forward but to get good at it.
Much as Pope Gregory I matched up each vice with its corresponding virtue, Facebook might do well to train its algorithms not just to exploit but also to recognize and respond to manipulation based on known human cognitive biases. What does virtue mean in an algorithm? What does virtue mean in a world of behavioral economics rather than the seven deadly sins? It’s time to find out.
There are modern-day philosophers of digital virtue, like Tristan Harris, whose campaign to make social media users more mindful and to get tech firms to develop applications that encourage “time well spent” has garnered a great deal of attention, a small amount of soul searching by tech entrepreneurs, and an even smaller ration of action. What the economic lessons explored by Akerlof and Shiller teach us, though, is that the “resistance” to phishing must be social, political, and economic as well as personal.
Despite Evagrius the Solitary’s evocative name, virtue is not a solitary pursuit. Social norms, laws, and regulations play an essential role. If Facebook and other social media platforms don’t step up to their responsibility, governments will do it for them.
Left to itself, the market does not automatically provide optimal outcomes, either for individuals or for society. The phishermen and we phools make sure of that.