Knaves and moralists in the subway
In Washington, as in Madrid, London, and many other cities, subway travelers abide by a code of unwritten laws. Escalators are no exception. At Dupont station, around a 15-minute walk from the White House, there’s a fairly long one that moves thousands of people during rush hour. If you ever use it, you’ll have to choose between standing on the right (if you’re tired or just not in a hurry) or on the left (if you feel like walking up to get out faster). All this may seem a little trivial, but it’s actually fascinating: every day myriad humans follow the same social norm without any central authority demanding or enforcing it. Giving way to those in a hurry doesn’t show up in any law or regulation; in fact, its exact origin is unknown. And yet, the norm endures.
But why is this? In London we could credit the signs that politely ask us to “please stand on the right”. They inform everyone that such a rule exists, and thus make compliance easier. And, who knows, the presence of a sign may also convey that violating the rule carries some kind of punishment; I have no idea, but I would love to know. The fact remains that in Washington there are no signs suggesting a cost for non-compliance.
Violating the rule may help us understand what’s going on. If at rush hour we decide to move left and block the way on the escalator, two kinds of things will happen. First, it’s likely that someone walking up the stairs will convey, more or less subtly, that we had better move; the options include an “excuse me,” some huffing and puffing, or perhaps a discreet poke, depending on the context. This is called second-party punishment because it comes from someone directly involved in the violation of the rule. If we don’t move, the person behind us can’t get through. Ergo, that person tells us to move.
But the most interesting part is the second type of reaction: the one that comes from those who aren’t directly involved. I’m referring to the fact that if we block the way, within a few seconds we’ll notice disapproving gestures and looks (some more conciliatory or sympathetic than others) from our fellow travelers, regardless of whether our behavior affects them personally. All these wonderful people are carrying out a duty that’s critical for the survival of any social norm (and of civilization, let’s be clear): altruistic punishment, which means paying a personal cost to enforce a prosocial norm.
Let’s talk about experiments, though. Fortunately, in the last twenty years there has been an explosion of interest in the factors that explain why social norms and, especially, prosocial behavior are maintained. Let’s imagine a simple game. Several players sit around a table and each one receives a few coins at the beginning of the game. In the middle there is a pot. During each round, each player chooses how many coins to contribute to the common pot. Afterwards, the contents of the pot are doubled and distributed equally among all players. After a few rounds (say, twenty or fifty) the game is over and each player keeps whatever she has earned, including whatever remains of her starting coins. Note that the doubling of the pot means the largest possible pie is reached when all players contribute all of their coins, every single round. But this is not the optimal strategy for the individual: with more than two players, each coin you put in comes back to you as less than a coin, since the doubled amount is split among everyone.
In experimental economics this is called a Public Goods Game, because it shows how willing people are to contribute to a good (the common pot) that will be distributed equally at the end of the game, regardless of how much each player has contributed. This is key, because if a free rider decides not to contribute at all throughout the game, he will still receive an equal share of the pie.
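The incentives at work can be made concrete with a little arithmetic. Here is a minimal sketch of one round’s payoffs; the group size of four, the 20-coin endowment, and the doubling factor are illustrative assumptions, not parameters from any particular experiment:

```python
def payoffs(contributions, endowment=20, multiplier=2):
    """One round of a Public Goods Game.

    Each player keeps (endowment - contribution) and receives an
    equal share of the doubled pot, regardless of what they put in.
    """
    n = len(contributions)
    pot = sum(contributions) * multiplier
    share = pot / n
    return [endowment - c + share for c in contributions]

# Four players: three contribute everything, one free rides.
print(payoffs([20, 20, 20, 0]))   # → [30.0, 30.0, 30.0, 50.0]
# Everyone contributes: the total pie is maximized...
print(payoffs([20, 20, 20, 20]))  # → [40.0, 40.0, 40.0, 40.0]
# ...but, individually, the free rider beats every cooperator.
```

The free rider walks away with 50 coins while each cooperator gets 30: keeping your coins always dominates, even though universal contribution gives everyone more than universal defection.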
Whenever this experiment is run (following the methodology of Fehr and Gächter, 2000), a couple of patterns consistently emerge. At the beginning, quite a few of the players contribute to the common pot, perhaps showing their goodwill. However, as the game moves on, things go awry. Each round, some of the contributors get frustrated with the free riders and stop contributing. Although there is usually a small hardcore group of cooperators (Peter Turchin calls them saints) who keep contributing rain or shine, after a few rounds cooperation unravels. As a result, trust evaporates; the free riders have won. The players would have earned more coins if they had all kept contributing, but it was not to be.
It’s pretty sad, right? We can all come up with real life situations where something similar has happened, whether it’s the proverbial village commons that are depleted due to individual overconsumption, some political negotiation, or most student group work. People really don’t like investing resources and effort in a common project while others take advantage of them. It’s the classic tragedy of the commons.
However, something fantastic happens when we shake things up a little. Let’s say we change the game to add a punishment option. Now, after each player decides how much to contribute to the pot, and after those amounts are revealed, each player has the option of spending some of their money to “punish” whomever they want: for example, I can spend a coin to lower another player’s earnings. A priori, the effect is unclear: are people willing to spend money just to punish others, even though it earns them nothing?
The answer is a resounding yes. When we create a vehicle for punishment, the frustrated cooperators are quite dedicated to chasing and punishing the free riders. Most importantly, they are willing to pay an individual cost to enforce standards of conduct even though they do not get anything from it. Our dear friend, altruistic punishment, shows up.
The effects are quite large. In the “society without punishment,” cooperation disintegrates and practically disappears. In the second society, however, altruistic punishment manages both to sustain the initial levels of cooperation and to increase the fraction of cooperators every round. That is, free riders react and begin to contribute to the common pot after noticing that their behavior is frowned upon (and that it’s costing them coins). Contributions eventually stabilize at high levels.
Why do these patterns occur and what can we learn from them? What we know from experiments (and from real life, many would say) suggests that there are three types of people. As I mentioned before, those who contribute at all costs are our saints. Those who try to take advantage of the rest are free riders or knaves. Peter Turchin calls the third group moralists (in Spanish I called them justicieros, to convey that there’s an element of vigilantism in their behavior). They not only believe in the social norm but are willing to punish those who don’t follow it. The funny thing is that moralists are usually a majority: if you’ve ever gotten annoyed and perhaps frowned at people who watch videos without earphones on public transportation, chances are you’re one of them. In most experiments they surpass 50% of the population, and sometimes they get close to 70 or 80%.
But moralists are also notorious for their volatility. They tend to adapt their behavior to what they think of the people they’re interacting with (their beliefs and expectations), and they update those beliefs with the new information they receive round after round. A moralist who knows he’s surrounded by free riders will behave like one of them and will not contribute, which is why cooperation evaporates in the absence of punishment. However, if he thinks he’s surrounded by saints (or other moralists), he will contribute to the public good.
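A toy simulation can illustrate how the three types interact. To be clear, this is a hedged sketch, not the actual experimental design: the population mix (one saint, six moralists, three knaves), the moralists’ tolerance thresholds, and the assumption that a punished knave falls in line for good are all invented for illustration:

```python
def simulate(rounds=10, punish=False):
    """Fraction of players contributing each round of a toy Public
    Goods Game with three types: one saint (always contributes),
    three knaves (contribute only after being punished), and six
    moralists who contribute only if last round's cooperation rate
    met their personal tolerance threshold."""
    thresholds = [0.2, 0.35, 0.5, 0.65, 0.8, 0.95]  # the moralists
    n = 1 + len(thresholds) + 3                     # 10 players
    knaves_comply = False
    # Round 1: saint and moralists start out optimistic; knaves defect.
    frac = (1 + len(thresholds)) / n
    history = [frac]
    for _ in range(rounds - 1):
        if punish:
            # Moralists fined the defecting knaves last round, so
            # (in this toy model) the knaves fall in line for good.
            knaves_comply = True
        contributors = 1                                    # saint
        contributors += sum(t <= frac for t in thresholds)  # moralists
        contributors += 3 if knaves_comply else 0           # knaves
        frac = contributors / n
        history.append(frac)
    return history

print(simulate(6))               # → [0.7, 0.5, 0.4, 0.3, 0.2, 0.2]
print(simulate(6, punish=True))  # → [0.7, 0.8, 0.9, 0.9, 0.9, 0.9]
```

Without punishment the moralists drop out one by one as each round’s cooperation falls below another threshold, until only the saint and the most tolerant moralist remain; with punishment, the knaves start contributing, the moralists stay in, and cooperation stabilizes near the top, mirroring the pattern the experiments report.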
Is all this applicable to real life? Probably. A few months ago I wrote about trust and how informal institutions are just as important as laws and regulations. A society with no instruments to punish the knaves (Gambetta’s work on southern Italy and the Mafia is a clear example) will be less able to provide public goods and sustain prosocial norms. In fact, even the design and nature of the punishment instrument are key to determining whether it will succeed, although that’s an entirely different story. Even resilient norms may stop working if the instrument fails. If there’s something that exasperates Washingtonians, it’s the presence of a large group of tourists in the subway (update: the terrible quality of the subway exasperates them even more). Why? If the group is small, altruistic punishment works: the tourists end up moving to the right-hand side. But if the group is large, the punishment mechanism struggles and ceases to be effective. The rule stops applying until they have left the station and everything returns to normal.
Despite all the caveats, I’d like to think that the message of these experiments is optimistic. Humans are capable of both designing good norms and enforcing them autonomously (sometimes even unintentionally). But sometimes we don’t realize how critical this ritual of contributing and punishing can be. And maybe it’s time to change that, because a country’s informal institutions are as important as its written rules. In addition to designing good policies and laws, let’s make life easier for the moralists around us. After all, their scolding may be one of the pillars of human civilization.
Text originally published in Spanish in Jot Down Magazine.