Dumb on Purpose

A Behavioral Criticism of Meritocracy

This essay presents what I think is a fairly novel problem with meritocracy: namely, that meritocracy incentivizes people to pursue ideas they know are bad, but that they also know will earn them rewards. “Meritocracy” refers here to any system in which people are rewarded — financially, socially, or otherwise — in proportion to their merits (their achievements). The problem emerges from two features of meritocracy in practice:

  1. Poor measurements of merit
  2. The high cost of earning merit

We measure merit variously, but nearly all these measurements leave abusable semantic gaps. By abusable, I mean that one can take action to fulfill the measurement without truly achieving anything useful. The obvious, extreme examples are simply criminal: one can become wealthy by stealing, famous by lying, etc. Most examples are milder, but they’re common enough to be worrying: one can earn a high score on a test honestly without learning how to apply that knowledge in practice, make money off a contentless clickbait news article, win reelection despite failing to represent voters while in office, etc.

Secondly, earning merit, whether honestly or fraudulently, requires investment. Graduating from college, climbing the corporate ladder, and building a fan base all cost time and energy. Even a thief incurs costs and takes on risks to practice his “profession.”

Together, these two ideas create an incentive for people to continue pursuing bad ideas even after they realize the ideas are bad. It’s a little like a modified sunk cost fallacy. The lesson of the fallacy is that it’s always a mistake to continue work on a doomed idea, regardless of how much has been invested in it, because the idea is doomed either way. Once we account for the inexact measurement of merit, however, an idea doomed to create no real value may still earn fraudulent merit. If one has already invested in that idea, continuing may be the rational decision.

Imagine you set out to develop a new product, but once you’ve got a working version, you realize it’s much less useful than you envisioned. You could scrap it and restart, but you might be able to sell the bad product if you just market it the right way. You’ve got $100 in your bank account and a 3-year-old at home: what do you do? Even though you know the product is underwhelming, you can’t afford to restart and lose everything you’ve invested.
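The bind in this example can be made concrete with a toy expected-value calculation. Every probability and dollar figure below is hypothetical, invented only to show how gameable measurement plus sunk investment can make the knowingly bad option the rational one:

```python
# Toy expected-value model of the product dilemma above.
# All probabilities and dollar figures are hypothetical illustrations.

def expected_payoff(success_prob: float, payoff: float, additional_cost: float) -> float:
    """Expected net gain of a plan, ignoring costs already sunk."""
    return success_prob * payoff - additional_cost

# Option A: market the underwhelming product you already built.
# Cheap to try; a real chance somebody buys it anyway.
continue_bad = expected_payoff(success_prob=0.5, payoff=400, additional_cost=50)

# Option B: scrap it and build what you now know you should have built.
# A better product if it works, but the re-investment is steep.
restart = expected_payoff(success_prob=0.75, payoff=800, additional_cost=650)

print(continue_bad)  # 150.0
print(restart)       # -50.0
```

On these invented numbers, marketing the known-bad product beats restarting in expectation, and with only $100 in the bank, the downside of the restart isn’t survivable at all. Crucially, Option A only pays off because measured merit (a sale) can diverge from real merit (a useful product).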

It’s hard to say definitively how big of a problem this is today, but it seems common. A 2015 YouGov poll found that 37% of British adults believe their jobs “[do] not mak[e] a meaningful contribution to the world.” ¹ Anecdotally, most of us can probably recall moments when we’ve fallen into this behavior. We’ve gone to work and done what our bosses told us even when we knew it was dumb, because we couldn’t risk losing our jobs or career progress; or we’ve learned to write in a special, professionally useless style in order to earn high marks on standardized tests; or we’ve finished degree programs that we’ve lost faith in, because the diploma would grant us merit regardless of the education it represented; and so on.

Note that continuing to believe an idea is worthwhile because you’re invested in it is a different phenomenon. That may also be a problem, and it’s one that isn’t as easily addressed, since it involves deep psychological issues. This essay deals exclusively with the phenomenon of continuing to work on an idea that you know is bad.

Solutions

As mentioned above, this problem is created by two properties of meritocracy: gameable measurements of merit and a high cost to attain any kind of merit. A couple of directions that might help, then, are:

1. Measuring merit more accurately

If the semantic distance between measured merit and real merit shrinks, attaining merit fraudulently will be more difficult, and people will be less incentivized to knowingly pursue bad ideas. What does more accurate measurement of merit mean? It means matching incentives more closely to values: scrapping tests that measure only test-taking ability, using technology to gather more reliable data while preserving privacy, passing regulations on rent-seeking, etc. This approach says: if neoliberalism isn’t working, do it harder!

The good news is that improving measurements seems realistic. We have the tools we need to change data collection, algorithms, and reward structures. On the other hand, measurement of a concept as broad and abstract as “merit” will always be inexact, and that inexactness will always leave room for fraud. Efforts to “improve metrics” also tend toward state surveillance (see speed cameras), and there’s a limit to how far we can push that safely.

(Figure: an example of the inevitable difference between symbol and referent.)

2. Decoupling rewards from measured accomplishment

In a more general sense, people are bound into this dilemma by meritocracy itself: if earning merit is the only way to get anything in life, we are constantly exposed to this bad incentive. But earning merit is not the only way to gain. We are given many things “undeservedly”: the love of our parents, for one, and a set of inalienable rights from the state, for another. By further decoupling rewards from measured accomplishment, we would lessen incentives to knowingly work on dumb ideas. This is the socialist approach, more or less.

Since this method basically means abandoning meritocracy, it does promise an effective escape from meritocracy’s flaws. The success of removing artificial incentive systems, however, depends on the inherent goodness of the natural incentives underneath them. If we get rid of standardized tests because they incentivize learning the wrong things, we’d better hope that whatever people will want to learn without them is better. We should also acknowledge that human nature might have meritocratic leanings — an inborn emphasis on earning your place, maybe — and an instinctual meritocracy, if it exists, can’t be disassembled. In the case of something like universal basic income (UBI), we would be decoupling success in the job market from financial success to some degree, but also implicitly adding a new incentive to qualify for UBI, which is a new attack surface for fraud.

Since both approaches have serious flaws, the best answer is probably a mix of the two. We should use neoliberal artificial incentives when:

  • the semantic distance between measured accomplishment and real accomplishment is small (a big challenge!), and
  • taking these measurements does not require unacceptable breaches of privacy.

Distributed ledgers with private transactions are helpful novelties and may eventually provide transparent and reliable measurement with minimal breaches of privacy (they don’t do the semantic distance work, though — that’s up to us to design).

We should suppress artificial incentives (use social welfare) when:

  • the semantic distance between measured accomplishment and real accomplishment is large (as it often is and will be),
  • suppressing current incentives won’t create even worse ones, and
  • natural behaviors lead to good outcomes.
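Read as a decision procedure, the two checklists above can be sketched as a toy function. The function name and return labels are my own invention, added purely to make the branching explicit:

```python
def recommend_incentive_policy(semantic_distance_small: bool,
                               privacy_acceptable: bool,
                               no_worse_incentives: bool,
                               natural_behaviors_good: bool) -> str:
    """Toy encoding of the two checklists above; an illustration, not a policy engine."""
    # Use neoliberal artificial incentives when measurement is trustworthy
    # and gathering it doesn't require unacceptable breaches of privacy.
    if semantic_distance_small and privacy_acceptable:
        return "artificial incentives"
    # Suppress artificial incentives when measurement is untrustworthy,
    # suppression won't create even worse incentives, and natural behaviors
    # already lead to good outcomes.
    if (not semantic_distance_small) and no_worse_incentives and natural_behaviors_good:
        return "social welfare"
    # Otherwise the framework is silent; judge case by case.
    return "case by case"
```

The third branch is worth noting: when measurement is poor but suppressing incentives would backfire (or natural behaviors aren’t trustworthy either), neither checklist applies and the framework offers no guidance.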

To me, this framework suggests that more social welfare may be appropriate for work decisions like choosing a career, a product to develop, or a direction to research. These are all areas where definitive measures of success are elusive: they have a market value, but that measurement seems to have a high failure rate. They are also areas where we expect people’s natural inclinations to point toward usefulness: if the meritocracy were not forcing them into a poor choice, they would make (what they believe to be) a more useful one. The idea is not to pay people to do whatever they want, but to subsidize getting out of doing things they hate or think are wasteful. Of course, increasing welfare in these areas would add a new incentive to fraudulently qualify for the welfare program, which would need to be accounted for.

In all, this seems like a new argument for UBI or other work subsidies²: reducing the meritocratic pressure that forces people to make choices they know are bad.


¹ This poll was connected with David Graeber’s Bullshit Jobs. Contra Graeber, this essay suggests that the 37% of Brits who said their jobs weren’t useful may simply be stuck in this meritocratic incentive bind, rather than being victims of the proliferation of useless jobs that Graeber attributes to automation. Their jobs are not necessarily “bullshit” at all, but they are probably a waste of potential.

² The simplest change might be to extend limited unemployment benefits to those who quit their jobs, rather than offering benefits only to those who are unemployed through no fault of their own (people who’ve been fired or laid off, and so forth).