Review ~ Weapons of Math Destruction

Eric Tang
Sep 9, 2018 · 4 min read
Weapons of Math Destruction, by Cathy O’Neil

This decade has been filled with utopian predictions about the power of “Big Data” to solve our transportation crisis, transform our government, and (even) make us immortal. These grand visions have been accompanied by equally dire warnings that the same algorithms will ultimately wipe out humanity. In the middle of this overheated debate, Cathy O’Neil offers a solid, grounded primer on the ways that algorithms can hurt ordinary citizens.

In Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Cathy O’Neil bounces between several sectors where “Big Data” has reared its head. She writes about credit score algorithms, teacher-ranking algorithms, and voter-targeting algorithms. She also includes her own compelling journey to disillusionment with the world of corporate data analysis. Throughout, O’Neil writes about the qualities that destructive algorithms share. In particular, if you’re considering building anything remotely like the above algorithms, here are two things to avoid.


(1) Black Boxes

Black Box (n.): A set of data goes in. A score comes out. What happens in between? Who knows!

No one really knows how the game is played / The art of the trade / How the sausage gets made

One disturbing example O’Neil presents is a model for evaluating teacher performance in Washington, DC. By analyzing student demographics and test scores, Mathematica Policy Research claimed that they could identify underperforming teachers by comparing expected student gains with actual student gains. In 2009, Washington’s chancellor of schools announced that the district would use Mathematica’s model to identify and fire underperforming teachers.
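
To make the mechanism concrete, here is a minimal sketch of what a “value-added” calculation might look like. The expected-score formula, the threshold-free scoring, and the numbers are all invented for illustration; Mathematica’s actual model is proprietary and far more elaborate.

```python
# A toy "value-added" score, invented for illustration only.
# Mathematica's real model is proprietary and far more complicated.

def expected_score(prior_score, adjustment):
    """Hypothetical prediction of what a student 'should' score this year."""
    return prior_score + adjustment

def value_added(students):
    """Average of (actual - expected) gains across one teacher's students."""
    gains = [s["actual"] - expected_score(s["prior"], s["adjustment"])
             for s in students]
    return sum(gains) / len(gains)

# With only ~30 students, a few outliers can swing this number sharply,
# and the teacher never sees how it was computed.
classroom = [
    {"prior": 70, "adjustment": 2, "actual": 75},
    {"prior": 65, "adjustment": 0, "actual": 61},
    {"prior": 80, "adjustment": 1, "actual": 78},
]
print(round(value_added(classroom), 2))  # one number that can end a career
```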

Intuitively, this model’s quite disturbing. To assume that thirty students’ test scores can capture a teacher’s character and potential seems ridiculous. O’Neil explains another major problem with the system: teachers couldn’t see how the model worked. First, the algorithms that determined their fate were proprietary secrets, sealed off from teachers and the general public alike. Second, even if the teachers had access to the algorithm, it could give them no advice on how to improve their teaching, certainly not in the way a human mentor could.

Yet the model had real consequences for teachers: in 2010, Washington laid off the “worst” 5% of its teachers, using a metric partially determined by scores on Mathematica’s model. These included teachers who were lauded by their communities as excellent educators. The model caught none of that.

O’Neil presents plenty of other opaque models with real-world consequences. Algorithms predict whether defendants will commit another crime; that score is then used to determine prison sentences. Companies use personality tests to evaluate candidates, hiring and firing based on those scores. The subjects of these models rarely get to see why the models make their decisions, leaving them completely in the dark.


(2) Bias

In the last few years, plenty of authors have written about the dangers of biased algorithms. O’Neil presents her own clear overview of the topic, showing that algorithms are far from fair, blindfolded judges.

Take programs like PredPol, CompStat, or HunchLab (I really don’t name these things), which all seek to analyze crime records and determine where police officers should spend their patrol time. These programs have been implemented in cities across the country, including major hubs like New York and Philadelphia. Officers feed in past crime data and get a map showing where future crimes are likely to occur. Seems like a neat idea.

But the existing data fed into the model reflects existing bias in the police force. Police are more likely to stop and frisk, use force, and arrest citizens in poor, African-American and Latinx neighborhoods. Those neighborhoods’ arrest records are then fed into the model, leading it to recommend that officers keep aggressively patrolling the same neighborhoods.
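
A crude sketch of that feedback loop might look like the following. To be clear, this is not how PredPol, CompStat, or HunchLab actually work; the neighborhood names, arrest counts, and patrol rule are all invented.

```python
# A crude sketch of the predictive-policing feedback loop described above.
# All names, counts, and rules below are invented for illustration.

ARRESTS_PER_OFFICER = 5                      # assumed: more patrols -> more recorded arrests
arrests = {"downtown": 120, "uptown": 40}    # historical (already biased) data

def allocate_patrols(arrest_counts, officers=10):
    """Send officers in proportion to past recorded arrests."""
    total = sum(arrest_counts.values())
    return {hood: round(officers * count / total)
            for hood, count in arrest_counts.items()}

for year in range(3):
    patrols = allocate_patrols(arrests)
    for hood, n_officers in patrols.items():
        arrests[hood] += n_officers * ARRESTS_PER_OFFICER
    print(f"year {year}: patrols={patrols}, recorded arrests={arrests}")
```

Even this toy version shows the loop: the neighborhood that starts with more recorded arrests keeps drawing more patrols, which generate more recorded arrests, whatever the true crime rates are.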

I understand that these models are more complex than I’m giving them credit for. I understand that there may be more sophisticated ways to engineer bias out of these systems. Still, I’m skeptical of how successful those efforts are, or how widely they’ve been adopted. Even a blatantly racist, classist model can still seem authoritative to the average citizen.

And that’s likely the greatest danger of these models: they paint a glossy sheen of “fairness” and “science” over the same old bias. They give us a false sense of objectivity. I’m reminded of movements like eugenics and phrenology, attempting to graft a scientific veneer onto simple, old-fashioned racism.


O’Neil candidly observes that humans have these two flaws, too. Humans are sexist and racist. Human decision-makers — from bureaucrats to judges to bankers — are often opaque, offering few explanations. Still, the major point O’Neil makes is that we’d be too optimistic to assume that algorithms will end these issues. In many cases, algorithms worsen them.

In general, reducing human beings (teachers, prisoners, employees) to a few characteristics can be alarmingly destructive. Using algorithms responsibly demands humility: an awareness of what we can and cannot accurately measure. Using algorithms responsibly also depends on how we act on their predictions. We could, for example, use a teacher-rating algorithm to fire “underperforming” teachers, or use it to identify potentially struggling teachers and offer them professional development. In the closing chapters of Weapons of Math Destruction, O’Neil offers a few rays of hope, but the overarching tone is that of a warning.

tl;dr Big Data responsibly, kids.
