Weapons of Math Destruction
How Big Data Increases Inequality and Threatens Democracy
I once heard a talk from Google Research on auction theory, specifically on the mathematical underpinnings of ad auctions. The speakers presented a few auction algorithms, their pros and cons, and the meta-question of how you measure the quality of an algorithm.
A good auction, they pointed out, must be easy to explain and fair. If the system is too complicated, advertisers will think they’re getting cheated.
I thought about this a lot as a contrast to the data science presented in Cathy O’Neil’s book “Weapons of Math Destruction.” Each chapter tells the story of a different sector of the economy, and the way we’re using statistics and machine learning algorithms to judge people. There are chapters on:
- How for-profit colleges target people they could sell dubious student loans to
- Prediction models for who will commit crime, and how that influences policing and sentencing policies
- The trend of using personality tests to screen job applicants and filter resumes
In the ad auction, the power relationship implies that advertisers can insist on a basic level of fairness and transparency. O’Neil believes that, in most cases, we’re using statistics to increase the power imbalance between the people doing the judging and the people being judged. There’s no such fairness.
She identifies three defining traits of the worst offenders.
- These algorithms are opaque. Sometimes, they’re kept secret for dubious reasons. With neural networks, we can’t always meaningfully explain the choices that the algorithm is making.
- They achieve a sort of monopoly power. Their judgments can’t be appealed, and people learn to game the dominant algorithm.
- They have bad or broken feedback loops. If there’s a bug in the algorithm, we need to be able to find it and fix it. But some learning models in use don’t have good measures of correctness that we can use to fix them.
As an example, consider the goal metric we might use for a crime prediction AI. An accurate metric might be “number of murders.” But murders are relatively rare, and it’s hard to draw quality statistical correlations from them. As a data scientist, I might add a second goal metric, “number of arrests,” which are more common, to give the model more statistical power.
Oops! I accidentally created a self-reinforcing feedback loop. More arrests tell the model to send more cops to the neighborhood, who then make more low-level nuisance arrests.
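That loop is easy to reproduce in a toy simulation. Here's a minimal sketch (all names and numbers are my own invention, not from the book): every neighborhood has the *same* true crime rate, but patrols are allocated in proportion to past arrest counts, and more patrols produce more arrests, so a random early imbalance gets amplified instead of corrected.

```python
import random

def simulate_policing(neighborhoods, steps=20, seed=0):
    """Toy model of a self-reinforcing predictive-policing feedback loop.

    Every neighborhood has an identical true crime rate. The "model"
    allocates patrols proportionally to past arrests, and each patrol
    has a chance of producing a nuisance arrest, which feeds back into
    the next round's allocation.
    """
    rng = random.Random(seed)
    arrests = {n: 1 for n in neighborhoods}   # uniform prior: one arrest each
    true_crime_rate = 0.1                     # identical everywhere, by design

    for _ in range(steps):
        total = sum(arrests.values())
        for n in neighborhoods:
            # Patrols follow past arrests -- this is the broken feedback loop.
            patrols = 10 * arrests[n] / total
            # More patrols means more opportunities to observe (and arrest).
            observed = sum(rng.random() < true_crime_rate
                           for _ in range(round(patrols * 5)))
            arrests[n] += observed            # feeds the next allocation round

    return arrests

result = simulate_policing(["A", "B", "C"])
```

Since the underlying crime rate is uniform, any divergence in `result` is pure noise amplified by the allocation rule; a metric like "number of arrests" would report the over-patrolled neighborhood as the most criminal, and the model would look like it's working.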
The book advocates an ethics of data science. She draws parallels to legal ethics. A famous legal principle says “It is better that ten guilty persons escape than that one innocent suffer.” Data science could have a similar principle where we’re willing to sacrifice some efficiency to ensure that innocent people aren’t judged on spurious correlations. People should have the right to appeal judgments by algorithms — like appeals in courts. People should have the right to ask how the algorithm judged them — like legal standards of evidence.
I thought a lot about how engineers could make that work, and what world that would imply.