These Weapons Will Mathematically KILL You
Not literally, well maybe…
We are ignorant. It should be no surprise.
We are ignorant of the current data epoch. You know this. I know this. However, data has been shown to have harmful consequences. Recently, we’ve learned of the damage that Cambridge Analytica caused. I’m a data aficionado. But the troubles with handling data have demonstrated that transparency has not been a priority.
Social media can alter our decision-making.
Corporations, politics, social media, and influencers are data hungry.
Sports have leveraged the ‘Moneyball’ concept to seek out any advantage.
The financial system has coined ‘quantitative finance,’ a field solely dedicated to building models that discover arbitrage.
However, I am not against the use of data to improve decision-making. If social media curates articles or posts that align with my beliefs, then kudos to these platforms. Algorithms can efficiently decipher patterns in Big Data, something we cannot do. In the medical, financial, and, to some extent, education fields, they have shown better performance than humans.
So why am I writing this post?
Well, two reasons. First, I would like to write one book review per week. Hence, this is my first book review. However, I will incorporate my personality into these book reviews. I will not follow the typical book review format: beginning, middle, and end. Instead, I will write about the top lessons or controversies, or even recreate a fictional story from the book’s main idea.
I think the book’s title should hint at my first book review. In “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” Cathy O’Neil presents her understanding of Big Data.
Should we fear Big Data?
I do not think so. I believe Big Data can help us. If you are familiar with Daniel Kahneman, Richard Thaler, or Dan Ariely, then I am sure you understand that we are imperfect. Thus, shouldn’t innovation be a weapon that minimizes our imperfections? I believe so. Then, what is the concern?
First, let’s understand models. There are two broad types of machine learning: supervised and unsupervised.
We control models that fall under supervised learning. We choose a model (linear regression, logistic regression, KNN). We fit it by minimizing a cost function. We evaluate its performance on a held-out test set. We either tweak the model or choose another one. Simple enough.
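That loop can be sketched in a few lines. This is a toy illustration with invented data, using a one-variable linear regression fit in closed form; the helper names and numbers are my own, not from the book.

```python
# A minimal sketch of the supervised-learning loop: pick a model,
# fit it by minimizing a cost function, judge it on a held-out test
# set, then tweak or swap the model if the error is too high.

def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b (closed-form solution)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def mse(model, xs, ys):
    """Cost function: mean squared error of the model's predictions."""
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Train/test split: fit on one slice of the data, evaluate on the rest.
train_x, train_y = [0, 1, 2, 3], [0.1, 2.1, 3.9, 6.0]
test_x, test_y = [4, 5], [8.1, 9.9]

model = fit_linear(train_x, train_y)
test_error = mse(model, test_x, test_y)
print(round(test_error, 3))  # low error -> keep the model; high -> tweak it
```

The key discipline is that the model is judged on data it never saw during fitting; that is what makes supervised learning auditable in a way the black-box cases below are not.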
The trouble stems from the opaque models.
These are often called black-box models because their method of optimization is hard to inspect. One example is the neural network, whose internal workings resist human interpretation.
How these Weapons of Math Destruction Can Kill You:
Imagine you live in a low-income community. You’re a young male from a minority background. You have no criminal record. Understandably, the police want to reduce crime. Historical data indicates that your community has a high crime rate. Thus, the number of police officers is increased. But this does not resolve the whole dilemma: police cannot solve crimes if there aren’t crimes to solve.
If the police want to reduce crime, they need to predict criminal activity. O’Neil argues this incentive to predict criminal activity runs into problems. The police will use historical data to predict crimes. Hence, low-income communities will see a police influx, which, in itself, is not a bad thing.
But low-level crimes are inevitable. The problem occurs with the predictive models. If the police arrest people for small crimes, those arrests validate the predictive models. Hence, the police department justifies its behavior with the model’s accuracy. Only when the analysis is scrutinized do we recognize this feedback loop.
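The feedback loop is easy to simulate. Below is a toy model with entirely invented numbers: two neighborhoods commit low-level offenses at the same true rate, patrols are allocated in proportion to past recorded arrests, and new recorded arrests scale with patrol presence. The point is that the arrest data mirrors the patrol allocation, not the underlying crime, so the historical bias never corrects itself.

```python
TRUE_OFFENSE_RATE = 0.05  # assumed identical in both neighborhoods

def arrest_shares(initial_arrests, rounds=5, total_patrols=100):
    """Each round, patrols go where past arrests were recorded, and
    new recorded arrests scale with patrol presence -- not with the
    (equal) underlying offense rate."""
    history = dict(initial_arrests)
    shares = []
    for _ in range(rounds):
        total = sum(history.values())
        # Allocate patrols in proportion to historical arrest counts.
        patrols = {n: total_patrols * a / total for n, a in history.items()}
        # More patrols find more low-level offenses, everywhere.
        for n in history:
            history[n] += patrols[n] * TRUE_OFFENSE_RATE * 10
        shares.append(history["low_income"] / sum(history.values()))
    return shares

shares = arrest_shares({"low_income": 60, "wealthy": 40})
print(shares)  # the historical 60/40 bias never drifts toward 50/50
```

Even though both neighborhoods offend at the same rate, the low-income share of recorded arrests stays locked at its historical value round after round: the model keeps “validating” itself.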
I understand that, theoretically, stopping low-level crimes should prevent more dangerous crimes. But does a cost-benefit analysis justify arresting young adults for low-level crimes and jeopardizing their futures?
From the Huffington Post:
Recently New York City lost a federal civil rights challenge to their police stop and frisk practices by the Center for Constitutional Rights during which police stopped over 500,000 people annually without any indication that the people stopped had been involved in any crime at all.
Despite the fact that Black and white people use marijuana at the same rates, the ACLU found a Black person is 3.7 times more likely to be arrested for possession of marijuana than a white person.
Thus, if the police were patrolling a wealthier neighborhood, the results would be similar. But they are not targeting wealthier neighborhoods. Police target low-income communities, which validates their models. People suffer detrimental consequences because of low-level crimes. And to be frank, who does not commit a low-level crime?
Here’s the cycle: there is an influx of police in poor communities. Statistically, young adults are penalized for low-level crimes. They don’t have bail money. Their records are affected. When they get out of jail, finding an occupation is difficult. Now they have to make an income some way. Resources are scarce. And the cycle continues.
Weapons of Math Destruction might appear harmless, but with enough small bullets, these weapons kill you and kill you again.
The education system now implements machine learning models to evaluate a teacher’s performance. Unfortunately, these models are not transparent. Hence, teachers cannot figure out how to improve. For example, O’Neil illustrates an incident: a teacher received an A one year. She continued teaching the same way but received an F the year after. Worse, she was fired. She transferred to a private school without ever comprehending how her performance had decreased.
O’Neil describes another incident. Machine learning only understands data; there is no adequate method to verify the authenticity of that data. For example, if the previous teacher ‘inflated’ her students’ grades, the subsequent teacher is affected: students who arrive with inflated grades will appear not to progress, dragging down her score.
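Here is a toy version of that distortion. The scoring formula is an assumed simplification, not any district’s actual model: a teacher’s score is just the average gain between students’ incoming and outgoing marks, so inflated incoming marks mechanically punish the next teacher.

```python
# Toy value-added score: average gain from incoming to outgoing marks.
# If the previous teacher inflated the incoming marks, this year's
# teacher is penalized even though real learning occurred.

def value_added(incoming, outgoing):
    gains = [out - inc for inc, out in zip(incoming, outgoing)]
    return sum(gains) / len(gains)

true_start = [70, 75, 80]   # what the students actually knew
honest_end = [78, 82, 86]   # genuine progress under this teacher

# Case 1: the previous teacher reported honest grades.
fair_score = value_added(true_start, honest_end)

# Case 2: the previous teacher inflated every grade by 15 points.
inflated_start = [s + 15 for s in true_start]
unfair_score = value_added(inflated_start, honest_end)

print(fair_score, unfair_score)  # -> 7.0 -8.0: same teaching, opposite verdicts
```

The teaching is identical in both cases; only the upstream data changed. A model that cannot audit its inputs cannot tell these two teachers apart.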
You might think it is best for the teacher not to know the model, but then how will she learn and perform better?
Weapons of Math Destruction can kill you without you realizing it. They’re like underwater missiles: you cannot protect yourself against what you cannot see.
You are a student applying for college. But is the application process the same for all students? It is not. Obviously. But are the differences deliberately manufactured? O’Neil believes so. Students from more educated backgrounds have an advantage. It’s not primarily monetary, at least not directly. Rather, it is informational. O’Neil reasons that if a firm can promise students acceptance to Ivy League schools, then the admission models those schools use can be gamed.
Here are a couple of articles that detail the concept:
- https://www.nbcnews.com/news/asian-america/company-will-guarantee-get-your-student-their-dream-college-price-n821791
- https://nypost.com/2017/11/11/this-life-coach-gets-teens-into-ivy-schools-for-950-an-hour/
Another example is the “Top Ranks” lists for universities. A high rank earns a university attention, attention generates applications, and applications generate tuition. Thus, universities aim to climb the list.
Is there anything wrong with universities striving for top placement or firms guaranteeing acceptance from top-tier universities?
Yes and no. After all, life is a model. We all work on variables that will improve our desired outcomes: money, happiness, or success. However, it also means the models are unreliable. For example, some universities hired well-known professors on a part-time basis to boost their position in the rankings. This did not improve their education, only their placement. Or they offer amenities that are uncorrelated with education. Notice the discrepancy.
Universities adjust the variables that improve their ranking. Since some of the models measuring universities do not consider education quality (which is admittedly difficult to measure), universities do not prioritize education.
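The gaming is mechanical once you see the formula. Below is a toy ranking score with invented weights and category names: it rewards measurable proxies but has no term for teaching quality, so the cheapest way up the list is to optimize the proxies.

```python
# Toy ranking formula (weights and categories are invented for
# illustration): the score rewards measurable proxies, and teaching
# quality never enters the calculation.

WEIGHTS = {"famous_faculty": 0.5, "amenities": 0.3, "selectivity": 0.2}

def ranking_score(university):
    return sum(WEIGHTS[k] * university[k] for k in WEIGHTS)

before = {"famous_faculty": 40, "amenities": 50, "selectivity": 60,
          "teaching_quality": 70}  # unmeasured by the formula

# Hire part-time star professors and build a climbing wall;
# teaching quality is left untouched.
after = dict(before, famous_faculty=90, amenities=80)

print(ranking_score(before), ranking_score(after))
# The score jumps while teaching_quality never affects the result.
```

Any attribute missing from `WEIGHTS` is invisible to the score, so a rational university spends nothing on it. That is the discrepancy O’Neil highlights: the model optimizes its proxies, not the thing we actually care about.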
Weapons of Math Destruction only kill those who lack power, similar to how First World countries took advantage of Third World countries.
As much as I love data, I think it is important to acknowledge the drawbacks. Elon Musk fears the consequences of Artificial Intelligence. Moreover, the average American will be affected the most. Hence, average Americans should be aware of the dangers and do their best to bring awareness and change to others!
WANT MORE…
If so, I suggest following my Instagram page. I post summaries of and thoughts on the books I have read and am currently reading.
Instagram: Booktheories, Personal
Follow me on: Twitter, GitHub, and LinkedIn
AND if you liked this article, I’d appreciate it if you click on the like button below. THANKS!