“Weapons of Math Destruction”, by Cathy O’Neil, was one of the books I had the pleasure of reading during the Christmas holidays. The goal of the book is clear: to explain how Big Data and Data Science can become powerful tools to generate or increase inequality and even threaten democracy. Throughout the book, O’Neil presents a large set of examples from very different contexts (ranging from education to job searching, from making police patrols efficient to insurance contracting, among others) to justify that claim.
Without going into much detail about the book (no spoilers), O’Neil states that the algorithms that rule many decisions in our lives can become a Weapon of Math Destruction, or WMD, when they meet these three characteristics:
- Scale: the algorithm affects a large number of people.
- Secrecy: there is no way for the people affected by the outcome of the algorithm to know how this outcome has been calculated, so a feedback loop does not exist or is very limited.
- Destructiveness: even when it is not the goal of the algorithm, its effects are negative for a large set of people.
As becomes clear from the examples O’Neil explains throughout the book, many of the algorithms that analysts claim are going to improve our lives are doing just the opposite for large groups of people.
As I mentioned in my article “AI in 2018. My wishes (not predictions) for this year”, one of my wishes for 2018 is that the ethical, legal and corporate social responsibility aspects associated with the use of AI gain momentum. Avoiding WMDs seems like a good reason to do so. Here are some specific points that I think we as a society should work on:
- Give people the power to decide which of their data can be used in massive algorithms. This would definitely mitigate the scale of WMDs (data would have to be explicitly shared) and would also be an impediment to secrecy. Regulations like the European GDPR should definitely contribute to this.
- Regulate the minimum information required to explain how massive algorithms work. Although the latest AI techniques are definitely a great improvement, not knowing why an algorithm does what it does can imply great destructiveness. An interesting article from MIT Technology Review calls this “The Dark Secret at the Heart of AI”.
- Avoid what O’Neil calls “proxies”: data fed into the algorithms that clearly contributes to bias. As mentioned before, there is no reason to believe Data Scientists deliberately want to build destructiveness into their algorithms. Nevertheless, when companies think they are using unbiased data, they might be relying on a proxy that generates unfairness. Companies should work on new methodologies to ensure none of these proxies are used, and regulations should even be made to avoid using this kind of data when it has been demonstrated to act as a proxy for sensitive information like race, sex or health.
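The proxy mechanism is easy to see with a toy example. The following is a minimal sketch using synthetic data and a hypothetical `zip_code` feature (my assumption, not from the book): even if a dataset drops the sensitive attribute, a proxy that is strongly correlated with it keeps that attribute almost fully recoverable, so any model trained on the proxy can reproduce the same bias.

```python
import random

random.seed(42)

# Synthetic population: "group" is the sensitive attribute we pretend
# to remove; "zip_code" is a proxy that correlates with it (mimicking,
# for example, residential segregation).
n = 10_000
rows = []
for _ in range(n):
    group = random.choice(["A", "B"])
    if group == "A":
        zip_code = 1 if random.random() < 0.9 else 0  # mostly zip 1
    else:
        zip_code = 0 if random.random() < 0.9 else 1  # mostly zip 0
    rows.append((group, zip_code))

# How well does the proxy ALONE predict the "removed" sensitive attribute?
recovered = sum(1 for g, z in rows if (z == 1) == (g == "A"))
accuracy = recovered / n
print(f"group recoverable from zip_code alone: {accuracy:.0%}")
```

With these assumed correlation levels the sensitive attribute is recoverable from the proxy roughly 90% of the time, which is why simply deleting the sensitive column is not enough: the methodologies mentioned above would also need to detect and handle features like this one.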
Some of the tech giants are already making significant efforts, by creating non-profit organizations like the AI Now Institute, which studies AI bias & inclusion factors, or by publishing recommendations on how to “develop and deploy new technologies in a way in which everyone can benefit”, as Microsoft points out in its recent “A Cloud for Global Good” policy roadmap, which includes specific AI updates.
If you find this topic interesting, I highly recommend reading Cathy O’Neil’s book. If you don’t have time to do so but still find it interesting, I hope you enjoy the following video, where O’Neil herself explains the concept of WMDs.