Cathy O’Neil has been one of the most important public voices raising concerns about the indiscriminate use of algorithms in decision making and the danger this presents to society. For many of us, her book ‘Weapons of Math Destruction’ has been a powerful motivator for our work and for our students. That makes it all the more puzzling that she wrote a New York Times Op-Ed accusing academics of being “asleep at the wheel” when it comes to talking and writing about the role of algorithms in society. Here are four ways in which her article incorrectly frames the issues and misrepresents the underlying facts.
It punches down, not up
Academics (and researchers in general) are fighting a heavily asymmetric battle against the inappropriate use and abuse of algorithms. We often don’t have access to the data, or to the underlying algorithms (because they’re often proprietary), and it might even be illegal to probe APIs to determine what some of these processes are doing. Despite these hurdles, there is an active and thriving academic community working to expose and address these problems. So why pick on the people trying to fix them? Why not instead hold accountable those with the power and resources who are perpetuating them?
It pits academia against industry
In framing the discussion around academia’s failure to serve as an effective backstop against malicious or careless industry actors, the article has created a false dichotomy between academia and industry. There are many thinkers who have pioneered efforts towards better and fairer algorithms while working in industry, and there are several industry initiatives dedicated to these same goals. The real tension here is between the scholars (in academia or industry or nonprofits or government) exploring these issues and the technologists who are deploying solutions thoughtlessly.
It ignores how academics are already being effective
The Op-Ed argues that academics are silent as industry forges ahead with the use (and abuse) of algorithms. In fact, researchers have made amazing discoveries about what goes on inside black box decision-making procedures, despite having limited access to the data associated with these procedures. Moreover, academics have led the way in developing fair and transparent algorithms that can be used by industry or government entities if they choose to adopt them. By communicating their work to journalists, academics have also played an enormous role in shaping the public discourse around issues of algorithmic fairness and accountability.
The article also willfully overlooks clear evidence of researchers engaging with government and with industry on a variety of activities. Across the world, policymakers, regulators, practitioners, and advocates are working with scholars to forge a new consensus on how algorithmic decision-making fits into society at large, and what kind of governance mechanisms need to be put in place.
Finally, academics are actively training the next generation of practitioners to develop and apply algorithms with an attention to their social impacts. Science and Technology Studies, among other fields, has long addressed these topics, but new courses on ethics and fairness are increasingly being offered within computer science departments.
It ignores how far we’ve come
Dr. O’Neil complains that academic researchers aren’t speaking up about these issues. As a critique of computer science, her comment might have been apt in 2007, but just in the last several years there’s been an explosion of interest and research in aspects of fairness, accountability, and transparency in the field. One data point is the rapid rise of the FATML (Fairness, Accountability, and Transparency in Machine Learning) series of workshops, which has now culminated in the FAT* conference happening this February in New York City. In fact, there has been such substantial research output in this area that some venues have expanded to encompass not only machine learning but a wide variety of disciplines, including other sub-disciplines of computer science, statistics, social science, law, and public policy. Further, this work is earning accolades at mainstream computer science conferences, including best paper awards at ICML, UAI, and EMNLP.
We agree with Dr. O’Neil that the instances of algorithmic abuse far outnumber the success stories at this point, and we’ve committed ourselves to working in this area to improve this state of affairs. But it sends a very different message when the Op-Ed asserts that nothing is being done. There’s a tone of desperation and alarm that just doesn’t reflect the reality of how much is being accomplished, and that tone can lead to defeatism and perpetuate the false idea that science is antithetical to society’s needs. While we have a long way to go, there is definitely room for optimism about the trajectory and pace of developments.