Signal #1 — “Make ‘Fairness by Design’ Part of Machine Learning”

Nicholas LiCalzi
Civic Analytics 2018
Sep 11, 2018
[Image. Caption: "Who wants to be judged by this thing?"]

Thanks to a string of recent missteps by its leaders, the public has grown increasingly skeptical of the tech industry and its utopian claims. AI and machine learning have come under particular scrutiny of late, with critics challenging their professed benefits, transparency, and legitimacy. In New York, Mayor de Blasio signed a bill earlier this year establishing a task force dedicated to reviewing and revising the city's decision-making algorithms, with an eye toward ensuring they treat people fairly and neutrally regardless of economic status, race, or membership in other protected classes.

A group of scientists has published a piece in the Harvard Business Review making the case for wider adoption of algorithmic review to counteract several biases they identify as threats to equitable machine learning research. Sampling bias, performance bias, confirmation bias, and anchoring bias can all skew the value of the research produced, while the algorithms themselves obscure, and even lend legitimacy to, whatever biases the researchers bring to the work. Their suggestions, with which I vehemently agree, include pairing data scientists with social scientists who are domain experts prepared to critique the assumptions built into models, along with more specific technical guidelines for de-biasing research.
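To make one of those failure modes concrete, here is a minimal sketch of sampling bias, using entirely made-up numbers: a hypothetical population has two groups with different outcome rates, and a data-collection pipeline that mostly reaches one group produces an estimate that quietly drifts away from the truth.

```python
import random

random.seed(0)

# Hypothetical population (all numbers invented for illustration):
# group A is 70% of the population with a 50% outcome rate;
# group B is 30% of the population with a 20% outcome rate.
population = (
    [("A", 1 if random.random() < 0.5 else 0) for _ in range(7000)]
    + [("B", 1 if random.random() < 0.2 else 0) for _ in range(3000)]
)

true_rate = sum(y for _, y in population) / len(population)

# Sampling bias: suppose our collection process reaches nearly everyone
# in group A but only about 10% of group B.
biased_sample = [(g, y) for g, y in population
                 if g == "A" or random.random() < 0.1]
sample_rate = sum(y for _, y in biased_sample) / len(biased_sample)

print(f"population outcome rate: {true_rate:.3f}")
print(f"biased-sample estimate:  {sample_rate:.3f}")
```

Because group B is underrepresented in the sample, the estimate overstates the population rate, and nothing in the downstream model will flag that on its own; this is exactly the kind of assumption a domain-expert reviewer is positioned to catch.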

I think all of us at CUSP, and data scientists more generally, would benefit from having our work reviewed at every phase of a project as a form of sanity- and fairness-checking. Failing that, we should make our work legible, reproducible, and open to critique by outsiders, so that our methodologies stay sound and our findings accurate.
