Which is easier to correct, an algorithm’s bias or a human’s?

Enrique Dans

--

A fascinating New York Times article, “Biased algorithms are easier to fix than biased people”, explores growing concerns that many algorithms are biased: skewed against women when assessing job candidates, reflecting preconceptions in dating apps, and slanted against certain groups in criminal profiling, health care and advertising.

Algorithms, by their nature, offer no guarantees against bias, and this should be obvious to everyone by now. The bias comes from the data we use to train them, which may well contain implicit prejudices or preconceptions of various kinds, some of them not easily spotted. The biases algorithms acquire from their data are, however, relatively easy to detect, verify and correct mathematically, using techniques similar to those long used to identify or balance sampling bias.
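As a rough illustration of what “balancing a sampling bias” can mean in practice (the data and group labels below are made up for the example), one common technique is to reweight training examples so that an over-represented group no longer dominates:

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Weight each example by the inverse of its group's frequency,
    so under-represented groups count as much as over-represented ones."""
    counts = Counter(groups)
    total = len(groups)
    n_groups = len(counts)
    # Each group's total weight becomes equal: total / (n_groups * count)
    return [total / (n_groups * counts[g]) for g in groups]

# Hypothetical sample: 8 applicants from group "A", only 2 from group "B"
groups = ["A"] * 8 + ["B"] * 2
weights = inverse_frequency_weights(groups)
# After reweighting, each group contributes the same total weight (5.0),
# so a model trained on the weighted data no longer favors the majority group.
```

This is only a sketch of one correction; the point is that once the skew is measured, the fix is a mechanical adjustment, which is precisely what is hard to do with a person’s prejudices.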

People are biased too, particularly on certain topics: our decision-making is often skewed toward one position or another, reflecting our ideas, upbringing and beliefs. While personal bias can also be identified through mathematical analysis, it is much harder to correct than the bias of a machine learning algorithm, and sometimes requires replacing individuals or changing the decision-making chain of command.

--

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)