The Ethics of Algorithms

Devin Kawailani Barricklow
Published in (Re)Thinking Tech · Mar 1, 2018
Photo Credit: TxDonor.com

Algorithms are often lauded as a means of effectively using data to make objective predictions — but many experts, like mathematician Cathy O’Neil, have found that this isn’t always the case.

“Bad algorithms” are a real and serious issue. Depending on what data is used to create them and how they are created, algorithms can perpetuate cultural biases. A Google image search for “unprofessional hair,” for example, yields almost entirely images of black women, because the search results reflect how Google users have clicked on results over time. Algorithms can also negatively affect you in ways you might not realize: if you don’t have a history of comparison shopping, a car insurance company can track that behavior and charge you more for car insurance (which is certainly not what you would hope for, unless you’re the insurance agent).
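To make that click-driven feedback loop concrete, here is a minimal, hypothetical sketch (not Google’s actual ranking system): results that get clicked rise in the ranking, so an early skew in what users click becomes self-reinforcing.

```python
# Toy model of a click-feedback ranking loop (hypothetical, for illustration
# only). Whatever starts at the top attracts more clicks, and more clicks
# keep it at the top.
import random

results = {"image_a": 0, "image_b": 0, "image_c": 0}  # result -> click count

def ranked(results):
    # Sort results by accumulated clicks, most-clicked first.
    return sorted(results, key=results.get, reverse=True)

random.seed(0)
for _ in range(1000):
    ordering = ranked(results)
    # Assume users click the top result 60% of the time regardless of
    # relevance, so the initial ordering tends to entrench itself.
    clicked = ordering[0] if random.random() < 0.6 else random.choice(ordering)
    results[clicked] += 1

print(ranked(results))  # the arbitrary initial winner dominates the ranking
```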

A recent New York Times article by Bärí A. Williams, a legal and operations executive in the tech industry, has also addressed the intersection of algorithm technology and racial bias. Williams explains, “A.I. works by taking large volumes of information and distilling it down to simple concepts, categories and rules and then predicting future responses and outcomes. This is a function of the beliefs, assumptions and capabilities of the people who do the coding. A.I. learns by repetition and association, and all of that is based on the information we — humans who hold all the racial and often, specifically, anti-black biases of our society — feed it.”
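As a minimal sketch of the dynamic Williams describes (hypothetical data and decision rule, not any real system’s model), a system that “learns by repetition and association” from biased historical decisions will simply reproduce those decisions, regardless of the coder’s intent:

```python
# Hypothetical illustration of Williams's point: "learning" here is pure
# association with past decisions, so historical bias becomes the model.
from collections import defaultdict

# Invented historical data: (neighborhood, was_approved). The past
# decisions were biased against neighborhood "B".
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 30 + [("B", False)] * 70)

# "Training" is just counting: approval rate per neighborhood.
counts = defaultdict(lambda: [0, 0])  # neighborhood -> [approvals, total]
for neighborhood, approved in history:
    counts[neighborhood][0] += approved
    counts[neighborhood][1] += 1

def predict(neighborhood):
    approvals, total = counts[neighborhood]
    # Approve whenever the historical approval rate was at least 50%.
    return approvals / total >= 0.5

print(predict("A"))  # True  -- the model absorbed the historical bias
print(predict("B"))  # False -- and now automates it at scale
```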

Companies can even exploit their knowledge of algorithms to game the system in ways that are extremely harmful. O’Neil gives the example of Volkswagen, which used an algorithm to trick emissions tests and hide the fact that its vehicles actually emitted 35 times the legal level of nitrogen oxide. Because so few people understand algorithms and how they work, the scam went undetected for five years.
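At its core, the “defeat device” pattern O’Neil describes is a simple conditional. Here is a schematic, hypothetical sketch of the idea (not Volkswagen’s actual software): detect conditions that look like a lab test, and behave cleanly only then.

```python
# Schematic sketch of a "defeat device" (hypothetical code, for illustration
# only). The software guesses when it is being tested and switches modes.
def looks_like_emissions_test(steering_angle, speed_profile):
    # Lab test cycles run on a dynamometer: the wheels turn but the steering
    # wheel does not move. A purely hypothetical detection heuristic.
    return steering_angle == 0 and speed_profile == "standard_test_cycle"

def engine_mode(steering_angle, speed_profile):
    if looks_like_emissions_test(steering_angle, speed_profile):
        return "low_emissions"    # clean mode: passes the test
    return "high_performance"     # normal driving: far over the legal limit

print(engine_mode(0, "standard_test_cycle"))  # "low_emissions"
print(engine_mode(15, "highway"))             # "high_performance"
```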

O’Neil goes on to discuss how we should deal with illegal and otherwise shifty algorithms, stressing that we need more transparency into how algorithms work and how they might harm people. She states, “The current nature of algorithms is secret, proprietary code, protected as the ‘secret sauce’ of corporations. They’re so secret that most online scoring systems aren’t even apparent to the people targeted by them. That means those people also don’t know the score they’ve been given, nor can they complain about or contest those scores.”

In the criminal justice context, recidivism risk algorithms have been found to raise serious concerns about racial bias. The Center for Court Innovation has been working in this area and published an article titled “Race and Risk Assessment: Would We Know a Fair Tool If We Saw It?” in Perspective Online, Spring 2017 (p. 46). O’Neil delves further into this problem in her article “Recidivism Risk Algorithms Are Inherently Discriminatory.”

While it may be difficult to know exactly what to do about this technology, there is no question that we need more transparency in how algorithms are built, and a willingness to fix the problems they cause when they arise.
