Can Algorithmic Bias be Prevented?

Tobias
6 min read · Jul 7, 2019


The danger of algorithmic bias grows in lockstep with the exponential spread of algorithms. Algorithmic bias can affect us everywhere, from minor trivia such as our social media feeds to critical decisions where bias can wreak havoc with our life dreams or our company's survival. Often specific groups of individuals are affected: COMPAS, an algorithm US authorities use to estimate how likely a criminal is to re-offend, has been found to exhibit racial bias, and Google's ad-picking algorithms have been found to show female users ads for lower-paying jobs. However, companies can suffer from algorithmic bias as well. One bank, for example, was almost bankrupted because the credit score it used had an inherent bias that triggered catastrophic underwriting decisions en masse.

The question is: Can algorithmic bias be fought and prevented? I believe so, but only if it is properly understood and, importantly, data scientists join forces with the business managers and government agencies who commission algorithms and design the overall decision processes that use them. This is not how most people currently think about the problem.

First of all, algorithms are not evil. In fact, headlines about algorithmic bias often overlook that those algorithms replace human decisions that can be even more biased. For example, a number of research studies have shown that bail and parole judges show bias when the defendant is black or when the judge is tired. Orchestras typically did not even consider female applicants before introducing the now common practice of hiding auditioning musicians behind a curtain, thus concealing their gender. It is therefore worrisome that the new European General Data Protection Regulation sees the solution to algorithmic bias in the "human in the loop" it requires whenever a decision has a "significant" or "legal" effect on an individual. I remember the overconfidence bias of a senior risk manager at a large, now bankrupt US bank who dismissed a soccer-field-sized warning flag waved by an algorithm about the risks hidden in home equity loans as an "algorithmic error."

The truth is that for many decision problems algorithms still are the best available approach for making a fair decision (in fact, the statistical procedures calibrating algorithms are designed to be unbiased), and that many techniques exist to contain or mitigate algorithmic bias. However, when algorithms learn about the world, they take the data fed into them at face value, and therefore they often still mirror the many biases of their creators, users, and society at large.

Where exactly do algorithmic biases come from? I distinguish six major sources, many of which consist of a plethora of different traps into which an algorithm can fall, and in my new book "Understand, Manage, and Prevent Algorithmic Bias: A Guide for Business Users and Data Scientists" I need 56 (luckily fun-packed!) pages just to explain them all before getting into solutions. The following examples give you the gist, however. First of all, algorithms are developed based on data, and that data often is flawed, e.g., because someone accidentally deleted some of it or an important part of it never actually got saved anywhere. Sometimes the amount of data available is also insufficient: if the data teaching an algorithm about the likelihood of criminals to re-offend contains a single case of a nuclear scientist, the algorithm may fall into the trap of believing that all nuclear scientists behave the same way (a problem called over-fitting). Other algorithmic biases arise from decision biases of the data scientist (e.g., if the definition of an outcome as good or bad is itself biased). Algorithms also exhibit an innate stability bias: in a way, they navigate the world by looking in the rearview mirror, so if tank tops with leopard prints were popular recently, an algorithm might assume the same for the future. More troubling, however, biases can also arise from interactions with users (e.g., through the items you click on in your social media feed). And many algorithmic biases simply reflect deeply rooted, self-fulfilling behavioral biases in the real world, which explains why statistical techniques alone cannot prevent or eliminate them.
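
To make the over-fitting trap above concrete, here is a minimal sketch with entirely synthetic data (the occupation codes and outcomes are made up) of how an unconstrained decision tree generalizes from a single case, and how a simple guardrail such as a minimum leaf size blunts the effect:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# 999 cases with occupation codes 0-3, plus one "nuclear scientist" (code 4)
occupation = np.append(rng.integers(0, 4, size=999), 4).reshape(-1, 1)
reoffended = rng.integers(0, 2, size=1000)  # outcomes are pure noise here
reoffended[-1] = 1                          # the lone nuclear scientist re-offended

# An unconstrained tree memorizes the singleton and generalizes from one case
overfit = DecisionTreeClassifier().fit(occupation, reoffended)
print(overfit.predict([[4]]))               # [1]: "all nuclear scientists re-offend"

# Requiring at least 50 cases per leaf forces the model to pool sparse groups
regularized = DecisionTreeClassifier(min_samples_leaf=50).fit(occupation, reoffended)
print(regularized.predict([[4]]))           # driven by ~250 neighbors, not one case
```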

The good news is that for all these problems solutions can be found; very often, however, data scientists cannot implement them on their own. For example, what do you do if the only data you have is deeply affected by gender bias (think of pay or evaluation data)? The best outcomes can be achieved if managers and data scientists work together to collect truly unbiased data. This is an arduous task, however, that often requires carefully constructed experiments. If you want to build an algorithm to assess which attributes of a job applicant are most predictive of a particular skill, you must first design a failsafe approach to objectively assess this skill and, if humans conduct part or all of the assessment, train them on the new approach. If evaluators record assertiveness in men as "leadership potential" but assertiveness in women as "bossiness," you have a problem: running a successful experiment to collect unbiased data will require you first to figure out how to get evaluators to assess assertiveness consistently without regard to gender.
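
To illustrate the kind of check this entails, the sketch below (with hypothetical column names such as leadership_rating, assertiveness, and is_female) probes whether evaluators rate equally assertive candidates differently by gender:

```python
# A sketch of one consistency check on evaluation data before trusting it:
# regress the subjective rating on an objective assertiveness score, gender,
# and their interaction. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

evals = pd.read_csv("evaluations.csv")  # one row per evaluated candidate

# 'assertiveness * is_female' expands to both main effects plus the interaction
model = smf.ols("leadership_rating ~ assertiveness * is_female", data=evals).fit()
print(model.summary())

# A large, significant coefficient on is_female (or on the interaction) means
# equally assertive candidates are rated differently by gender; the data
# should not train an algorithm until the evaluation process itself is fixed.
```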

In the end, any organization trying to combat algorithmic bias can identify a finite list of structured, pragmatic steps to go through in order to wrestle down the problem. Critically, however, this is not a cookie-cutter approach: business users and data scientists need to start with a top-down, prioritized view of the biggest issues for a given decision problem and, based on that, identify the key practices to follow. As I demonstrate in my book, this can indeed be accomplished if you combine time-tested, "rule of thumb" type checklists with tools specifically designed to encourage fruitful dialogue between data scientists and business users. A case in point: rather than making model documentation a dreadful bureaucratic paperwork exercise, I propose a series of open-ended questions that, if answered in writing during the model development process, create a Q&A-style document. Such a document amounts to complete and engaging documentation of the algorithm's development, enabling any lay reader to flag issues (such as wrong or biased assumptions) and to understand the algorithm's limitations (e.g., situations in which its output should be relabeled "I don't know" instead of whatever default value comes out of the statistical formula). How many times have you encountered an algorithm that in certain situations says "I don't know"? That's my point: today such pragmatic solutions somehow get lost in the gulf between data scientists and business users.
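
For the curious, here is a minimal sketch of what saying "I don't know" could look like in code; the wrapper, thresholds, and classifier interface are illustrative assumptions rather than a recipe from the book:

```python
# A wrapper that abstains when an input falls outside the feature ranges seen
# in training, or when the model's confidence is barely better than a coin
# flip. Assumes a fitted scikit-learn-style classifier with predict_proba.
import numpy as np

class AbstainingModel:
    def __init__(self, model, X_train, min_confidence=0.7):
        self.model = model
        self.lo = X_train.min(axis=0)   # per-feature lower bounds from training
        self.hi = X_train.max(axis=0)   # per-feature upper bounds from training
        self.min_confidence = min_confidence

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        if np.any(x < self.lo) or np.any(x > self.hi):
            return "I don't know"       # input outside the model's experience
        confidence = self.model.predict_proba([x])[0].max()
        if confidence < self.min_confidence:
            return "I don't know"       # too close to a guess to act on
        return self.model.predict([x])[0]

# Hypothetical usage: safe_model = AbstainingModel(fitted_clf, X_train)
#                     safe_model.predict(new_case)
```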

Some solutions also lie completely outside the realm of statistical theory. For example, if you know that historically decisions and outcomes in the real world have been shaped by deeply ingrained biases, you could literally adjust the algorithm to remove that bias; like brain surgery, this is not a task data scientists casually engage in, but it is a promising approach for difficult situations. In other situations, a deep understanding of the challenges at hand might lead a business user to decide against using an algorithm altogether, or to design a decision process around specific biases (often resulting in hybrid approaches where an algorithm is combined with human judgment or other criteria to engineer an unbiased outcome).
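
To give a flavor of such surgery, the sketch below (a deliberately simplistic example, not a recommendation for any particular use case) removes a group-level shift from a score while preserving the ranking within each group:

```python
# Crude illustration: shift each group's scores so that group membership no
# longer moves the average score. Function and column names are hypothetical;
# real debiasing (e.g., reweighing training data) is considerably more involved.
import pandas as pd

def remove_group_shift(scores: pd.Series, group: pd.Series) -> pd.Series:
    # Subtract each group's own mean, then re-center on the overall mean,
    # so within-group ranking is preserved but the group-level gap is gone.
    group_means = scores.groupby(group).transform("mean")
    return scores - group_means + scores.mean()

# Hypothetical usage:
# df["adjusted_pay_score"] = remove_group_shift(df["pay_score"], df["gender"])
```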

Artificial intelligence therefore is both a boon and a menace. By making algorithms faster and cheaper than ever before, machine learning creates unprecedented opportunities to take bias out of flawed human decisions, saving lives, say, when algorithms help doctors make better diagnoses. At the same time, poorly designed and monitored algorithms can introduce bias in ever more places, especially when the black-box nature of many algorithms shields biases from easy discovery, or when the apparently effortless way in which new software tools spin up algorithms in an instant tempts users to skip the work of detecting and removing algorithmic bias.

And not surprisingly, the first step in overcoming algorithmic bias lies in overcoming a human bias: the overconfidence of the creators and owners of algorithms, who often believe that, unlike everyone else's, their own algorithms somehow, magically, manage to avoid bias.

Tobias Baer is the author of “Understand, Manage, and Prevent Algorithmic Bias”, now available from Apress/Springer Nature. Click here if you’d like to find out more, or directly jump here to buy it.


Tobias

I am a quirky data science and risk management expert supporting Fintechs/start-ups as a senior advisor and pursuing psychological research. >> tobiasbaer.com