Biased by Design

The dangers of cognitive and algorithmic bias

ThinkTech Seminars
ThinkTech
7 min read · Apr 26, 2021


Alexia Tefel-Escudero

Daniel Kahneman’s book “Thinking, Fast and Slow” offers its readers a comprehensive analysis of the inner workings of our mind, and an approach to understanding our decision-making process. As we design technological applications of Artificial Intelligence, it is worth thinking through our own biases and how they are replicated by algorithms.

Decision Making and Algorithms

There are many perspectives from which Thinking, Fast and Slow can be analysed. But one of the questions that I believe will have the most significance in our lives is the relationship between the way we think and the way we design algorithms for systems of Artificial Intelligence, Deep Learning, and Machine Learning.

For instance, we know that human beings are biased in many ways — even in ways that we are not necessarily aware of. People tend to think that technology can provide a viable path to making unbiased decisions, but the reality is that computers are not exempt from the biases of the people who created them.

Algorithms are already being used to make major decisions about our lives, such as who is hired for a job, who gets a loan from a bank, and even who gets arrested and for how long. So, if these algorithms are biased or flawed in any way, they could actually amplify injustice and inequality instead of contributing to impartial decisions. For this reason, it is important to understand our own biases, how they are transferred to Artificial Intelligence, and how we can prevent algorithmic bias.

A Peek Inside Our Minds

We are constantly making decisions, every day. It would be exhausting to carefully ponder each and every one of them, so it makes sense for us to develop shortcuts that make our decision making easier. These shortcuts are known as heuristics, and Kahneman introduces them in the second part of his book, along with the concept of bias. Heuristics and biases provide basic rules of thumb for decision-making. For most of the decisions we make, we follow our immediate, intuitive answers and grow accustomed to always following these rules. As a result, the shortcuts we take can easily harden into biases that affect our preferences and ways of thinking. There are different types of biases; in this article, I will focus on three cognitive biases introduced by Kahneman.

Photo by mahdis mousavi on Unsplash

The first one is confirmation bias: the tendency to seek out information that validates what we already believe. As a result, we do not look for objective facts; we remember only the details that uphold our beliefs and ignore those that challenge them. For example, if a person believes that left-handed people are more creative than right-handed people, this person will look for evidence that proves this belief and readily accept any example that confirms it. It is important to note, however, that even though we tend to follow these patterns of thought, we are not determined by them, since it is possible for us to overcome our biases.

The second type of bias is related to cognitive ease, which has to do with the notions of correlation and causation. These are statistical terms that help us draw conclusions and infer links between an event and its possible consequences. The two concepts are easily confused, so it is helpful to explain each one before moving on.

  • On the one hand, correlation tells us that two things tend to happen at the same time. It simply states that a relation exists between two variables, without explaining the reasoning behind it. For example, I might notice that on days when I go running, there are more cars on the road.
  • On the other hand, causation states that the change in the value of one variable will cause a change in the value of another variable. In other words, one thing causes another (e.g., after exercising, I feel physically exhausted). This is also known as “cause and effect.”

With these concepts in mind, we can understand that correlation does not imply causation. Returning to the earlier left-handedness example: just because the top student in a class happens to be left-handed does not mean that left-handed people are smarter than right-handed people.
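The distinction can be made concrete with a small, hypothetical simulation. In the sketch below (synthetic data, names invented for illustration), a hidden third factor, hot weather, drives both ice-cream sales and sunburn cases. The two series end up strongly correlated even though neither causes the other:

```python
import random

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# A hidden confounder (temperature) drives both observed variables.
weather = [random.uniform(10, 35) for _ in range(500)]
ice_cream = [2.0 * w + random.gauss(0, 3) for w in weather]
sunburns = [0.5 * w + random.gauss(0, 2) for w in weather]

r = pearson(ice_cream, sunburns)
print(f"correlation: {r:.2f}")  # strongly positive, despite no causal link
```

Seeing a high correlation here and concluding that ice cream causes sunburn would be exactly the cognitive-ease mistake Kahneman describes: the intuitive causal story feels easier than the confounded reality.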

The third and last type of cognitive bias analyzed here is the illusion of truth. Also known as the “illusory truth effect”, this bias describes how frequent repetition is a reliable way to make people believe falsehoods. Because familiarity is not easily distinguished from truth, the things we are exposed to repeatedly feel truer, and we therefore become biased to believe them over alternatives that do not seem as familiar.

Even if we are not aware of these biases, they dominate our thought processes and guide our decision making. Still, this does not mean that biases are bad; all it tells us is that our brain is taking shortcuts by finding patterns in data. Rather, we should focus on identifying the situations in which it becomes worthwhile to overcome these biases and engage in deeper critical thinking. Biases turn into bad practices when we fail to acknowledge exceptions to patterns, and when they lead to unjust behaviors towards others (such as racial discrimination). Therefore, it is important to distinguish bias, which we all have, from discrimination, which we can prevent.

Picture a Shoe

Bias also takes place in AI systems. This can be illustrated with a simple mental exercise. Before you continue reading, close your eyes and picture a shoe. Just a shoe, the first one that comes to mind.

Done? Did you think of anything similar to these options?

We might not realize it, but each of us is usually biased towards thinking about one type of shoe over other types. Thus, when we are training a computer to recognize a shoe, we might end up exposing it to our own biases. Just because something is based on data does not really mean that it can be automatically made neutral. Our human biases become part of the technologies we create in many ways.

Three of the most common types of algorithmic biases are interaction bias, latent bias, and selection bias:

  • First, interaction bias refers to how users can introduce bias into an algorithm through the ways they interact with it. For example, a study by computer scientist and algorithmic bias researcher Joy Buolamwini examined facial-analysis software and discovered an error rate of 0.8 percent for light-skinned men and 34.7 percent for dark-skinned women. This happened because the model had been given far fewer examples of darker skin tones to learn from.
  • Second, we see latent bias when an algorithm incorrectly draws correlations of ideas with gender, race, income, and other sociocultural factors. An example of this is when you search “nurse” on Google and the images you see are mostly of women, even though we know there are male nurses as well.
  • Lastly, selection bias occurs when the data used to train the algorithm is not representative of the entire population, and so it operates worse for the underrepresented compared to others. We were able to see this when Google Photos’ image-recognition algorithms misclassified black people as gorillas in 2015.
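Selection bias, in particular, can be demonstrated with a toy experiment. The sketch below is a hypothetical, synthetic setup (invented numbers, not any real system): a simple one-feature threshold classifier is "trained" on data dominated by group A, whose feature distribution differs from group B's. The learned threshold works well for the majority group and poorly for the underrepresented one:

```python
import random

random.seed(1)

def make_samples(n, mean_pos, mean_neg):
    # Each sample is (feature value, true label). Positives and negatives
    # are drawn from Gaussians centered at different means.
    pos = [(random.gauss(mean_pos, 1.0), 1) for _ in range(n // 2)]
    neg = [(random.gauss(mean_neg, 1.0), 0) for _ in range(n // 2)]
    return pos + neg

# Group B's feature distribution is shifted relative to group A's.
group_a = make_samples(1000, mean_pos=3.0, mean_neg=0.0)
group_b = make_samples(1000, mean_pos=6.0, mean_neg=3.5)

# Training set: group B is badly underrepresented (50 of 1000 samples).
train = group_a[:950] + group_b[:50]

# "Training": pick the threshold that best separates labels in the training set.
best_t, best_acc = 0.0, -1.0
for t in [x / 10 for x in range(-20, 80)]:
    acc = sum((f > t) == bool(lbl) for f, lbl in train) / len(train)
    if acc > best_acc:
        best_t, best_acc = t, acc

def accuracy(samples, t):
    return sum((f > t) == bool(lbl) for f, lbl in samples) / len(samples)

print(f"accuracy on group A: {accuracy(group_a, best_t):.2f}")
print(f"accuracy on group B: {accuracy(group_b, best_t):.2f}")  # markedly lower
```

The model is not malicious; it simply optimized for the data it saw. That is the essence of selection bias: the gap appears even though nothing about group membership was used explicitly.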

Monitoring AI systems for biases and discrimination is a huge responsibility, especially due to the many ways technology is affecting our everyday lives. One step we must take to examine algorithmic bias is simply to recognize that AI will, in fact, be biased. Also, in order to have less biased algorithms, more representative data is needed to train them. Being aware of these facts will allow us to be critical about AI recommendations instead of blindly accepting them without a second thought.

The Extent of Our Ignorance

Melvin Kranzberg, a specialist in the history of technology, developed a series of “laws of technology” that came to be known as Kranzberg’s Laws. They essentially explain the interactions that the development of technology has with sociocultural change. The first of these laws is especially relevant here. Kranzberg said that “technology is neither good nor bad; nor is it neutral.” What this entails is that, while we are the ones who build technology, it is reciprocally shaping us as well.

Most probably, we will not get unbiased algorithms or AI models until something similar to Artificial General Intelligence is created. Some say that this outcome is inevitable. Still, whether it happens or not, the current development of AI serves to spark even more questions about what it means to be human in this day and age. Some of the most pressing concerns may be, for example, how our thought processes and decision-making capacities will evolve as we leave a good part of the thinking to machines.

Nevertheless, there is something unique that distinguishes humans from these technologies. Because we learn from experience, we are building our own datasets. And one of the most amazing abilities we have is that we can expand our knowledge by understanding different points of view. When something is out of the ordinary, we do not just dismiss it as erroneous; we adapt our way of understanding that element and build on the model we had. What makes us human is that we are capable of knowing the extent of our own ignorance.

Alexia Tefel-Escudero studies Philosophy, Politics and Economics at the University of Navarra | LinkedIn



We are a community of university students from different disciplines. We write about technology and its role in the present and future of society.