Between human and artificial decision-making processes

Sabrine Hamroun, PhD
8 min read · Jul 16, 2024


Photo by Andy Kelly on Unsplash

Have you interacted with an artificial intelligence today? The answer is likely yes, and many times over. If you have been on social media, read or written an email, checked your weather app, listened to music, or booked a taxi online, then you have interacted with an artificial intelligence system. The same goes for this article: unless someone sent you a direct link, you probably found it through a recommender system, a type of AI. One milestone after another, artificial intelligence has become a regular part of our daily lives, even shaping the way we live. Interactions between humans and AI systems have taken many forms, some of which are explored in this article.

I- Human and artificial decision-making: first cousins or distant relatives?

1- When AI research meets cognitive science

Artificial intelligence is generally considered to have been founded at the 1956 Dartmouth Summer Research Project, during which several scientists and mathematicians brainstormed how to create a machine that could think like a human being. Since then (and even before), human and artificial decision-making have long been topics of interest for researchers in multiple fields. Herbert Simon, a Nobel Prize-winning scientist, introduced the concept of bounded rationality. His theory states that, as decision-makers, human beings are limited in their rationality and tend to choose not the optimal option but a satisfactory one given the context. Simon himself was among the pioneers of the artificial intelligence field. He co-created the Logic Theorist, the computer program considered to be the first AI. Based on a search tree and logical processes, the Logic Theorist was able to prove 38 mathematical theorems.

2- Biological and artificial neural networks

The biological structure of the brain has been an inspiration for artificial intelligence development. For instance, the work of two Nobel Prize-winning neurophysiologists, Hubel and Wiesel, on the visual cortex of cats inspired the structure of Convolutional Neural Networks (CNNs), a type of neural network widely used in image processing for purposes such as predicting an image's content. Their research showed that, when presented with various lines, cells in the visual cortex would fire differently depending on the lines' orientation, angle, and color. CNNs use filters for pattern recognition, i.e., they allow specific neurons to recognize particular patterns (edges, angles, etc.) within blocks of pixels, as the sketch below illustrates.
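To make the filter idea concrete, here is a minimal sketch using NumPy and SciPy on a made-up 4x6 "image": a hand-crafted vertical-edge filter produces large responses exactly where pixel intensity changes sharply. In a trained CNN the filter weights are learned from data rather than written by hand; this toy example only shows the mechanics.

```python
import numpy as np
from scipy.signal import correlate2d  # CNN "convolutions" are cross-correlations

# Toy grayscale image: dark left half, bright right half (a vertical edge).
image = np.array([
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
], dtype=float)

# Hand-crafted vertical-edge filter; a real CNN learns such weights during training.
vertical_edge_filter = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

# Sliding the filter over the image yields a feature map:
# values are large where the pattern (a vertical edge) is present, zero elsewhere.
feature_map = correlate2d(image, vertical_edge_filter, mode="valid")
print(feature_map)
# Each row reads [0, 3, 3, 0]: the filter "fires" only around the edge.
```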

3- Evaluating artificial intelligence vs human intelligence

People have long been interested in creating a system that can think like a human being. But the initial question at the heart of this project, "Can machines think?", as controversial as it is complex, was replaced by simpler ones, such as: "Can machines do what we (as thinking entities) can do?" It was to answer the latter that Alan Turing proposed what later became known as the "Turing test," an experiment designed to evaluate whether people could distinguish between human-generated and machine-generated conversation.

Another way to evaluate an artificial intelligence’s “rational” decision-making is presented in this paper. Its authors wanted to determine whether different versions of Large Language Models (LLMs) would be prone to the same irrational decision-making as humans. They presented the algorithms with a set of cognitive tests widely used to evaluate human decision-making, such as the cognitive reflection test, a psychological test used to assess a person’s capacity to override incorrect “gut” responses and engage in further reflection. The results showed that “models of the ‘Da Vinci’ family (GPT-3 and GPT-3.5) displayed signs of bounded or heuristic reasoning, and generally under-performed human participants. On the other hand, more recent models (ChatGPT and GPT-4) displayed super-human performance” when compared to a sample of human participants, as reported in the paper.
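To give a feel for the kind of test involved, here is a small sketch of a classic cognitive reflection item and how one might score an answer. The `ask_model` and `parse_dollars` helpers are hypothetical placeholders for whatever LLM client and parsing you use; they are not APIs from the cited paper.

```python
# A classic cognitive reflection test (CRT) item: the intuitive answer ($0.10)
# is wrong; the reflective answer ($0.05) is correct.
crt_item = (
    "A bat and a ball cost $1.10 in total. "
    "The bat costs $1.00 more than the ball. "
    "How much does the ball cost? Answer with a number in dollars."
)

INTUITIVE_ANSWER = 0.10
CORRECT_ANSWER = 0.05


def score_response(answer: float) -> str:
    """Label a numeric answer as reflective, intuitive, or other."""
    if abs(answer - CORRECT_ANSWER) < 1e-9:
        return "reflective (correct)"
    if abs(answer - INTUITIVE_ANSWER) < 1e-9:
        return "intuitive (incorrect)"
    return "other (incorrect)"


# Hypothetical usage with an LLM client of your choice:
# response = ask_model(crt_item)
# print(score_response(parse_dollars(response)))
print(score_response(0.10))  # -> "intuitive (incorrect)"
```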

II- Human-AI interaction: the solution for debiased decision-making?

1- AI can help us overcome our decision biases

AI has been applied in many fields as a promising tool to help decision-makers make better choices. This technology can ground decisions in an objective interpretation of data from the environment and, therefore, might help the human decision-maker overcome the heuristics and biases at play in the decision process.

Let’s consider the medical field as a use case. Having to process a lot of information and perform several tasks each day — sometimes in parallel — doctors can get mentally exhausted, and their analysis can be biased. For instance, they can be subject to confirmation bias. This bias occurs when we select and interpret information in a way that confirms our beliefs.

This can lead doctors to misinterpret medical analyses and focus on elements that confirm their initial diagnosis. Several AI-based solutions have emerged to help with medical diagnosis. With image recognition, classification, and Natural Language Processing, to name a few, AI can analyze different medical data points in an attempt to provide a more "objective" diagnosis.

2- But AI can be biased itself

Although AI is often presented as an objective and efficient decision-maker capable of analyzing enormous amounts of data in a short time, things are not as perfect as they sound. In fact, artificial intelligence can be biased itself.

An AI model can be biased through the dataset used for its training, e.g., when the dataset is unbalanced across the classes it must predict. Consider a model designed to predict whether a customer will churn (stop doing business with the company): the model needs to predict a binary variable, 1 if the customer churns and 0 otherwise. A data scientist will typically train such a model on the history of data available for each customer at each time step, with a label indicating whether the customer churned on that day. By design, this training data contains far more zeros than ones: a customer is followed every day from their subscription date to their churn date, and every day except the churn day is flagged as zero. With so few ones to learn from, the model can barely perceive what churn behavior looks like; it will struggle to predict when churn will happen and might even never predict churn for anyone at any time, as the sketch below illustrates.
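Here is a quick sketch with pandas, on a made-up three-customer history, of why this daily-snapshot framing is imbalanced by construction: each customer contributes many "no churn" days and at most one "churn" day.

```python
import pandas as pd

# Hypothetical daily snapshots: one row per customer per day,
# churn = 1 only on the day the customer actually leaves.
history = pd.DataFrame({
    "customer_id": [1] * 200 + [2] * 365 + [3] * 90,
    "churn":       [0] * 199 + [1] + [0] * 365 + [0] * 89 + [1],
})

print(history["churn"].value_counts())
# Roughly 653 zeros for 2 ones: a model that always predicts "no churn"
# reaches ~99.7% accuracy while never detecting a single churner.
print(f"positive rate: {history['churn'].mean():.3%}")
```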

Just as solutions exist to help overcome human cognitive biases, there are good practices for overcoming AI biases. One way to debias the model in the example above is to balance the data, e.g., by keeping only a subset of the observations where the target variable equals 0 (undersampling) or by creating synthetic observations where the target equals 1 (oversampling), as sketched below.
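The following sketch shows both strategies using the imbalanced-learn library, which is one common option among several; the feature matrix is faked with scikit-learn here just to have something runnable, and would normally hold your per-customer, per-day features.

```python
from collections import Counter

from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification

# Fake an imbalanced dataset standing in for the churn snapshots (about 1% positives).
X, y = make_classification(n_samples=5000, weights=[0.99, 0.01], random_state=0)
print("original:", Counter(y))

# Option 1: drop most of the majority-class (no-churn) observations.
X_under, y_under = RandomUnderSampler(random_state=0).fit_resample(X, y)
print("undersampled:", Counter(y_under))

# Option 2: create synthetic minority-class (churn) observations with SMOTE.
X_over, y_over = SMOTE(random_state=0).fit_resample(X, y)
print("oversampled:", Counter(y_over))
```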

Bias can arise from other aspects as well: the model can be too complex, leading it to overfit the training dataset, meaning that it learns too many of its specificities, "noisy" aspects included, and cannot generalize to unobserved behaviors. Regularization is a family of techniques that helps prevent overfitting, for example by penalizing large weights in regression models or by randomly dropping (switching off) certain neurons during the training of a neural network.
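A minimal Keras sketch of both mechanisms just mentioned: an L2 penalty that discourages large, noise-fitting weights, and a dropout layer that randomly deactivates neurons during training. The architecture and hyperparameters are arbitrary choices for illustration, not a recommended setup.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),  # 20 made-up input features
    tf.keras.layers.Dense(
        32, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(1e-3),  # L2 weight penalty
    ),
    tf.keras.layers.Dropout(0.3),  # randomly switch off 30% of neurons each training step
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary output (e.g., churn / no churn)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```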

3- If managed improperly, AI might amplify our biases

One problem with data-based bias is that it can reflect human biases and stereotypes: the model learns to interpret the world as we perceive and document it. Natural Language Processing is the branch of machine learning dedicated to processing and interpreting written content. It relies on embeddings, i.e., words from a document are transformed into vectors the machine can interpret. Once presented with these vectors, the machine can learn relationships between different words and the contexts they appear in. For example, it can learn that Paris relates to France the way Tokyo relates to Japan. Word2vec is one popular algorithm that creates such embeddings. In this paper, researchers used it to embed a corpus of 3 million words from Google News. The results revealed gender-related job bias in the embedding, as shown in the figure below.

The most extreme jobs associated with "he" or "she" by the algorithm. Figure from the cited paper.
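As a hedged sketch of the kind of query involved, using gensim and its pre-trained Google News word2vec vectors (a large download on first use): the exact outputs depend on the model version, but the classic "Paris is to France as Tokyo is to ?" analogy and the gendered occupation analogies reported in the paper look roughly like this.

```python
import gensim.downloader as api

# Pre-trained word2vec vectors trained on the Google News corpus.
model = api.load("word2vec-google-news-300")

# Geographic analogy: Paris relates to France as Tokyo relates to ...?
print(model.most_similar(positive=["France", "Tokyo"], negative=["Paris"], topn=1))
# Expected (not guaranteed): [('Japan', ...)]

# Gendered occupation analogy of the kind reported in the paper:
# "man is to computer programmer as woman is to ...?"
print(model.most_similar(
    positive=["woman", "computer_programmer"], negative=["man"], topn=3,
))
# The paper reports 'homemaker' as the top answer for this query.
```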

In other words, the algorithm linked certain jobs to specific genders. Used as-is, such trained algorithms could further amplify gender disparities, for instance by surfacing unequal job offers to people looking for new opportunities. This paper and much of the literature propose techniques to handle human-based biases and stereotypes detected in the algorithms, such as identifying the variables (or dimensions, in the case of embeddings) that capture a bias like gender and eliminating them. Another technique mentioned in the paper cited above is to equalize the gender-specific dimensions, i.e., to adjust the embedding so that gender-neutral words end up equidistant from both genders. Such checks must be performed regularly, especially in sensitive fields, to avoid generating content that reinforces these stereotypes.
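Below is a simplified NumPy sketch of the "neutralize" idea from that line of work: estimate a gender direction from a word pair and remove its projection from words that should be gender-neutral. The vectors here are made up for illustration; the full method in the paper also includes an equalize step and estimates the bias subspace from many pairs, not just one.

```python
import numpy as np

def neutralize(word_vec: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    """Remove the component of a word vector that lies along the bias direction."""
    bias_direction = bias_direction / np.linalg.norm(bias_direction)
    projection = np.dot(word_vec, bias_direction) * bias_direction
    return word_vec - projection

# With real embeddings you would use, e.g., model["she"] - model["he"];
# here we use small toy vectors just to show the mechanics.
he = np.array([0.8, 0.1, 0.3])
she = np.array([0.2, 0.9, 0.3])
gender_direction = she - he

nurse = np.array([0.3, 0.8, 0.5])  # toy vector, leaning toward "she"
nurse_debiased = neutralize(nurse, gender_direction)

print(np.dot(nurse, gender_direction))           # nonzero: a gendered component exists
print(np.dot(nurse_debiased, gender_direction))  # ~0 after neutralization
```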

Another type of artificial intelligence that humans frequently interact with is the recommender system. As their name implies, recommender systems are algorithms that recommend new options to a user based on previously consumed content. For instance, on a music platform, specific titles from the enormous catalog of available tracks will be presented to me based on what I have been listening to and on my profile, and they will likely differ from the suggestions another user gets on the same platform. The sketch below gives a heavily simplified picture of the idea.
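This small item-based collaborative-filtering sketch with NumPy uses made-up listening counts: it computes cosine similarity between tracks and recommends unheard tracks similar to what a user already plays. Real recommender systems are far more elaborate, but the principle is comparable.

```python
import numpy as np

# Rows = users, columns = tracks; values = play counts (toy data).
plays = np.array([
    [5, 3, 0, 0, 1],   # user 0
    [4, 0, 0, 1, 0],   # user 1
    [0, 0, 4, 5, 0],   # user 2
    [0, 1, 5, 4, 2],   # user 3
], dtype=float)
tracks = ["track_A", "track_B", "track_C", "track_D", "track_E"]

# Cosine similarity between tracks (columns of the play matrix).
norms = np.linalg.norm(plays, axis=0, keepdims=True)
item_similarity = (plays.T @ plays) / (norms.T @ norms + 1e-9)

def recommend(user: int, top_n: int = 2) -> list[str]:
    """Score unheard tracks by similarity to the tracks the user already plays."""
    scores = item_similarity @ plays[user]
    scores[plays[user] > 0] = -np.inf          # don't re-recommend heard tracks
    best = np.argsort(scores)[::-1][:top_n]
    return [tracks[i] for i in best]

print(recommend(0))  # tracks similar to what user 0 listens to
print(recommend(2))  # likely a different list for user 2
```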

The same goes for social media: suggested content fits one's interests and, therefore, boosts one's engagement with the platform. However, being constantly exposed to content that fits one's interests raises a bias issue, as it can reinforce confirmation bias. As mentioned earlier, this bias occurs when we select and interpret information in a way that confirms our previous beliefs. The concern is that, as we consume content that fits our prior interests, recommender systems propose yet more content aligned with those interests; and even though the algorithm might occasionally suggest something random for the user to explore, the user may tend to ignore it and consume only the content that fits their prior beliefs, thereby reinforcing them. This effect is known as an echo chamber. Interestingly, the echo chamber effect is the object of fierce debate, as opinions vary on whether the research dedicated to this phenomenon is strong enough to confirm its existence, and on whether such behavior is due to the algorithms themselves or to the very nature of human decision-making.

This article concludes our series exploring the decision-making process. We hope you enjoyed reading it and that it helped you understand both how complex and fascinating decision-making can be, and how impactful decision science can be as well. If you want to read the other articles, please check the links below:

1- How does the context impact our choices?

2- Are you a rational human being? Let’s see what cognitive science says about it

3- Are you a risk taker? Long answer short: it depends!

4- Nudging: the mechanisms of influence

5- Why do people “harm the environment although they try to treat it well”? Sustainability analysis from a cognitive science perspective

Don’t hesitate to subscribe to our Tea O’Clock newsletter to keep up to date with the release of new articles and the latest market trends → the fifty-five newsletter.

Originally published at https://www.fifty-five.com.
