Visualization Research on Cognitive Biases: A Survival Guide for Engineers

Warning! If you are an expert in Behavioral/Cognitive Psychology or related fields, I strongly advise you to refrain from reading this article. Repeated exposure to an oversimplified decomposition of human complexity may cause discomfort, intense sweating, and tachycardia. Or at least, read on at your own risk.

Research on cognitive biases explores fascinating psychological effects in the way we think, judge, and remember. This research often underlines “mistakes” we make, called cognitive biases. For example, we tend to favor information that confirms what we already believe, ignoring contradictory evidence; this is known as “confirmation bias”. Hundreds of other cognitive biases are out there (see more examples in Table 2 here). Visualization research is also interested in how users think of, judge, and recall a visualized dataset. So, it seems plausible that cognitive biases can affect the way we analyze our data. To figure out if this is true (and to be able to fix it), we first need a deeper understanding of what these cognitive “mistakes” are about…

Confirmation Bias

What is this article about? This article is a cheat sheet. It will not walk you through all of the cognitive bias literature, but it will give you pointers to navigate the resources yourself, without the need to be an expert. There are a thousand resources that list and describe cognitive biases. However, it is not always clear how to apply what we read to data visualization problems. Often, instead of deeply understanding what these biases are about and how to detect them, we end up distracted by the psychology jargon and the large diversity of views and approaches. This article is a survival guide for anyone who wants to understand the cognitive bias literature through the lens of data visualization.

Who is the Engineer in the title? I use the term Engineer to include everyone who does not belong to the category of psychology experts I mentioned in the beginning. So, for this article at least, you do not need an engineering diploma to be an Engineer. You are an Engineer if you are seeking a more practical perspective on the topic. You might be a computer scientist, a visualization designer, or anyone with an amateur interest in the interplay between human rationality and visual analysis. You might be a Ph.D. student working on an interdisciplinary topic, or a practitioner who wants to create a visualization that detects or alleviates brain pitfalls. Or simply: if you are not sure what “construal”, “attribution”, “intrinsic incentive”, and “arousal” mean, keep reading.

Real vs. Measurable User Problems. To know if a medicine works, you need to test it on human beings. For that, you need quantifiable measurements (e.g., “The operation was successful, but the patient died.”). Visualization researchers usually search for a golden balance between designing for real-world data problems and addressing measurable data problems. Though we may envision users who can deeply understand and reason about their visualized datasets, as visualization researchers we are often restricted to tests that produce clearly interpretable, quantitative results, like whether the user is able to derive a numeric value from a visual variable (e.g., the color, size, or position of a mark). If instead we test a more complex data question (e.g., “Which visualization can help us identify the function of Alkaloid substances in living organisms?”), it may be harder to interpret our findings and apply them to a slightly different question or domain.
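
To make the contrast concrete, here is a minimal sketch of how such a low-level, measurable task is typically scored (the numbers and names are my own, purely illustrative):

```python
import numpy as np

# Hypothetical low-level task: participants report the value they believe
# a bar's height encodes. Each trial yields a clean, per-trial error score.
true_values = np.array([12.0, 47.0, 33.0, 8.0])     # values actually encoded
user_estimates = np.array([14.0, 45.0, 30.0, 9.0])  # values participants reported

absolute_error = np.abs(user_estimates - true_values)
print("Mean absolute error:", absolute_error.mean())

# A question like "Which visualization helps identify the function of
# Alkaloid substances?" offers no such clean, per-trial error measure.
```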

While there are very good reasons why we test more abstract tasks (see why in Brehmer & Munzner’s paper), this sometimes comes at the cost of overlooking deeper tendencies that people have when reasoning. On some level, too, when research focuses on clear-cut measures of reading values from charts, we are missing out on some fun: the fun of teasing out the inner workings of human reasoning. Or simply, the fun of answering a million-dollar question: “Once our eyes successfully decode the information behind visual variables, what does our brain do with it?”

The fun part. Cognitive bias research can be seen by visualization researchers and designers as a sweet balance between real and measurable human reasoning. At first glance, cognitive biases come with a number of real-world challenges. In many everyday settings, people tend to make faulty judgments influenced by prior beliefs and stereotypes, stressed by uncertainty, or blinded by self-oriented viewpoints (for an Engineer-friendly description of biases, see Dimara et al.). Factors such as the order in which we see things, the baselines we use as reference points, and the way we retrieve our memories and predict the future can often mislead us. Everyone can be subject to bias: the educated, the intelligent, even math professionals. At second glance, the cognitive biases discussed in the research literature also come as a package of rather mundane, highly stylized tasks. People in such studies usually have to do something simple, e.g., choose between a risky, high-profit investment and a less risky, but less profitable one.
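
To get a feel for how stylized these tasks are, here is a made-up version of that investment choice (all numbers are mine, purely illustrative):

```python
# A made-up risky-choice task in the classic style (numbers are illustrative).
risky = {"p_win": 0.5, "payoff": 1000}  # 50% chance to gain $1000, else nothing
safe = {"p_win": 1.0, "payoff": 450}    # a guaranteed $450

ev_risky = risky["p_win"] * risky["payoff"]  # expected value: $500
ev_safe = safe["p_win"] * safe["payoff"]     # expected value: $450

# Many participants still pick the safe option, although the risky one has
# the higher expected value; studies describe such patterns as risk aversion.
print(f"EV risky: {ev_risky}, EV safe: {ev_safe}")
```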

When the Engineer reads a pop-science account of cognitive biases (like the one you are reading right now), they may seem both fun and widely applicable. However, a pop-science explanation is somewhat generic: not enough to deeply understand what the underlying tasks are about. So, our Engineer searches for the original source: the academic papers that empirically observed a cognitive bias. And here comes the horror.

The horror. Have you heard the story of Odysseus’ 10-year journey from Troy to Ithaca? On the way, Odysseus and his men encounter the Sirens, mythical temptresses whose voices lure men to their death by making them forget their destination. This is what happens to an Engineer first dealing with the cognitive bias literature! Each cognitive bias paper is a lonely island: too bias-specific, with no consistent terminology and no cross-references to other biases. This becomes even worse for cognitive biases observed across different domains (e.g., social psychology vs. marketing research).

And then, there is a paradox. On the one hand, most researchers admit that no unified explanatory theory exists of why biases occur. On the other hand, the “why” elaboration of each paper is of disproportionate length. The speculation in these sections can be like the Sirens that lured Odysseus and his men: it seems interesting, but it is also likely to sidetrack you. Worse, if you take a courageous step back and try to question some of these theories, such speculation turns out to be a trap: almost impossible to falsify! For example, an unfalsifiable theory can sound like this: “Human decisions are irrational because we are stressed by the tiny, angry unicorns that live in our brain’s frontal cerebrum. Unfortunately, these unicorns are invisible and not detectable by any kind of scientific equipment or methodology.”

The Unicorn Theoretical Framework that Explains Cognitive Biases

How to deal with the cognitive bias literature (if you are an Engineer).

1. Start with the Engineer’s Mantra. Repeat in your head: “A cognitive bias is a problem and my job is to fix it.” This is your destination, no matter how distracting the Sirens’ chanting is along the journey.

2. Skip The Rationality Debate. A great part of the cognitive bias literature focuses on the question “Are humans truly irrational?”. Some researchers saw cognitive biases as proof of faulty information processing (Tversky & Kahneman 1974, Gilovich et al. 2002), a view that resulted in a Nobel prize and numerous applications (e.g., in economics, medicine, law, and political science). Other researchers argue that cognitive biases are reasonable adaptive mechanisms that optimize our cognitive resources (Gigerenzer 1991, ’96, ’04, ’08), a rigorously documented framework by Gigerenzer and colleagues stating that most bias tasks are likely artificial. Assuming that rationality can often be context-dependent, we can translate this debate into Engineering language like this: In your system (or domain), is the human error an acceptable approximation of an OK response? And are users OK with making it? If yes to both, it works; no need to touch it. If not, we might need a tool to help with that. Don’t forget the Mantra. What you read as unfortunate “human limited resources” is the Engineer’s piece of candy: an invitation to build a magic external reasoning tool that can amplify our cognition.

3. Skip The “Why” Theory. A “why” theory is useful when it helps you understand which cognitive strategies people used to solve the bias task (read Jessica Hullman’s post on how important it is for visualization research to try to figure out such “why”s). A “why” theory is also useful if it proposes a comprehensive explanation that can predict the outcome of the bias experiment (see Steve Haroz’s post on how important it is to avoid unjustifiable predictions). If the “why” theory you are reading helps you with neither the first nor the second, feel free to skip it. It likely didn’t help the authors much either! You may also want to think about how the “why” theory could be proved wrong. If that feels impossible (i.e., lots of invisible unicorns), feel free to skip it too.

I should confess, though, that I personally read all the things I advised you to skip: philosophical debates on human rationality and incomprehensible psychology explanations of why biases occur. Not because they have proved particularly useful in my research, but because I secretly enjoy them. There is no hope for me, so run and save yourselves!

4. Do NOT skip the Study. When I read articles about cognitive biases written by Engineers, I sometimes feel that the authors were so overwhelmed by steps 2 and 3 that they never made it to the experimental section of the original paper. So while the Engineer article extensively quotes the Debate and the “Why” Theory, its empirical section usually strays far from the actual bias description and its original experiment design. This often results in noisy findings and difficulty detecting the bias in the new context.

To avoid this trap, devote 100% of your attention to the experiment section. All details are of great importance here: what EXACTLY people were told to do, what they saw, how many times they saw it, and what their background knowledge of the task was. The better the paper, the easier it is for you to extract such information. What I observe in many psychology papers is that while they document the procedure (e.g., what participants did and were told), they might not specify exactly what participants saw (e.g., what the question format actually looked like). Yet this presentation format is precisely what a visualization researcher most needs to understand.
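
One trick that works for me: try to reconstruct the study as a data structure. If you cannot fill in a field from the paper alone, that detail was not reported. A hypothetical sketch (the field names and example values are mine):

```python
from dataclasses import dataclass

@dataclass
class BiasStudyTrial:
    instructions: str  # what EXACTLY participants were told to do
    stimulus: str      # what they saw, in its exact wording
    presentation: str  # how it looked: text, table, order of options...
    repetitions: int   # how many times they saw it
    background: str    # what prior knowledge of the task was assumed

# Example filled in for a classic framing-style question (illustrative only).
trial = BiasStudyTrial(
    instructions="Choose the program you would fund.",
    stimulus="Program A saves 200 of 600 people; Program B ...",
    presentation="plain text, two options, fixed order",
    repetitions=1,
    background="no medical training assumed",
)
print(trial.presentation)
```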

5. Any data involved? After understanding the study, the first question you may want to ask is: Were there any data involved? Or could there be? Some bias experiments involve some sort of quantifiable information given to the participants; others do not. The first kind is likely easier for you to start with. Keep in mind that identifying how a bias may be relevant to data visualization sometimes requires imagination. For example, there is a bias called the “identifiable victim effect”, where people choose to help a single identified person in need rather than larger numbers of abstractly described people. Inspired by this bias, visualization research has explored whether anthropomorphized data icons can increase a user’s empathy in donating (Boy et al. 2017). A second question to ask is: Can you think of a way to extend these data to … more data? Most bias experiments refer to only two or three numeric alternatives. This does not necessarily mean the bias will not occur with more data; more likely, it simply has not been tested, because these studies do not use visualizations that could display more data (see an example of how to extend a 3-datapoint cognitive bias to many data points in Dimara et al. 2016).
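
As a toy illustration of that “extend to more data” question, here is one hypothetical way to embed a classic 3-alternative decoy task into a dataset large enough to need a scatterplot (my own made-up generator, not the materials of any study):

```python
import random

random.seed(0)

# The classic task: two competitors plus a "decoy" dominated by the target
# (here, lower price is better and higher quality is better).
target = {"price": 40, "quality": 80}
competitor = {"price": 80, "quality": 40}
decoy = {"price": 45, "quality": 70}  # worse than the target on both attributes

# Hypothetical extension: hide the same three points in a cloud of fillers,
# so the task now involves enough data to require a visualization.
fillers = [{"price": random.uniform(20, 100), "quality": random.uniform(20, 100)}
           for _ in range(97)]
dataset = [target, competitor, decoy] + fillers

print(len(dataset), "alternatives, plottable as a price/quality scatterplot")
```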

6. Play it safe. In order to fix a cognitive bias, you first need to be able to observe it. To observe it, you need to apply the previously observed human tendency to a visual analysis scenario that makes sense to you. It may be safer to inject such a scenario while keeping the other elements of the original study as untouched as possible, leaving the visualization as the only manipulated variable to be tested.
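
A minimal sketch of what “one manipulated variable” can look like in practice (the conditions and protocol fields are hypothetical):

```python
# Everything from the original study stays fixed; only the presentation
# (the independent variable) varies between conditions.
FIXED_PROTOCOL = {
    "instructions": "original wording, verbatim",
    "data": "the original numeric alternatives",
    "repetitions": "same as in the original study",
}
CONDITIONS = ["original text format", "bar chart", "scatterplot"]  # hypothetical

def assign_condition(participant_id: int) -> str:
    # A simple rotation keeps group sizes balanced across conditions.
    return CONDITIONS[participant_id % len(CONDITIONS)]

print(assign_condition(7))  # -> "bar chart"
```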

7. Improve Reliability. Some cognitive bias papers (especially older ones) might not use up-to-date statistical methods. No need to stick to those; feel welcome to use more reliable methods (a helpful resource for transparent statistics here).
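
For instance, instead of leaning on a lone p-value from a decades-old analysis, you can report effect sizes with confidence intervals. A minimal bootstrap sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up per-participant bias scores under two conditions.
control = rng.normal(0.30, 0.15, 40)  # e.g., bias magnitude with plain text
treated = rng.normal(0.18, 0.15, 40)  # e.g., bias magnitude with a visualization

# Bootstrap a 95% confidence interval for the difference in means.
diffs = [rng.choice(treated, 40).mean() - rng.choice(control, 40).mean()
         for _ in range(10_000)]
low, high = np.percentile(diffs, [2.5, 97.5])

print(f"Mean difference: {treated.mean() - control.mean():.3f}, "
      f"95% CI [{low:.3f}, {high:.3f}]")
```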

Fixing the cognitive bias

Now let’s assume for a moment that you managed to reliably detect a human bias in a data visualization scenario.

8. How to fix the bias. According to the Mantra, fixing the bias is the destination of your journey. However, when it comes to bias mitigation, most current cognitive bias research will leave you on your own. Human cognitive biases seem particularly resistant, and empirical research has produced very little evidence that things can get much better. Some existing debiasing attempts focus on education (e.g., training people with statistics courses or video game simulations). Other attempts try to replace humans with automated systems (e.g., actuarial methods in medicine). Others try to change the environment of the human (e.g., by adding a checklist in a hospital). Researchers are still debating: some claim that you simply can’t debias people; others are more positive, suggesting some strategies might work (e.g., differential diagnosis for doctors); and others are in between, saying that even if you can’t prevent a cognitive bias, you can at least recognize it and likely act upon it. At the very least, the jury appears to still be out on whether and how people’s tendencies toward cognitive biases can effectively be changed.

Also keep in mind that, unlike what a visualization researcher might expect, both opponents in the rationality debate, Kahneman and Gigerenzer, emphasize that more information does not necessarily yield better decisions. Therefore, visualization interventions that mainly expose users to more data (regardless of how effectively these data are presented) might not be the best way to go.

Despite these challenges, there are good reasons why an Engineer shouldn’t give up: visualization designers have more rabbits in their hat. First, they can reframe a question by visually altering the way information is presented to the user. Second, and most importantly, they can reframe the question by actively altering the way users interact with the visual information (see an example of debiasing with interaction in Dimara et al. 2019).

The Engineer in the room. Now, if you are an Engineer who sensed a touch of sarcasm toward engineers’ utilitarian mentality, I have something to confess. Yep! I am an Engineer myself. Now that you know that every line of this blog post is based on my own genuine pain dealing with the cognitive bias literature, perhaps you can forgive my puns? If, on the other hand, you are a Psychologist who sensed a subtle questioning of psychology theories, don’t get me wrong. What do I know, after all? I am just an Engineer…


Evanthia Dimara
Multiple Views: Visualization Research Explained

I am a research scientist in Information Visualization. I focus on decision making — how to help people make unbiased and informed decisions alone or in groups.