A picture may be worth a thousand words, but words frame a picture

TLDR: Visualization titles influence how people interpret, perceive bias in, and trust data visualizations. Our online studies show that the slant of a visualization title influenced what people recalled as the main message of a visualization. Yet a majority of the participants rated a visualization as unbiased despite the slant in its title. People were more likely to perceive information as biased when the title was inconsistent with their existing beliefs, a sign of confirmation bias. In light of these results, we need to increase our awareness of the manipulative power of visualization titles and the roles played by textual elements in the spread of visual misinformation.

Visualization experts have warned about the potential for visualizations to mislead. Examples include heatmaps with a rainbow colormap, bar graphs with a truncated y-axis, poorly designed pie charts, and the list goes on. When you encounter a visualization, you may quickly scan for visual signs of honesty.

Does the y-axis start at 0? ✓

Is the color scale for a quantitative variable monochromatic? ✓

Do the pie chart percentages add up to a hundred? ✓
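
As a rough illustration (and not part of the studies discussed below), that quick scan could be codified as a small checker over a hypothetical `chart_spec` dictionary:

```python
# A minimal sketch of the "quick scan" above. chart_spec is a hypothetical
# description of a chart; the field names are illustrative assumptions.

def looks_undistorted(chart_spec: dict) -> bool:
    """Return True if the chart passes the three quick visual checks."""
    checks = [
        # Does the y-axis start at 0?
        chart_spec.get("y_axis_min", 0) == 0,
        # Is the color scale for a quantitative variable monochromatic?
        chart_spec.get("color_scale") in ("monochromatic", None),
        # Do the pie chart percentages add up to 100?
        abs(sum(chart_spec.get("pie_percentages", [100])) - 100) < 1e-6,
    ]
    return all(checks)

# Example: a bar chart whose y-axis starts at 40 fails the scan.
print(looks_undistorted({"y_axis_min": 40, "color_scale": "monochromatic"}))  # False
```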

After checking off each item on the list, you may feel more assured that the visualization is encoding the data without distortion. But another important aspect of a visualization that we might sometimes overlook is the textual elements surrounding the visualization.

When people look at a visualization, they often spend the most time looking at the title. However, news headlines sometimes selectively present the data or even contradict the visualization they accompany. In the following visualization, the headline emphasizes one particular trend, the spike in recent months, while ignoring the earlier data present in the chart.

https://www.nytimes.com/2019/03/05/us/border-crossing-increase.html

The headline in the physical copy of the article was even more extreme: "Record Numbers of Migrants Are Crossing Into U.S. Deluging Agents." It shocked some readers (who happened to be visualization researchers) before they examined the associated chart. Other readers may not be cautious enough to check the associated chart; they may simply not notice the discrepancy between the headline and the chart. For these casual readers, can the textual components of a visualization mislead their interpretation just as visual distortions do?

Understanding the Influence of Visualization Titles

We evaluated how visualization titles influence people’s interpretation and trust of information through a series of controlled studies in which we showed people the same graph with differently slanted titles.

Slanted titles favor one side of the visual story by emphasizing only part of the data. For example, the following chart displays the U.S. military budget as a percentage of GDP (the blue line) and in constant dollars (the orange line).

Here are three (out of many) potential titles for this visualization:

Original published title: “Historical Defense Spending”

Slanted title A: “Defense budget on a steady decrease as a percentage of GDP over the past 50 years”

Slanted title B: “Defense budget on an increase in constant dollars heading towards $500 billion by 2019”

The original title provides a neutral and general description of the chart. In contrast, slanted title A emphasizes the decreasing trend. Although not directly stated, title A hints at the need to increase the military budget to account for the decrease. Slanted title B emphasizes the other side of the story by mentioning the increase in constant dollars. The implication may be that it’s time to slow down the budget increase. Thus, these slanted titles subtly prompt the readers to take a particular stance on the controversial topic of whether the U.S. military budget should be increased.
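
To make the setup concrete, here is a minimal sketch, assuming matplotlib and placeholder budget values rather than the published figures, of how the same two-series chart could be rendered under each of the three titles. The data is identical in every image; only the title string changes.

```python
# Illustrative only: placeholder numbers stand in for the actual defense-budget data.
import matplotlib.pyplot as plt

years = [1970, 1980, 1990, 2000, 2010, 2019]
pct_of_gdp = [8.1, 6.0, 5.2, 3.0, 4.7, 3.2]        # placeholder: % of GDP (blue line)
constant_dollars = [300, 380, 420, 390, 520, 500]  # placeholder: billions of 2015 dollars (orange line)

titles = [
    "Historical Defense Spending",
    "Defense budget on a steady decrease as a percentage of GDP over the past 50 years",
    "Defense budget on an increase in constant dollars heading towards $500 billion by 2019",
]

for i, title in enumerate(titles):
    fig, ax_left = plt.subplots(figsize=(8, 4))
    ax_left.plot(years, pct_of_gdp, color="tab:blue")
    ax_left.set_ylabel("% of GDP", color="tab:blue")

    ax_right = ax_left.twinx()
    ax_right.plot(years, constant_dollars, color="tab:orange")
    ax_right.set_ylabel("Constant 2015 dollars ($B)", color="tab:orange")

    ax_left.set_title(title, wrap=True)           # only the framing changes
    fig.savefig(f"defense_budget_title_{i}.png")  # identical data in every image
    plt.close(fig)
```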

So let’s say Alice, a casual reader, sees the visualization with a title that states that the budget has been decreasing as a percentage of GDP. How would this influence her interpretation of the visualization? Would her interpretation change if she saw a different title that emphasized the increase in constant dollars? Would she think the information is biased or neutral?

To answer these questions, we conducted controlled studies where

  • We asked people for their attitudes on 6 controversial topics.
  • We showed people data visualizations covering two of the topics (the acceptance of Syrian refugees by different countries, and the U.S. military budget over the years). We designed the study to make sure each participant spent sufficient time reading the visualization and saw titles with different slants on the topic.
  • We asked people to recall the main message of the visualization and to rate the neutrality of the information.

Finding: People recall the message in the title as the main message of the visualization.

According to our results, the contents of the title influenced what people perceived as the main message of the visualization. When asked to recall the main message, people who saw the decreasing title wrote things like:

“We need to spend more”

“Our per capita spending on military is much less than in prior decades.”

“Percent of defense spending as opposed to GDP since the Korean War”

These answers reflected the title’s focus on the “decrease” and “a percentage of GDP.” On the other hand, people who saw the increasing title were more likely to write messages such as:

“How much was spent in 2015 $ on each war.”

“We are spending a LOT more now that Trump is president”

“Military spending has grossly increased.”

“that the budget of the US for military has been steadily going up”

Note that the answers now mention the “increase” of the budget in constant “2015” dollars over the years.

So while people saw the same visualization, some stated that the military budget had increased and others that it decreased, depending on the title of the visualization they saw. And that is the power of slants in titles. The slant led people to recall totally different messages from the same visualization by emphasizing different sections of the visualization.

Finding: People reported visualizations as neutral despite their slanted title.

Were people aware of this influence of titles? The chances were low: at the end of the study, 42% of the participants wrote that they could not remember the title at all. One even said:

the title was not important enough to save to memory. [The] graph [is] more important.

But that same person quoted the title almost word for word when asked to write the main message of the visualization. So although titles played a powerful role in visualization interpretation, the common impression was that the visual component was the important part of a visualization; it’s in the name “visualization,” after all. This focus on the visual components also left people unaware of the potential bias introduced by the title.

People tend to view data as objective fact. This trust in the data, and the perception that it was impartial, didn’t appear to be affected by the slant in the title. The top reason people gave for rating the information as neutral was that it was presenting “facts and statistics.” Sure, the title selectively highlights one trend in the visualization. But if that trend is still supported by the data, isn’t it still objective?

“It’s fact. How can it be biased.”

“just shows the facts with no real commentary either way.”

“Number[s] do not lie, the graph is what it is.”

“The information is not telling us what to do, or what to think. It is just listing facts.”

Another reason people gave for not rating the information as biased was that they did not have enough prior knowledge of the topic to judge.

Finding: Perception of information neutrality persists even when the title contradicts the visualization.

To examine how strong people’s trust in data visualization was, we modified the visualization to make the slant in the title more obvious and surveyed 200 people with the modified visualizations.

Now, the data associated with the title message (i.e., the increase in constant dollars) was completely removed from the chart, making the title a baseless claim.

Surely, we thought, people would begin to notice the discrepancy and ask, “Hey, the title is unsupported by the data! What’s going on?” Surprisingly, few people did. 72% of the people we surveyed still rated the information as neutral even though the message of the title flat-out contradicted the message in the visualization.

Finding: People are more likely to notice bias in visualization titles that don’t align with their existing attitudes.

Although the people in our studies were generally unaware of the bias in the information, they were more likely to call out bias when the title did not align with their existing attitudes, suggesting confirmation bias. Confirmation bias occurs when a person interprets information in a way that is partial to their existing beliefs.

Suppose Alice is against increasing the U.S. military budget. According to the theory of confirmation bias, she is more likely to accept information that opposes any increase in the budget (i.e., attitude-consistent information) at face value, and more likely to find faults or bias in the information that supports an increase in the budget (i.e., attitude-inconsistent information).

So if Alice sees an attitude-inconsistent title that states that the budget has been decreasing, would the mismatch between her existing attitude and the title affect her trust and interpretation of the information? Is she more likely to think that the information is neutral if the title is consistent with her existing attitude? Is her interpretation less influenced by the title if the title is attitude-inconsistent?

We found that the title had a strong influence regardless of whether it supported or opposed people’s prior attitudes. In other words, people still recalled the title message as the main message of the visualization even if the title was attitude-inconsistent. However, people were more likely to report the information as biased if the title was attitude-inconsistent. While only 3% of the people who saw an attitude-consistent title reported the information as “Very Biased,” 12% of the people who saw an attitude-inconsistent title reported it as “Very Biased,” showing a potential confirmation bias in visualization interpretation.

One thing to note is that this confirmation bias seems to be based on the title and not the visualization. In other words, people are more likely to dismiss the information as less credible when they disagree with the text than when they disagree with the visualization. Perhaps people see text as something that can be biased and manipulated, while they see visualizations as objective truth that they must accept even when it is inconsistent with their beliefs. We also saw that the perceived credibility of the data, visualization, and title decreased when the visualization and the title were misaligned. So, fortunately, people do seem to pick up on potential signs of misinformation when those signs are obvious.

Implications

Why does all this matter? Because we have to understand visual misinformation in order to prevent it. More people are sharing visual information online than ever, and our results show that slanting the text is a powerful way of introducing bias without people necessarily noticing. The slant could be introduced by the original creator of the visualization, or by people who add their personal judgments as they share it. And all of this can ultimately lead to the spread of visual misinformation that many people still interpret as neutral.

These results also call for more attention to the often-overlooked textual elements of visualization by providing an example of how the combination of visual and textual components affects people’s interpretation and trust of information. Other types of visual/textual and visual/audio interplay are starting to be explored, and we look forward to seeing more research on the interplay of different modalities of information.

This post is based on our papers at ACM CHI 2018 (“Frames and Slants in Titles of Visualizations on Controversial Topics”) and ACM CHI 2019 (“Trust and Recall of Information across Varying Degrees of Title-Visualization Misalignment”) by Hidy Kong, Leo Zhicheng Liu, and Karrie Karahalios. If you’re curious about the study details, data, and other materials in this work, please visit our project website.


Hidy Kong
Multiple Views: Visualization Research Explained

Assistant Professor, Computer Science at Seattle University. Data Visualization, Assistive Technology, Healthcare — http://www.hidykong.com/