The Curious Absence of Uncertainty from Many (Most?) Visualizations

All I want for Christmas is some error with those estimates

Jessica Hullman
Dec 23, 2019 · 10 min read

Consider the last few visualizations you encountered outside of scientific publications, whether in a news article backed by data, in researching a product, or in using an application to plan a trip. Did the visualizations depict uncertainty: the possibility that the observed data or model predictions could take on a range of possible values?

A chart from the U.S. Fed’s 2019 report implies that the public holds exactly 77.64% of the U.S. nominal GDP (which is itself an enormous number typically reported without error). When uncertainty is withheld, visualizations imply unrealistic precision that viewers may or may not question.

Everybody’s talking about it, nobody’s doing it

Chances are, they did not. Out of curiosity, I gathered data visualizations from nearly all articles published online in February 2019 by leading purveyors of data journalism, social science surveys, and economic estimates. Of 612 visualizations, the majority (73%) presented data intended for inference, such as extrapolating to unseen or future data rather than simply summarizing exactly what happened in the past. Yet only 14 (3%!) portrayed uncertainty visually. And that number includes visualizing raw data to convey variance, in addition to more conventional representations like intervals (error bars) or probability densities.

I wondered. Uncertainty communication is an increasingly hot topic in research and practice. As a visualization researcher, I’m well aware of the many techniques that have been proposed for representing uncertainty. There are the standard, STATS 101 class representations of probability via intervals (confidence, standard deviation, standard error) or plotted density functions (summarized here). Then there are newer techniques that apply a frequency framing (e.g., 3 out of 10 rather than 30%), which psychologists like Gerd Gigerenzer have shown can improve Bayesian reasoning, to visualizations. Some of these, like hypothetical outcome plots (HOPs), can be applied to pretty much any visualization so long as you can generate draws from the distribution you want to show, leaving little room for excuses for omitting uncertainty from already complex visualizations.
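To see why HOPs generalize so easily, here is a minimal sketch (the distribution and its parameters are invented for illustration): all a HOP needs is a way to sample from the distribution you want to show, and each draw becomes one frame of an animation.

```python
import random

def hop_frames(sampler, n_frames=20, seed=None):
    """Data for a hypothetical outcome plot (HOP): one draw from the
    distribution of interest per animation frame."""
    rng = random.Random(seed)
    return [sampler(rng) for _ in range(n_frames)]

# Hypothetical example: an effect estimate distributed Normal(mean=5, sd=2).
# (These parameters are made up for illustration.)
frames = hop_frames(lambda rng: rng.gauss(5, 2), n_frames=20, seed=1)

# Rendering each frame as a single bar or dot and playing the frames in
# sequence turns probability into experienced frequency, the same idea
# as "3 out of 10" framing.
```

The renderer is whatever charting tool you already use; the only new requirement is the sampler.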

So I began talking to visualization authors to find out more. I wanted to know how they thought about the role of uncertainty or error information in presenting estimates, whether and how they presented it, where they struggled to calculate or convey it, and how they perceived norms around uncertainty in data visualization. First I surveyed about 90 professionals — from journalism, industry, research and education, and the public sector — who create visualizations as a regular part of their job.

I also did more in-depth interviews with some really smart people: 13 visualization designers, graphic editors, and general influencers in the world of data visualization whose work I personally respect.

Contradictions between design aspirations and practice

What did I find out? That authors face challenges in communicating, in addition to calculating and visualizing, uncertainty; that they sometimes choose visual encodings that convey “fuzziness” to represent uncertainty (circular area, for example, which is hard for people to read accurately); and that they worry about confusing their audiences with uncertainty, or worse. For more details like these, check out the IEEE VIS 2019 paper.

What I want to discuss here is one particular tension that I observed again and again in responses. That tension consists of two facts:

  1. Most authors responded positively when asked about the value of conveying uncertainty in visualizations. They understood that uncertainty was beneficial to viewers for making decisions and calibrating their confidence. Some described conveying it as an author’s responsibility.
  2. Most authors confessed to NOT including uncertainty in the majority of the visualizations they create.

Omitting uncertainty is a norm. Here’s why that might be.

If most authors are positive about uncertainty, and agree it should be visualized more often with data, why aren’t they doing it? I listened closely to what each author said, as they described what gets in the way of conveying uncertainty, why they think others might not be doing it, and how they perceive their audiences and their own responsibilities as data mediators. Through their responses, I glimpsed a system of beliefs behind omission.

Premise:

Tenet 1: A visualization exists to convey a message, or signal.

According to many of the authors I surveyed and interviewed, the function of a visualization intended for other people to consume is to convey some “message” or “signal”. As one graphics editor described, the goal of their team was to focus on “one key takeaway”, with a few sub-messages possibly folded in. Multiple others talked about using the design process to tune the visibility of signal to viewers.

The way that authors talked about the message or signal implied that it had a truth value that could be objectively defined. Where does this truth value come from, I wondered?

Tenet 2: The validity of the signal is established by the author’s process.

When asked how they knew a signal was valid, authors typically alluded to their process, saying things like “well, we vet things.” In a few cases they referred to hypothesis testing, but more often alluded to exploratory data analysis and discovery through graphing and inspecting the data.

And not only did authors trust this process, but many implied that their viewers did as well — both when I asked how audiences knew the message in a graph was valid, and when I asked whether they had seen audiences ask for uncertainty communication in data scientists’ presentations.

Tenet 3: Uncertainty is separate from the signal, and can threaten it.

Many interviewees and survey respondents suggested that uncertainty was separate from the message or signal of a visualization. Because it was separate, uncertainty was capable of challenging the signal, of obfuscating it. Uncertainty could compete with the signal for the viewer’s attention, or confuse them. If error was too large, it could threaten the validity of the signal statistically. Finally, given the current norm of omission, uncertainty could signal a lack of credibility: both a survey respondent and an interviewee described how adding uncertainty and other methodological information could undermine how their work came across to viewers.

Escaping a bad equilibrium

Summarizing what I found as a small set of model tenets was somewhat satisfying: it provided a concise, if initial, attempt at describing a problem that I believe more researchers (in computer science, economics, psychology, …) should be thinking about. But my inner logician remained unsatisfied. I started down this rabbit hole because I sensed that visualization techniques alone would not solve the problem of “incredible certitude,” as economist Chuck Manski has called the absence of uncertainty from many government reports. Given the pervasiveness of beliefs that support a norm of uncertainty omission, what hope do we have of overcoming the “bad equilibrium” we’re in? I needed to explore avenues for escape, if only for my own peace of mind. Specifically, I began looking for reasons why visualizing uncertainty was still the right answer, even in light of a norm driven by authors’ pragmatism.

Solution: Formalizing how graphs communicate

The authors I talked to seemed clear on one thing — visualizations are meant to convey messages or “signal.” But can we formalize exactly what signal means, and how it is communicated in a visualization, so that we can consider uncertainty’s role in this process more rigorously?

Visualization as test statistic

Luckily, some statisticians have thought about this a bit. Andrew Gelman, Andreas Buja, Dianne Cook, and others have proposed that when a person examines a visualization looking for patterns, they are performing the visual equivalent of a hypothesis test, or more generally a “model check”.

The nice thing about this analogy is that it can guide how we think about the mechanics of visualization judgments, both under uncertainty visualization and without it. After all, a model check has a specific definition in statistics. For example, if the viewer is engaging in a model check akin to a visual hypothesis test, we need a null hypothesis specifying what the absence of a pattern looks like, a set of assumptions about how the data was generated, a test statistic summarizing how extreme the data are in light of the null hypothesis, and a criterion for judging that extremeness (like the conventional p-value less than 0.05 threshold in Frequentist statistics).
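As a rough numerical analogue (a sketch of the analogy, not a method proposed by these authors), each ingredient of the visual hypothesis test can be made explicit in code: a null model of “no pattern”, a test statistic, and a criterion for extremeness. The data and null model below are invented.

```python
import random

def model_check(data, test_stat, null_sampler, n_null=1000, seed=0):
    """Compare a test statistic on the observed data against draws
    from a null model; return the statistic and a p-value."""
    rng = random.Random(seed)
    observed = test_stat(data)
    null_stats = [test_stat(null_sampler(rng)) for _ in range(n_null)]
    # p-value: fraction of null datasets at least as extreme as observed
    p = sum(s >= observed for s in null_stats) / n_null
    return observed, p

# Invented example: is the spread of these five values larger than
# Normal(0, 1) noise (the null hypothesis, "no pattern") would produce?
data = [2.1, -1.8, 2.5, -2.2, 1.9]
spread = lambda xs: max(xs) - min(xs)  # the test statistic
obs, p = model_check(
    data, spread,
    null_sampler=lambda rng: [rng.gauss(0, 1) for _ in range(5)],
)
# A viewer's internal sense of "enough pattern" plays the role of p < 0.05.
```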

Imagine an author presents a chart titled “Ireland’s debt grows quickly”, implying that the change in debt-to-GDP ratio for Ireland is different from that of the other countries shown. We can think of the visualized data as a summary from which we can judge the data’s extremeness in light of some model’s predictions.

Now think for a moment about the hidden mental process the viewer might go through when judging the data in the chart. A model of statistical graphical inference proposes that the viewer mentally compares the visualized data to an imagined reference distribution representing the predictions of some model¹. But what model, fit to what data?

Here, we’re on our own, as no statistician can tell us exactly what magic happens when all of the prior experiences and chart reading habits of a person collide with a visual representation of data that they consume. And yet decades of research in visualization, not to mention cognitive psychology, have implied some commonalities in what people conclude from graphs. It seems safe to assume that at least among some viewers, graphical inference processes will be similar.

We might guess, for example, that a viewer imagines taking draws from some sort of linear model fit to all of the other lines besides Ireland in the chart, kind of like a simulated cross validation. In imagining these simulated trends, which you can picture as a kind of envelope surrounding the lines other than Ireland, the viewer compares them to the trend for Ireland, noting the discrepancy in slopes. Just like a conventional hypothesis testing scenario involves computing a p-value and applying a threshold to it to judge whether a difference qualifies as statistically significant or not, a visualization viewer might apply a mental discrepancy function with an associated internal threshold on how much “pattern” is enough to conclude a difference exists.
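That imagined comparison can be caricatured in code. All numbers below are invented, and the “model” (a least-squares line per country plus Gaussian noise) is only one guess at what a viewer might mentally fit.

```python
import random

def fit_slope(xs, ys):
    """Ordinary least-squares slope of one trend line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Hypothetical debt-to-GDP ratios over five years (all numbers invented)
years = [0, 1, 2, 3, 4]
others = {
    "A": [40, 41, 42, 44, 45],
    "B": [55, 55, 57, 58, 60],
    "C": [30, 31, 31, 33, 34],
}
ireland = [25, 32, 41, 52, 66]

# The imagined "envelope": slopes the other countries' trends could
# plausibly produce (the noise sd of 1 is an arbitrary assumption)
rng = random.Random(0)
sim_slopes = []
for _ in range(1000):
    base = others[rng.choice(list(others))]
    sim_slopes.append(fit_slope(years, [y + rng.gauss(0, 1) for y in base]))

irish_slope = fit_slope(years, ireland)
# Mental discrepancy function: how often does the imagined model
# produce a slope as steep as Ireland's?
p_like = sum(s >= irish_slope for s in sim_slopes) / len(sim_slopes)
```

In this caricature, Ireland’s slope falls far outside the imagined envelope, so the viewer concludes a difference exists.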

Uncertainty visualization can help authors communicate a message

Why is this analogy to model checks or hypothesis testing useful? A more principled notion of the form a visual judgment takes allows us to more rigorously consider how changes to how a visualization is conveyed (including whether it depicts uncertainty) result in different interpretations. The question that, if answered, could hold the key to the problem of uncertainty omission is: what role does uncertainty play in the viewer’s graphical inference?

Well, we know that visualization authors care a lot about getting their message, or signal, across. That’s why uncertainty is perceived as so dangerous. And yet, according to the graphical inference model, a viewer recognizes a signal in a visualization by hypothesizing a reference distribution, a distribution constructed by imagining draws from some model. Whether or not this process is conscious, a viewer cannot imagine a model without acknowledging probability (or likelihood). Seeing what other values the visualized data might have taken — whether through a conventional or more novel visual representation of uncertainty — provides directly relevant information for guiding the viewer’s inference. In theory, we might expect this to lead to more consistent inferences across viewers.

Why would an author be content to let the viewer mentally “fill in” the gaps, when they could exert more control over this process by representing uncertainty explicitly? My conversations with authors led me to believe that most are quite sensitive to the ways that small decisions in design impact the mental “filling in” or conclusions of their viewers. Often this occurs through considerations like how to scale a y-axis, or what data to include for context.

What may be less obvious, though, is how visualizing uncertainty can help one communicate an intended message. For example, even if the viewer imagines a slightly different model than the author, visualizing uncertainty can convey what other values are possible for the data that is shown — adding some consistency to the data that viewers mentally “fit” a model to. This should reduce degrees of freedom in the inference process, bringing the viewer’s inferences closer to the author’s goal. Even better, the author could visualize uncertainty that directly corresponds to the model they intend the viewer to infer, reducing many degrees of freedom.

Thinking more deeply about the process a viewer engages in when they examine a visualization may be the key to recognizing the value of depicting uncertainty. One renowned data journalist I interviewed seemed to sense this.

However, it can be challenging to identify an inference task in contexts where visualizations are intended simply to inform. That’s where visualization research might help. In addition to producing tools and techniques for calculating, representing, and communicating uncertainty, researchers could help authors identify inference tasks, formalize them, and reason about the viewer’s inference process. As applied as the field of data visualization may seem, we need theory to guide the empirical data we collect and the tools that we build. A deeper understanding of how uncertainty can help visualization authors communicate could lead to a world in which presentations of data are a bit more honest. And who doesn’t want that?


  1. In a Bayesian framework, the reference distribution is a set of draws from the posterior predictive distribution, the distribution of the outcome variable implied by a model that used the observed data to update a prior distribution of beliefs about the unknown model parameters (Gelman 2003, 2004).
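In symbols (writing y for the observed data, θ for the unknown model parameters, and ỹ for a new outcome), the posterior predictive distribution averages the model’s predictions over the posterior:

```latex
p(\tilde{y} \mid y) = \int p(\tilde{y} \mid \theta)\, p(\theta \mid y)\, d\theta
```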


Written by Jessica Hullman

Assistant Professor, Computer Science (and Journalism!) at Northwestern University. I research data visualization and the communication of uncertainty.

Multiple Views: Visualization Research Explained

A blog about visualization research, for anyone, by the people who do it. Edited by Jessica Hullman, Danielle Szafir, Robert Kosara, and Enrico Bertini
