How We See Our Data: Vision Science and Visualization, Pt. 1

Danielle Szafir
Jul 24

TL;DR: Vision science can help illuminate how visualizations take us from sight to insight. We can use the ways people see data to craft better visualizations by challenging existing intuitions, discovering new guidelines, and designing new methods for visualizing data grounded in our ability to make sense of what we see. This post is the first in a two-part series and focuses on the experimental side of how vision informs visualization.

Our understanding of what we see and how we see it is constantly evolving as part of vision science. Vision science covers topics ranging from how we process shapes and colors to how we interpret the structure of the world to how we remember things we’ve seen. We can use fundamental concepts, such as what visual features people use to interpret visualized data (e.g., the area of a mark versus its height) and the mechanisms by which we see (e.g., the processes by which we estimate trends or compare features) to anticipate what kinds of insights people may glean from visualizations.

Vision science has increasingly shaped visualization research, enabling new theories about how, when, and why we gravitate towards the visualizations we do. Visualization research primarily leverages vision science in three ways: (1) to challenge intuitions about how we interpret visualized data by measuring data perception using vision science techniques, (2) to discover new information about how we see data using vision science theories and methods, and (3) to inspire visualization designs that use theories of how we interpret visual information to overcome limitations in prior approaches. This article focuses on the first two pathways to explore how vision science informs visualization: how can we use what we know about the ways we see the world to understand what makes a visualization effective?

The ways we visualize data change the patterns people see in that data. For example, people might describe A and B with respect to the data distribution (left), relative changes (center), or relative magnitude (right). Graphical perception experiments measure how well different visualizations allow people to estimate different properties of their data.

Traditional approaches to answering this question focus on graphical perception. Graphical perception studies measure which methods for encoding data yield the best performance, where “best” is typically defined as how quickly or accurately we estimate certain statistics from visualized data. However, we know that factors such as the questions an analyst seeks to answer or the properties of the dataset can change which visualizations are most effective. This limits how well we can generalize graphical perception results to visualization design: what works “best” depends on the contexts under which we study these visualizations.
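To make “accuracy” concrete: many graphical perception experiments, following Cleveland & McGill, score each judgment with a log absolute error, where a constant offset of 1/8 keeps a perfect answer from taking the log of zero. A minimal sketch in Python:

```python
import math

def cm_log_error(judged_pct: float, true_pct: float) -> float:
    """Cleveland & McGill-style accuracy score for a single judgment:
    log2 of the absolute error (in percentage points), offset by 1/8
    so that a perfect judgment does not take the log of zero."""
    return math.log2(abs(judged_pct - true_pct) + 1 / 8)

# A participant judges a proportion as 30% when it is really 25%:
# the score is log2(5.125), roughly 2.36. A perfect judgment scores -3.
error = cm_log_error(30, 25)
```

Averaging these scores across many trials and participants yields the performance rankings such studies report.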

If we can instead hypothesize why particular visualizations work well, we can more generally reason about when and how a visualization may work for a given scenario. Sadly, most visualization guidelines rely on assertions about human vision that are woefully out-of-date: visualization systems frequently justify their design choices using findings from over 30 years ago. Our sense of sight is complex, and what we know about how it functions changes constantly. Vision science is a rapidly evolving area of research that studies a wide-ranging collection of phenomena such as how we search for a given object, what elements of a scene we remember, and how we make sense of complex scenes with hundreds or thousands of different objects (this article summarizes just some of the major aspects). Recent cross-overs between visualization and vision science researchers attempt to understand how these phenomena change what we know about designing visualizations. Through these studies, we can build a deeper understanding of both what and why we see what we do in a visualization and use this insight to more effectively visualize data.

To Challenge: Sharpening and Refining Intuitions

Most guidelines for visualization arise from decades of design intuitions, such as “avoid using pie charts when possible” or “maximize the data-ink ratio.” Yet how we interpret data using even some of the most foundational visualizations is still poorly understood. We can use theories from vision science to interrogate existing design guidelines and prove, disprove, or refine these guidelines through experiments grounded in theories about how we process the information that we see.

Visual Attention & Pies

Classical graphical perception studies in the tradition of Cleveland & McGill can point us towards visualizations that do or don’t work well for a specific kind of question. However, many of these studies focus on providing a holistic comparison of which visualizations do and don’t work for a given task rather than on the components of a visualization design that explain the results. As a result, these studies conventionally tell us a whole lot about very little: we know which visualizations work for a specific scenario, but not why they work, and it is the why that lets us generalize our results. If we can instead design studies that explore why a visualization may or may not work for a given task, we can better generalize our findings to design. Vision science provides us with a means to do so by describing how people make sense of the shapes, colors, sizes, and other visual features of a scene.

Recent work on how we read pie charts provides an excellent example of how to use vision science to understand what makes a visualization tick. Pie charts provide a classic example of where such heuristics (intuitive rules of thumb) dominate our thinking: designers know that pie charts often aren’t great, and we have long thought that pie charts are bound by our abilities to interpret the angles of each wedge (largely based on a series of small-scale studies from the 1920s). Skau & Kosara wanted to understand what people pay attention to when using pie charts. To do this, they designed a set of quasi-pie charts that represented data using different visual features (e.g., arc length, angle, area, etc.). They then measured how accurately people estimated data proportions using each representation.

Kosara and Skau (2016) found that people probably use features tied to the arc length and area of a slice of a pie chart rather than its angle to interpret data.

Contrary to long-held beliefs, the study found that people estimate values in pie charts most similarly to how they read visualizations using arc length and area, but not angle. These results suggest that people are not paying attention to angle when interpreting visualizations like pie charts, but rather to the area and/or arc length consumed by each slice of the pie. We can extend these findings to hypothesize about other pie-chart-like designs and to provide a grounded explanation for visualization designers’ natural aversion to techniques like exploded pie charts. By understanding the visual features people use to interpret these visualizations, designers can better anticipate biases introduced in visualizations representing data proportions.

Separating & Expanding Data Dimensions

Understanding how we process the data we see can also expand on and sharpen design intuitions by providing actionable models designers can use to improve visualizations. For example, we use guidelines about separability — the amount of interference between any pair of channels — to inform the channels we use in multidimensional visualizations.

Consider the visualizations below. Each encodes two points along two dimensions. Both visualizations represent one dimension using lightness; however, the visualization on the left uses each point’s vertical position to encode the second dimension while the right uses each point’s hue. Now try comparing the lightness between each set of points. When lightness is paired with position (left), you can readily estimate data differences along each dimension (the first point is both higher and lighter than the second). These dimensions are said to be separable. However, in the visualization on the right, it is much harder to compare the relative lightness and relative hue differences between the points. These dimensions are more integral: they are harder to visually pull apart.

When we build multidimensional visualizations, we generally want to use channels that are separable — they don’t interfere with one another. We have qualitative rankings of what channels are separable and integral, but can we measure how much any given dimensions will interact?

Conventional design wisdom says that color and position are highly separable: encoding Dimension A with position does not (in theory) interfere with data we read from Dimension B encoded using lightness. Visualization guidelines classically arrange pairs of visual channels on a continuum from “more” to “less” separable. When we want to most faithfully represent each data dimension, we prioritize those dimensions that maximize separability. However, if we can model how different channels interfere with one another, we can build visualizations that correct for this interference to help people more accurately interpret their data.

Vision science findings about conjunctive search — our ability to find objects defined by multiple features — tell us that features interact in systematic ways: we can use the features of an object to predict how quickly we can find complex objects in the real world. Based on these ideas, Steve Smart and I conducted a set of experiments to model how shape, size, and color interfere with one another in scatterplots. These experiments asked people to detect whether or not two datapoints shared the same value on a specific dimension, letting us explore just how much any given channel (e.g., shape) interfered with reading data along another (e.g., size).

You can try this experiment for yourself: in the two scatterplots on the left, do the colorful squares share the same color value or different values? On the right, do the blue shapes have the same size value or different values? In the case of the colorful squares, each pair uses two different greens (though the pair of greens in the top scatterplot is the same as the pair in the bottom scatterplot), while each of the shape pairs shares the same size. Yet people consistently found it around 30% harder to distinguish colors on the outlined squares than the filled squares, and the “T”-shaped points were seen as larger than other shapes more than 80% of the time.

Our ability to separate data encoded using shape, size, and color is far more complex than traditional guidelines predict. We can use models of these abilities to proactively design around potential data analysis errors introduced by encoding multiple dimensions at once.

We systematically adjusted size and color difference across shapes to generate a set of functions quantifying how color and size perception varied across different shapes. These models confirmed existing intuitions about separability but also showed where existing guidelines fall short: while changing the shape of a mark shifted perceptions of size and color, changing size or color did not change how well people reasoned about a mark’s shape. That is, separability is asymmetric. The way that shape, size, and color interacted proved too complex to be explained by intuitive heuristics. However, by modeling these interactions as functions of a point’s shape, size, and color, we can refine and expand our rankings of separable channels as well as update existing libraries and tools to account for nuanced interference between dimensions in multivariate visualizations.
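As an illustration of what such a model looks like in practice, the sketch below scales the difference people perceive along one channel by a factor that depends on another channel. The shape factors and function names are hypothetical placeholders, not the fitted values from our experiments; the structure simply mirrors the asymmetry described above (shape modulates size perception, but size does not modulate shape perception):

```python
# Hypothetical interference factors: how much a mark's shape inflates
# or deflates perceived size differences. Illustrative values only,
# not fitted results from the experiments described above.
SHAPE_SIZE_FACTOR = {"circle": 1.00, "square": 0.95, "T": 1.20}

def perceived_size_diff(true_diff: float, shape: str) -> float:
    """Perceived size difference, modulated by the mark's shape."""
    return true_diff * SHAPE_SIZE_FACTOR[shape]

def perceived_shape_diff(true_diff: float, size: float) -> float:
    """The asymmetry: size does not interfere with shape judgments,
    so the perceived difference is left unchanged."""
    return true_diff

# Under these placeholder factors, a 10-unit size difference on
# "T"-shaped marks reads as a 12-unit difference.
```

A tool equipped with fitted versions of these functions could invert them, padding size differences on shapes that compress them so that perceived differences match the data.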

To Discover: Building New Guidelines

Vision science offers theories about the ways we achieve different goals when looking at a scene. We can use theories about visual attention — our ability to find a given object in a set of objects — to predict how a visualization’s layout helps people find interesting points (see Haroz & Whitney’s 2012 VIS paper). We can look to ensemble coding — our ability to rapidly estimate properties of the distribution of features in a scene — to understand how we might estimate the average value in a line graph. The theories, models, and frameworks offered by vision science can drive new visualization guidelines by predicting how people use the visual information available through different visualization techniques to answer a question.

Understanding why people might perform a given task well for a specific visualization design helps to generalize the results of an experiment into actionable guidelines for use with a broad set of data and domains grounded in how we expect people to process visual information. A classic example of this is pop-out: we know that people can quickly find a red point in a field of blue points or a large point in a field of small points. Interactive visualizations use pop-out to help people quickly find important datapoints. For example, when we brush over an interesting datapoint, we can highlight other datapoints with matching characteristics to make those points easy to find. We can use these ideas to ask (and answer) new questions about what makes a visualization effective for a given task.
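For instance, a linked-highlighting interaction might exploit pop-out by giving every datapoint that matches the brushed point a saturated highlight color and pushing everything else to a muted gray. A minimal sketch (the function and color choices are illustrative, not from any particular library):

```python
def highlight_colors(categories, brushed_category,
                     highlight="#d62728", muted="#cccccc"):
    """Map each point's category to a color: points matching the
    brushed category pop out in a saturated hue, while all other
    points recede into a neutral gray."""
    return [highlight if c == brushed_category else muted
            for c in categories]

# Brushing a point in category "a" highlights all "a" points:
colors = highlight_colors(["a", "b", "a", "c"], "a")
```

Because a lone saturated hue among grays is found preattentively, the matching points can be located without scanning the whole plot.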

Grounding Comparisons

We can compare two graphs by placing them side-by-side (juxtaposition), layering them on top of one another (superposition), or by computing the similarities and differences in their structure (explicit encoding).

Comparison is amongst the most common visualization tasks: every time we look at a visualization, we are either explicitly comparing different values or implicitly comparing the data we see against some prior expectations. We have a relatively well-defined toolbox of techniques for comparing two datasets in a visualization: we can superimpose data (i.e., layer datasets in the same physical space), juxtapose data (i.e., place datasets side-by-side), or explicitly encode comparative relationships (i.e., directly encode the difference between corresponding datapoints). Historically, we’ve had no formal guidelines to choose between these techniques: designers relied on their own intuitions coupled with trial-and-error.

However, a recent study by Ondov, Jardine, Elmqvist, & Franconeri leveraged ideas and techniques from vision science to ground our understanding of how people actually compare items using these techniques. The study broke comparative designs down into three parameters related to phenomena studied in vision science — colocation (are the datasets juxtaposed or superimposed), symmetry (are juxtaposed datasets mirrored or duplicated), and motion (juxtaposing values in time using animated transitions). They measured how these factors change comparison using two tasks — finding which point changed the most between two datasets and identifying which datasets were most correlated — to understand how and when particular comparative designs are most effective.
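The first of those tasks is easy to state computationally, which is part of what makes it a clean probe of perception: given corresponding values from two datasets, find the point with the largest change. A small sketch:

```python
def biggest_change(before, after):
    """Index of the datapoint whose value changed the most (in
    absolute terms) between two corresponding series."""
    diffs = [abs(b - a) for b, a in zip(before, after)]
    return max(range(len(diffs)), key=diffs.__getitem__)

# The second point moves from 8 to 1, the largest change:
idx = biggest_change([3, 8, 5, 2], [4, 1, 5, 3])  # idx == 1
```

The experimental question is how quickly and reliably people can perform this same computation visually under each comparative layout.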

By measuring how these techniques affect our abilities to conduct these tasks with common visualizations, the study provides grounded guidelines about the trade-offs of different designs. For example, while we conventionally think of superposition (e.g., designs heavy on co-location) as a “gold standard” for comparisons in simple datasets, the study found only limited evidence that superposition supported better comparisons than traditional small-multiples views. As comparisons became more complex, even this limited advantage disappeared, and mirroring, a seldom-employed technique in comparative visualization, improved people’s abilities to compare their data overall. We can use these results to inform new, more grounded guidelines for comparative visualization design.

Designing for Correlation

Many data analysis tasks also require analysts to estimate how correlated different variables are. Yet correlation is a complicated task: it requires estimating a complex relationship aggregated across large collections of points. In vision science, Rensink & Baldridge modeled how accurately people could estimate correlation as a function of the data. They measured this accuracy by asking people to either pick which of a pair of scatterplots had the higher correlation or to adjust the correlation of a sample scatterplot until it was halfway between two reference scatterplots. Over a series of experiments, they found that people’s estimates primarily reflect the distribution of points in a scatterplot rather than the bounding shape formed by the points; that is, people estimate correlation based on where the points are densest. This strategy makes people’s estimates more robust to outliers by instead focusing on the “meat” of the data distribution.
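To see why this density-based strategy matters, consider how sensitive Pearson’s r, computed from scratch below, is to a single extreme outlier; an estimate anchored to the dense core of the distribution would barely move. (The data here are made up for illustration.)

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

xs = [1, 2, 3, 4, 5]
ys = [1.1, 2.0, 2.9, 4.2, 5.0]               # tightly correlated core
r_clean = pearson_r(xs, ys)                   # close to 1
r_outlier = pearson_r(xs + [6], ys + [-10])   # one outlier flips the sign
```

A viewer attending to where the points are densest would still report a strong positive relationship in both cases, which is exactly the robustness the model describes.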

While this model offers insight into how people use scatterplots to estimate correlation, it provides little information on how well people might estimate correlation using other visualization designs that don’t convey this distribution as directly. Researchers from Tufts and Northwestern adapted Rensink & Baldridge’s methods to compare how well nine different visualizations, including scatterplots, parallel coordinates plots, and line graphs, communicate correlations between two variables. By measuring how sensitivity to correlations varied across these visualizations, they found that, despite their simplicity, scatterplots best support correlation estimates. The ranking provided by this study offers designers a new resource for reasoning about correlation estimates in different visualization types. Designers can use this model as a guideline for selecting visualizations for scenarios where analysts need to understand correlation.

Harrison et al. (2014) measured how well different visualizations support correlation (each column represents a different level of correlation for the underlying data). For the statistically curious, these results were later thoughtfully reanalyzed by Kay and Heer in 2015.

Making Visualizations Memorable

In addition to statistical tasks, vision science can also inform designers about other critical aspects of visualizations, such as how well a person may remember a particular design. In a pair of studies, Borkin et al. measured how well people could pick out target visualizations they had previously seen from a large collection of rapidly presented distractor visualizations. These studies used methods from vision science designed to measure memorability in pictures of the real world. The researchers showed people a set of target and distractor visualizations in rapid succession (1 to 2 seconds per image). People pressed a key if they felt they had seen the visualization before, and each visualization was scored based on how often it was correctly or incorrectly identified as a repeat.
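The scoring in such a recognition experiment boils down to two per-image rates: a hit rate (how often true repeats were correctly flagged) and a false-alarm rate (how often an image was flagged despite never having appeared before). A sketch with an illustrative data layout:

```python
def memorability_scores(responses):
    """responses: (image_id, was_repeat, pressed_key) tuples.
    Returns per-image hit rates and false-alarm rates."""
    hits, repeats, fas, firsts = {}, {}, {}, {}
    for img, was_repeat, pressed in responses:
        if was_repeat:  # a true repeat; pressing the key is a hit
            repeats[img] = repeats.get(img, 0) + 1
            hits[img] = hits.get(img, 0) + int(pressed)
        else:           # first showing; pressing is a false alarm
            firsts[img] = firsts.get(img, 0) + 1
            fas[img] = fas.get(img, 0) + int(pressed)
    hit_rate = {i: hits[i] / repeats[i] for i in repeats}
    false_alarm_rate = {i: fas[i] / firsts[i] for i in firsts}
    return hit_rate, false_alarm_rate

# Image "A" repeated twice and was recognized once; "B" never
# repeated but was (wrongly) flagged once out of two showings.
hit_rate, fa_rate = memorability_scores([
    ("A", True, True), ("A", True, False),
    ("B", False, True), ("B", False, False),
])
```

A memorable visualization is one with a high hit rate and a low false-alarm rate across many viewers.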

Colorful visualizations, those with strong titles, and those with familiar objects such as a football or T-rex were best remembered, while “clean” minimalist visualizations were least memorable. This work unexpectedly led to a dramatic increase in references to dinosaurs appearing in vis research! It didn’t matter how long people looked at the target visualization in the first phase of the experiment: designs that were memorable after looking at them for 1 second were generally also memorable at 10 seconds. These results do not suggest that we should introduce extraneous dinosaurs and footballs into all visualizations, but instead offer us a new trade-off to think about in design: how do we balance the precision offered in minimalist visualizations with the recognizability of more ornate graphics?

The design of a visualization influences how well we recall having seen it before. Visualizations that are more colorful or contain familiar objects are generally more memorable than those with minimalist designs (from Borkin et al, 2013).

Discovery studies that explore how we extract information from a visualization allow us to anticipate what factors of a visualization design support different insights into our data. Seemingly simple questions like “how do we actually read a pie chart?” or “how do we compare data values?” help designers predict how well new visualizations might perform based on the visual components of their designs. However, these studies provide a narrow bridge between visualization design and vision science understanding. Focusing on not only components of design but also how our brains transform those components from pixels to insights can provide actionable and generalizable knowledge for visualization research.

Putting Results into Practice

By understanding what and why people see what they do, we augment designers’ intuitions with evidence-based understandings to improve how we represent data. Visualization development is still equal parts art and science: aesthetics, conventions, and even unknown components of visualization processing all require a careful hand when crafting visualizations. However, a fundamental understanding of why we see what we do in a visualization can ground intuitions in known processes and help designers reason about the trade-offs and constraints in different approaches based on their own expertise with the data and domain.

There is still so much we don’t know about how people process information in a visualization. For example, what kinds of insight can we build from data in the periphery? How do we compare information across multiple datasets? When might optical illusions distort our data and how might we fix them? Visualization research has often drawn a crisp boundary between what constitutes visualization research versus vision science research. However, for even basic charts, we only have a limited understanding of why they work. This line must be blurred in order for our field to truly understand when and why a given visualization design works.

We can also use our understanding of how we process visualized data to overcome limitations in existing visualization designs and drive novel approaches to representing data. The next post will discuss how vision science might instead inspire approaches for visualizing data with an emphasis on how we design for lots of data.

Multiple Views: Visualization Research Explained

A blog about visualization research, for anyone, by the people who do it. Edited by Jessica Hullman, Danielle Szafir, Robert Kosara, and Enrico Bertini

Thanks to Michael Correll and Jessica Hullman

Danielle Szafir

Assistant Professor @ CU-Boulder working at the intersection of data visualization, visual cognition, and HCI.
