IEEE VIS 2018: Color, Interaction, & Augmented Reality

IEEE VIS 2018 was full of exciting advances in visualization, including papers, panels, and workshops on guidelines for visualization, the intersection of machine learning and visualization, systems for analyzing text, time series, and molecules, and more. This post highlights three papers that offer particularly exciting advances from this year’s conference: one focusing on a classical design conflict, a second exploring ways to measure user interaction, and a third introducing a new tool for building immersive visualizations.

When Guidelines Clash: Determining Which Color Means More
Schloss, K. B., Gramazio, C. C., Silverman, A. T., Parker, M. A., and Wang, A. S. Mapping color to meaning in colormap data visualizations. (Paper, Honorable Mention for Best InfoVis Paper)

When you use color to encode values, you may run into a seemingly simple problem: how do I orient my color ramp? Designers have long-held intuitions about the right ways to match colors to data. For example, some insist that darker colors always represent larger quantities (the dark-is-more bias). Others recommend that the colors appearing most opaque map to the largest values while those appearing most transparent map to the smallest (the opaque-is-more bias). However, designers have had no data to support either of these guidelines.

The lack of evidence supporting these guidelines is especially problematic as they can suggest conflicting mappings. Consider a blue-to-white color scheme. On a white background, both the dark-is-more bias and the opaque-is-more bias would suggest the darkest blue maps to the largest value. However, on a black background, opaque-is-more suggests the lighter values are larger while dark-is-more still maintains that the darkest blue is larger. At least one of these guidelines must be wrong.

Researchers from Wisconsin, Brown, and Caltech measured these biases across a set of common colormaps and background colors. They found that neither dark-is-more nor opaque-is-more alone explains how people intuitively interpret colors; instead, the optimal mapping between color and value varies as a function of both the color ramp and the background color. Their recommendation: sidestep the opacity bias entirely by choosing colors that do not appear transparent on a given background.

This study is a great example of how design and empiricism do not have to be at odds: design intuitions help raise interesting questions that can be answered through experimental methods. Experimental results can be interpreted in the context of design to improve visualization guidelines and lead to grounded practices for better visualizations. While there has historically been contention between the visualization approaches from design and empirical traditions, bridging the two can foster a holistic understanding of what makes visualizations work.

Metricizing Interaction and Engagement
Feng, M., Peck, E. and Harrison, L., 2018. Patterns and Pace: Quantifying Diverse Exploration Behavior with Visualizations on the Web. (Paper)

Measuring visualization use, engagement, and effectiveness helps us understand what makes a visualization useful, but doing so is notoriously difficult. Such metrics can both describe user behavior and be integrated directly into interactive systems. For example, metrics that capture potential biases during data exploration allow systems to automatically alert users to skew in their own analysis behavior.

Metrics in visualization typically focus on providing insight into what patterns and data differences people see when looking at visualizations. However, quantifying the ways people interact with visualizations can tell us something about how effective our interaction design is, how engaged people are with the data, and what kinds of information people are interested in. Researchers from WPI and Bucknell drew on common, interpretable metrics from other fields to generate two new metrics for measuring how people interact with web-based visualizations.

The first metric compares which parts of a visualization different people engage with (exploration uniqueness). Natural language processing offers a metric for characterizing how important a given word is to a document: term frequency-inverse document frequency, or TF-IDF. The authors adapt TF-IDF into a simple, computable metric for exploration uniqueness by counting how often a person interacts with a data point relative to how often all people interact with that data point. This measure lets visualization designers estimate how critical certain components of a visualization are to a user's understanding of the data.
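As a rough illustration, here is a minimal Python sketch of that idea, treating each user's interaction log as a "document" and each data point as a "word." The function name and data layout are hypothetical, not the authors' implementation:

```python
import math
from collections import Counter

def exploration_uniqueness(logs):
    """Score how unusual each user's focus on each data point is.

    `logs` maps a user id to the list of data-point ids that user
    interacted with (one entry per interaction). TF is the share of a
    user's interactions spent on a point; IDF is log(total users /
    users who touched that point), as in standard TF-IDF.
    """
    n_users = len(logs)
    # How many users interacted with each data point at least once.
    touched_by = Counter()
    for points in logs.values():
        touched_by.update(set(points))

    scores = {}
    for user, points in logs.items():
        counts = Counter(points)
        total = len(points)
        scores[user] = {
            p: (c / total) * math.log(n_users / touched_by[p])
            for p, c in counts.items()
        }
    return scores
```

Note that a data point every user touched gets an IDF of log(1) = 0, so widely shared behavior scores zero and only distinctive exploration stands out.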

The second metric explores temporal patterns in how people interact with visualizations, looking at when and for how long people engage with different pieces of a visualization. To quantify this exploration pacing, they treat interactions as a discrete signal and use a wavelet decomposition to compute an aggregate description of the signal frequencies. This measure captures how long people spend actively engaged with the visualization.
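To make the idea concrete, the sketch below applies the simplest wavelet, the Haar transform, to a binned interaction signal (e.g., interactions per second). This is an illustrative stand-in for the paper's pipeline, not its actual code, and the averaging normalization is one of several common conventions:

```python
def haar_decompose(signal):
    """Full Haar wavelet decomposition of a length-2^k signal.

    Returns detail coefficients per level (fine to coarse) plus the
    final approximation (the signal mean). The detail energy at each
    level summarizes how much activity happens at that time scale.
    """
    assert len(signal) & (len(signal) - 1) == 0, "length must be a power of two"
    approx = list(signal)
    details = []
    while len(approx) > 1:
        new_approx, detail = [], []
        for a, b in zip(approx[0::2], approx[1::2]):
            new_approx.append((a + b) / 2)  # local average (coarser signal)
            detail.append((a - b) / 2)      # local difference (this scale)
        details.append(detail)
        approx = new_approx
    return details, approx[0]

def level_energies(details):
    # Aggregate description: squared detail energy per time scale,
    # from fastest (per-bin) to slowest fluctuations.
    return [sum(d * d for d in level) for level in details]
```

A burst of sustained interaction followed by idleness, for instance, concentrates its energy in the coarse levels, while rapid on-off clicking concentrates it in the fine levels.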

This research exemplifies how other analysis domains may inspire new solutions to visualization challenges. The paper notes how these metrics can reveal patterns in visualization and support comparisons between visualization approaches. However, metrics like these that quantify visualization consumption may also offer us a way to systematically reason about how well visualizations achieve broader goals such as insight building, trust in data, and effective communication.

New Toolkits for Immersive Visualizations
Sicat, R., Li, J., Choi, J., Cordeil, M., Jeong, W.K., Bach, B. and Pfister, H., 2018. DXR: A Toolkit for Building Immersive Data Visualizations. (Paper)

Visualization research commonly produces tools that make creating visualizations easier, such as D3.js, Vega, and Charticulator. However, these tools traditionally focus on building visualizations for the desktop. Recent advances in alternative display technologies have led to an increasing demand for visualization tools that move data analysis beyond the desktop. While researchers are often skeptical of the necessity of these technologies for visualization, little evidence exists to back this skepticism, due in part to the high barriers to engaging with these technologies.

Mixed reality (MR) technologies, including augmented and virtual reality devices, are arguably the most visible of these new display technologies in recent years. Advances in consumer-grade hardware like the Microsoft HoloLens, Oculus Rift, and HTC Vive provide a platform for immersive and compelling interactive experiences. However, development tools for creating these experiences are largely limited to traditional game engines like Unity and Unreal, which are not optimized for crafting data visualizations.

To remedy this gap, researchers from Harvard, Ulsan, Monash, and Edinburgh developed DXR, a toolkit for prototyping Data visualizations in miXed Reality. DXR builds on the Unity game engine to combine WYSIWYG and traditional code-based prototyping for immersive visualizations that developers can readily deploy to common commercial MR systems. By offering an extensible toolkit on a common development platform, DXR stands to significantly reduce barriers to crafting interactive visualizations in MR.

While our understanding of the utility of MR for visualization is still limited, tools like DXR could help accelerate growth in this area. By making it easier to build and deploy immersive visualizations, we can better understand exactly when and why these display technologies are useful for data analysis. These tools aid both skeptics and supporters alike, allowing them to build new applications and rapidly conduct studies that test the limits of what immersive visualizations might offer.

--


Danielle Szafir
Multiple Views: Visualization Research Explained

Assistant Professor @ CU-Boulder working at the intersection of data visualization, visual cognition, and HCI. More at http://cmci.colorado.edu/visualab