A Reflection on VIS2019: Or, How Doomed Are We?

Michael Correll
Nov 19
Masks from the Kwakwaka’wakw’s Hamatsa secret society on display at the UBC Museum of Anthropology in Vancouver. I think it’s a little odd that so much of the wider world’s conception of the visual and aesthetic culture of the indigenous peoples of the Pacific Northwest comes from items and artworks that were supposed to be kept secret, but that’s a post for another day.

I recently attended the IEEE VIS conference in Vancouver, BC. There were a lot of interesting and compelling talks, but of course my sunny and optimistic disposition kept returning to the same thought: “aha, this is the year when the wheels start coming off.”

What I mean by that is that academic visualization work is often situated within computer science (which is why our main conference is an IEEE conference, and most [but not all!] of our bigwigs are in computer science departments). But this allegiance is more or less just an accident of history. Sure, you need some computer science skills (computer graphics, databases, statistical programming, say), but I wonder whether we would still be seen as a primarily technical discipline if our tools were better. An analogy that I’ve tried out a few times (with admittedly poor success) is to imagine that WYSIWYG word processors like Microsoft Word never existed, and so the major conferences in literature and technical writing all happened under the auspices of a computer science-driven typesetting conference, and you’d see papers on new LaTeX packages in between announcements of new novels.

People have been communicating with data for centuries before the computer. But if all we have is a computer science background, and we lack grounding in psychology, statistics, or communication, then we’re going to start running into trouble once our elegantly designed charts and graphs start meeting the squishy and biased brains of the people we’re showing them to. The crumbling of that computational edifice, the decline of visualization as fundamentally a computer science problem, is what I started seeing in the work at VIS this year. To be clear, I think this is a good thing! It’s a sign that the field is maturing and expanding outwards and no longer just about throwing pixels on screens. I’ve selected a handful of talks that I saw as contributing to the “de-CSifying” of visualization, and how they might get us thinking about the fact that hey, we’re communicating information to people, not just maximizing data ink.

Bad News About Bias

A Task-based Taxonomy of Cognitive Biases for Information Visualization

Evanthia Dimara, Steven Franconeri, Catherine Plaisant, Anastasia Bezerianos, Pierre Dragicevic

Cognitive biases, always near and dear to my heart, received a lot of attention this year. The problem I’ve had when trying to study them in any systematic way is that every field seems to generate its own terminology and methods, and they don’t give me much to go on. For instance, if I give people data with an outlier in it, maybe I would expect them to fall victim to the “availability bias” or the “saliency bias” or the “von Restorff effect” and give the outlier too much weight. Or maybe I expect them to succumb to the “Ostrich effect” or the “status quo bias” or the “normalcy bias” and suppress the outlier. These are all partially overlapping concepts with different predictions for behavior that would be measured in subtly different ways. If opposing reactions to the same stimulus can both be justified by the same body of theory (and both count as “irrational”), maybe we need a better theory. The Dimara et al. work is a much-needed effort to enumerate and clarify some of the fuzziness around cognitive biases. Structuring this space is vitally important if we’re actually going to do anything about the ways that people actually reason.

Towards a Design Space for Mitigating Cognitive Bias in Visual Analytics

Emily Wall, John Stasko, Alex Endert

Speaking of doing things about cognitive biases, a short paper by Wall et al. (the short papers were very good this year, by the way; I’m glad we added tracks for them) enumerates some of the things people have proposed to address cognitive biases in analytics. This paper talks about some of the existing work in the space, including a lot of mixed-initiative approaches that I would love to see in general analytics systems. Maybe highlight that field you’ve been ignoring this whole time? Maybe have the system present alternative or oppositional analyses so you keep your confirmation bias down to a dull roar? I think there’s a lot of power here in designing systems in mixed-initiative ways to deal with these biases. But wherever there’s power, there’s the potential for harm. I’m very worried about what happens if we get these sorts of questions about agency and bias wrong, so I’m glad people are keeping track of this space.

The Curse of Knowledge in Visual Data Communication

Cindy Xiong, Lisanne van Weelden, Steven Franconeri

The last paper I want to talk about related to bias, by Xiong et al., discusses the potential of visualizations to act as ambiguous figures. In their study, they presented participants with a time series chart with several potentially interesting features (e.g., two lines cross, or two other lines approach each other and then veer apart) and then told each participant a different plausible story about one and only one of the features. Participants, primed in this way, not only identified the specific feature that they’d been told about as the most salient, but also said that a naïve viewer, who had been told nothing, would also pick out that feature as the most interesting. To recap: the things we already know about the data alter how we think everybody else perceives the chart. This is pretty bad news, to me: it suggests that a lot of the charts that we hope are changing minds and encouraging exploration might just be big fancy confirmation bias machines. It also suggests that persuasion and prior knowledge are central to how people make use of charts.

Stuff We’ve Swept Under the Rug

I will admit some bias here (if that wasn’t already obvious), but I’ve found that one of the problems with the narrow scope of visualization as a computer science discipline is that we’ve got these huge problems that we sort of just stick our heads in the sand and ignore, because they are too complex or because our standard approach of “just showing people the data” won’t cut it. They are problems where we need to talk to and learn from actual human beings, not just ones we can design our way around in a vacuum.

Why Authors Don’t Visualize Uncertainty

Jessica Hullman

Uncertainty is one of these elephants in the room. Our world is highly uncertain, uncertainty is an inescapable component of data collection and analysis, and two primary uses of visualizations are to predict and to persuade, so you’d think we’d be visualizing uncertainty constantly. And yet, we rarely do. Or, when we do, we use esoteric things like error bars and uncertainty cones that we know are prone to misinterpretation. Hullman’s paper is an analysis of why people ignore uncertainty in their work, and it’s somewhat depressing: from an assumed lack of data literacy to a pressure to reduce complexity, there seem to be lots of incentives to hide uncertainty altogether. I’m intensely worried that these pressures mean that many or even most of our visualizations in decision-making contexts have the potential to do a lot of damage and lead a lot of people astray.

Illusion of Causality in Visualized Data

Cindy Xiong, Joel Shapiro, Jessica Hullman, Steven Franconeri

The next super dangerous thing we ignore is what I call “causality catnip.” It’s super super tempting to look at a visualization and think that, just because things in our data look connected, they actually are related. Visualizations have rhetorical force in making us draw associations in our data, but very rarely have the logical or statistical structure for us to verify those associations. This study by Xiong et al. examined how strongly people assume causality from visualizations and found, somewhat alarmingly, that relatively minor changes to the chart design (lines versus bars, say) and the level of aggregation can produce sizable differences in how much causality people assume is in their data. That would be a bad enough result, but what was also troubling to me is that even the “safest” visualizations still had people making causal assumptions about their data the majority of the time.

Sociotechnical Considerations for Accessible Visualization Design

Alan Lundgard, Crystal Lee, Arvind Satyanarayan

The last “uh oh, why aren’t more people studying this” paper (another short paper, by the way), by Lundgard et al., deals with their reflections on a project making tangible visualizations for students at the Perkins School for the Blind. There are lots of bad visualization habits that this paper surfaces in a direct and refreshing way. The first is accessibility: visualization folks like to make grandiose claims about how we’re “democratizing data,” but we still seem to be designing charts for the pretty narrow subset of people who 1) have the right kinds of statistical literacy, 2) have the right digital infrastructure, and 3) have an assumed set of visual and motor capabilities that are not even close to universal. People in visualization have reacted with ignorance and occasional outright aggression to the idea that our craft is not particularly accessible, and that we can and should do better. The second issue this paper discusses is what they call “parachute research” (the researcher drops into the community out of the blue, tackles the problem, and then takes off once the “job is done”). This model of collaboration, which is sadly common in visualization research, shows a lack of engagement and longevity in our work that limits its impact and narrows its utility. We need better, more equitable models for how to collaborate with others!

Moving Away From Positivism

Criteria for Rigor in Visualization Design Study

Miriah Meyer, Jason Dykes

One of the odder artifacts of the fact that academic visualization has hitched its wagon to computer science is that we also brought logical positivism along with us, the philosophical standpoint that we’re uncovering objective truths through logically sound methods. This would be okay if our discipline had a lot of rigorous theory and first principles to draw on, but we largely don’t. I personally have a smattering of concepts from perceptual psychology (most of which are a few decades out of date), a few bits of information theory, and these “design principles” things that appear to be mostly educated guesses by folks like Tufte and that seem to evaporate whenever we look at them with too much scrutiny. It’s not nothing, but it’s not really enough for me to claim that a positivist lens is the right way to structure our work. Meyer and Dykes’s paper (in the aptly named “Provocations” session) attacks this presumption directly, and argues for an “interpretivist” approach to visualization evaluation. We’re dealing with people in very idiosyncratic circumstances, and the things we learn doing our work are highly unlikely to be capital-T universal Truths.

Visualizing Temporality and Chronologies for the Humanities

Johanna Drucker

More than how we evaluate visualizations, positivism also impacts how our visualizations are interpreted. Lots of people assume that “data” are just objective facts about the world, and that there’s no arguing with a visualization. I’m reminded of Peck et al.’s work on rural attitudes towards visualizations: certain groups of people would ignore political leanings or the credibility of sources just because, well, the data is the data. Drucker has been talking about this problem, this reification and false objectivity of data in visualizations, for years. Her capstone talk highlighted how this can be particularly problematic for things like geography and time. We all have our own subjective and elastic impressions of time and space, and just visualizing things on a common “objective” scale flattens and erases all sorts of important information, especially for humanities data. Visualizations and maps are not mirrors of the world, but designed artifacts representing a set of perspectives.

Approaching Humanities Questions Using Slow Visual Search Interfaces

Adam Bradley, Victor Sawal, Christopher Collins

The last artifact of positivism in our visualization praxis is how our visualization systems are designed. I’ve used what I call the “ethernet delusion” in the past to describe visualization design practice: the notion that the one goal of information design is to maximize the speed and throughput of data transmission to the human brain, as though a visualization is just a big ethernet cable sticking out of a server and into your occipital lobe. Missing from this is that comprehension and understanding are a process, and that, more than just communicating data, we also need to promote reflection and skepticism and all of the parts of generating and owning knowledge that are distinct from just having facts. In something of a parallel to the “slow food” movement, Bradley et al. propose a “slow visualization” movement that encourages us to develop systems that force people to slow down and develop ownership of and guidance over their analysis. Dovetailing nicely with the Meyer and Dykes work mentioned above, I wonder if our standard hammers for evaluation (quantitative studies of time and error, with perhaps a perfunctory qualitative analysis of user satisfaction) discourage us from building these systems, and from supporting use cases beyond just getting a set of numbers in front of our eyeballs as quickly as possible.

“VAST House Style”

This one wasn’t a paper, but Enrico Bertini had a tweet this year reflecting on the conference that got me thinking. He refers specifically to the visual analytics arm of the conference (VAST), but I think it’s true for the conference as a whole as well.

Just like the New York Times and FiveThirtyEight and xkcd and so on have their instantly recognizable visual signatures, I fear that Enrico Bertini is pointing out what VIS’s house style is: lots of coordinated views in a dashboard that don’t make any sense at a glance unless you’ve spent a lot of time with the designers or with the data.

I get the impulse: we want to show off how much complex thinking we did, and since we’re usually working with people who have extremely complex problems and data sets, one or two simple views aren’t going to cut it. And these sorts of systems usually result in impressive teaser figures for our papers. But I wonder how much utility people get out of these “kitchen sink” views. We’ve got all of these cool narrative structures for visualizing data, and allegedly a lot of expertise in simplifying and abstracting data into usable forms, but we’re still making people dive into the deep end immediately with our designs. How would a journalist present the information that we’ve so painstakingly organized for our domain collaborators? Or a statistician? I bet it wouldn’t look much like yet another kitchen sink dashboard. I bet there would be a lot more narrativity and text and explanations if our professional rewards were less centered around engineering flashy systems.

Conclusion

The academic visualization community is already a radically different beast today than it was ten years ago. I had a minor moment of disbelief when I realized that I could make this claim with certainty; this was my 9th time attending VIS. And I bet it will be nearly unrecognizable ten years from now. It seems silly to think of a field as relatively young as ours having an old guard that is being deposed, but it sure seems like that’s what I’m seeing. We’re adopting all sorts of new perspectives and priorities and even epistemologies. There are other trends that I picked up on at the conference (I intentionally avoided the subject of machine learning in this post, for instance, despite ML being the omnipresent subject of discussion), but I wanted to record these kinds of sea changes as they happen. What troubles are we running into with our current ways of thinking about visualization design? What needs to happen to our discipline to give us the tools to think our way through these problems, and how can we bring those changes about?

Multiple Views: Visualization Research Explained

A blog about visualization research, for anyone, by the people who do it. Edited by Jessica Hullman, Danielle Szafir, Robert Kosara, and Enrico Bertini

Written by Michael Correll

Information Visualization, Data Ethics, Graphical Perception.
