Makeover by Proxy

Brian Ondov
Sparks of Innovation: Stories from the HCIL
Oct 24, 2019 · 5 min read

How a derailed perception study led to us rethinking the fundamentals of our work.

Science works in mysterious ways. Last year at the IEEE VIS conference, our collaborative team from Northwestern and the University of Maryland presented one of the first perceptual studies on visual comparison between two charts. This year, we were hoping to extend this work to new settings, but found instead that none of our expectations from last year held up. In our paper “The Perceptual Proxies of Visual Comparison,” which we will present on Thursday, October 24 at the IEEE VIS conference in Vancouver, BC, Canada, we tell the story of how this led us to question the very fundamentals of our work. The spoiler: we adopted the idea of perceptual proxies as little “bots” (or computer programs) that can be used to explain how humans perform this visual comparison task.

To tell the story properly, we must go back to our work from last year, where we looked at visual comparison: the ability to compare quantities across two different charts. Researchers in vision and visualization have spent considerable effort studying perception in individual chart types. However, for visualizations to reach their full potential, we need to see multiple data series at once, enabling comparison. Last year, we showed that the way this comparison is presented, e.g. how small multiples are arranged, can significantly impact performance for some tasks. For example, we found that charts overlaid on top of each other are better than separate charts for extracting the biggest individual difference. We also saw interesting effects of symmetry and animation. Not only were these good results in their own right, but they were evidence for adding a whole new dimension to the problem space. This “cube” of factors — mark, task, and arrangement — looked like a rich new field of study just waiting to be explored.

But when we actually started to explore this cube further, things weren’t as rosy. Whereas last time around, clear and interesting factors emerged, this time we were left with muddled, inconclusive results. Had we just gotten lucky with the combination we had chosen before? Chaos ensued. Soul searching followed. None of our expectations were correct. What to do? Our project had hit a major roadblock. The deadline was nearing, and the fate of our research paper, and any to follow, was in question.

But sometimes taking a step back can put things in a new perspective. Determined, our Northwestern collaborator Nicole Jardine pondered the problem with her advisor and coauthor Steve Franconeri. In the eleventh hour (less metaphorically than you might think), our colleagues sent us a plan for how to salvage the paper by rethinking the fundamental underpinnings of our work. A tweet from HCIL coauthor Niklas Elmqvist captured his reaction (yes, its timestamp was five days before the submission deadline).

What was this radical new plan? Simple. What if instead of designing, piloting, and debugging dozens of combinatorial experiments, we could find some higher-level, organizing principle that could predict them? That’s a neat idea in theory, but actually finding the correct principle is a lot harder to do in practice. Nicole had a proposal, though: if we could express all of the potential visual tasks in terms of “bots” — simple algorithms that serve as perceptual proxies of the human visual system — we could compare the performance of these bots with the performance of our actual users. The bot that most closely tracks the human is the best candidate for how the human visual system works.

What’s an example of a bot, or proxy? We looked at many candidates. A straightforward one for two bar charts A and B is to select the chart that contains the biggest bar. Another proxy is to select based only on the first, top bar. These are simple but not very intelligent choices. A more complex one could calculate the total area that the bars in each chart take up and select the larger, or select the chart whose centerpoint lies furthest to the right (our bars grow from left to right).
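To make the idea concrete, here is a minimal sketch of such bots in Python. The function names and the example bar lengths are our own illustration, not the formal definitions from the paper; each proxy reduces a chart to a single score, and the bot "chooses" whichever chart scores higher.

```python
# Illustrative proxy "bots" for comparing two bar charts, each
# represented as a list of bar lengths. All names and numbers here
# are hypothetical, for illustration only.

def max_bar(chart):
    """Local proxy: the length of the single biggest bar."""
    return max(chart)

def first_bar(chart):
    """Local proxy: look only at the first (top) bar."""
    return chart[0]

def total_area(chart):
    """Global proxy: total area covered by all bars."""
    return sum(chart)

def centroid_x(chart):
    """Global proxy: mean horizontal centerpoint of the bars
    (bars grow from left to right, so each sits at half its length)."""
    return sum(b / 2 for b in chart) / len(chart)

def pick(chart_a, chart_b, proxy):
    """A bot's choice: the chart that scores higher on its proxy."""
    return "A" if proxy(chart_a) >= proxy(chart_b) else "B"

A = [4.0, 7.5, 3.2, 5.1]
B = [6.0, 6.2, 5.9, 4.8]
print(pick(A, B, max_bar))     # → A (the biggest single bar is in A)
print(pick(A, B, total_area))  # → B (but B covers more total area)
```

Note that different proxies can disagree on the very same pair of charts, which is exactly what makes it possible to tell them apart experimentally.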

Having defined a bunch of proxies, we examined them by going back to the data series that we used in our perceptual experiments. By comparing the choices suggested by the proxies to the choices people actually made, we were able to find which ones are the best candidates for how our brains perform the tasks at hand. This helped us realize that tasks can be grouped by whether they are local or global. Local tasks are concerned with the individual bars in each chart, whereas global tasks consider the shape of all the bars in each chart series as a whole. This insight may help designers choose arrangements by task without needing empirical evidence for every possible combination. For example, when looking at trends over time, such as stock market data, it would make sense to use an arrangement that supports global comparisons.

As is often the case, this work raises as many questions as it answers. For example, what if a viewer is not primed to perform a specific task as they were in our experiments? How do they select the right bot, or proxy, to run? How do these results extend to mark types besides bars? More evidence is needed to support these ideas, both from experiments and review of existing literature. At least, though, our insights may provide some hope that the situation is a little more promising than we feared at our darkest moment of despair.

Here is the detailed information for this paper:

  • Nicole Jardine, Brian Ondov, Niklas Elmqvist, Steven Franconeri. The Perceptual Proxies of Visual Comparison. IEEE Transactions on Visualization and Computer Graphics (IEEE InfoVis 2019). (Best Paper Honorable Mention). [PDF]


Graduate student in Computer Science at University of Maryland, College Park. Research fellow at the National Institutes of Health.