Patent #1,059,281 (Diving Apparatus for Marine Exploration and the Like)

Banning exploration in my infovis class

Eytan Adar
9 min read · Apr 26, 2017


I’ve banned the word “explore” from all project proposals in my infovis class. No explore. No exploration. No exploratory. No, you may not create a tool to “allow an analyst to explore the bird strike data.” No, you can’t build a system for “exploration of microarray data.” And, no, you can’t make a framework for “exploratory network analysis.” Just no.

The line I use on my students is: no one is paid to explore; they’re paid to find. I’m only 10% trying to be clever. Ninety percent, I’m dreading grading the output of projects that feature exploration as an objective.

I got to have lunch with John Tukey many years ago. We talked about birding. I wish we had talked about “Exploratory Data Analysis.” For all the clever names he created for things (software, bit, cepstrum, quefrency), what’s up with EDA? The name is fundamentally problematic because it’s ambiguous: “explore” can be both transitive (to seek something) and intransitive (to wander, seeking nothing in particular). Tukey’s book seems to emphasize the former — it’s full of graphical tools for finding specific patterns in the data: distribution types, differences between distributions, outliers, and many other useful statistical patterns. The problem is that students think he meant the latter.

Somehow that term has given students, and some professionals, license to be totally imprecise about what they are building and, more critically, about how to evaluate whether it works. If you’re not seeking anything in particular, any tool that lets you meander through data is perfectly reasonable. It makes the job of deriving insight completely the responsibility of the end-user. In that world, any decision is a reasonable one, evaluation is unnecessary, and there is no grade but an A. But that’s not the real world, and so I’ve banned “explore.”

Exploration is too unbounded in the context of building a tool. We need to be able to decide when exploration terminates. Forcing students to tell me what they want the end-user to find and/or what decisions they want to enable has led to better projects. It means both the students and I (and potentially a client) can evaluate a design decision in the context of a specific set of tasks. We can ask the question, “does the visualization make salient the information I need?”

The Purpose of EDA

My model is that exploratory data analysis consists of two main parts, both of which are a type of finding (a rough sketch of the distinction follows the list):

  1. Perceptual classification — the analyst looks at the data and matches what they see against familiar patterns (call this pattern-finding): does the time series show an upward trend (the analyst knows different trends)? a normal distribution in a histogram (the analyst knows what different histograms mean)? an outlier in a scatter plot? obvious clusters or random noise in an MDS plot? a big change in two bars? etc. In “exploring” one recognizes these patterns in the data and makes a decision: is it time to buy? do I run a parametric test? do I remove an outlier? should k be set to 3? should I run a t-test? should I compare to a null model?
  2. Perceptual clustering — the analyst finds groups of similar patterns without necessarily leveraging known patterns (call this pattern-making). Here the analyst is looking for repeating patterns within the context of the dataset (rather than relative to past experience or domain knowledge): what time series plots seem similar (these five all go up for three periods and then down, but these other three are flat for 10 periods before dropping)? which rows in the microarray heatmap are the same? which are different?
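
To make the distinction concrete, here is a rough sketch of programmatic analogues (my illustration, not anything Tukey prescribed; the data, thresholds, and library choices are arbitrary): pattern-finding checks each series against a named pattern the analyst already knows, while pattern-making groups series that behave alike without naming the pattern in advance.

```python
# Rough programmatic analogues of pattern-finding vs. pattern-making.
# The data and thresholds are made up purely for illustration.
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
series = rng.normal(size=(8, 50)).cumsum(axis=1)  # eight hypothetical time series

# Pattern-finding: test each series against a *known* pattern (an upward trend).
for i, y in enumerate(series):
    slope, _, _, p, _ = stats.linregress(np.arange(len(y)), y)
    if slope > 0 and p < 0.05:
        print(f"series {i}: looks like an upward trend (slope={slope:.2f})")

# Pattern-making: group series that behave alike, with no named pattern in mind.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(series)
print("groups of similar series:", labels)
```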

In the context of my class — and maybe more broadly — a successful exploratory tool is one that lets the analyst find the patterns they are looking for in the data (quickly, accurately, reliably, scalably). Being explicit about what those patterns are is a critical step in defining the project, but somehow the existence of the word “exploratory” let students think they could get away without that step. Banning it forces them to contend with the finding bits.

How do you know it worked?

I tried a different strategy early in my teaching and emphasized evaluation: How will you validate the success of your interface? Students were often flustered. They would eventually begin to define their research instrument based on “things found.” If they caught on, validation might include supporting the analyst in testing hypotheses, finding significant differences, outliers, patterns, etc. If they really caught on they’d start to articulate what decisions are enabled based on what is found (pattern A means buy a stock, pattern B means sell it). Unfortunately, if they didn’t catch on (a common occurrence), they’d tell me that they would evaluate their “exploratory tool” based on analyst “insights.” And while we, in research and practice, might have very specific definitions for insight, the students did not. “Insight” was as vague as “explore.”

I’m actually fairly sympathetic here to the students’ struggles. What I think is going on is that the students are confusing their own exploration of the data with what the end-user’s exploration will look like. That is, they are mistaking the design task of familiarizing themselves with the data for what the analyst might do. The student is often engaging in “exploration” for the purpose of identifying patterns that influence their design. They are often missing background knowledge and develop it in this step. But this is not “exploration” for the analyst, who may already have a mental model of interesting and uninteresting patterns. So fine, by all means, “explore” the data and engage in pattern-making, but understand that your pattern-making and the analyst’s pattern-making/pattern-finding/decision-making goals are different. Forcing the definition of a project goal in terms of specific tasks and enabled decisions ensures that the tasks will be grounded and realistic to the analyst. It’s a bit cruel but also more real — your goals are not your client’s goals.

Denying students the ability to frame their main task as exploration forces them to concede that what they want to find may not be what their end-user is looking for, and then to: (a) engage with their client or “create” a reasonable one with real tasks and decisions, (b) understand the data and tasks much more deeply, and (c) identify good validation strategies (no more insights!).

What about “surprise”?

Every once in a while I’ll get pushback on this ban. “What about surprise?” is a common refrain. After all, Tukey said “The greatest value of a picture is when it forces us to notice what we never expected to see.” Which is nice and all, but I don’t think what he meant was that we should build tools to walk aimlessly through the forest because eventually we’ll run into a zebra-striped unicorn (something neither the designer nor analyst could have predicted encountering, so it would be impossible to design for). A more likely interpretation is that we should build tools that let us see the man-made cabin in the forest when all we’re expecting to see is trees (natural trees are expected, and things that are not natural — e.g., man-made cabins — are possible but surprising given analyst expectations). Put into a statistical example: we may expect to see only normal distributions; a surprise (which is interesting because it would change my behavior from my default analysis) would be to see a Poisson distribution. Making sure I notice that there’s something strange about the Poisson (or any deviation from normal) is what forces me to “notice what [I] never expected to see.”

Students often start with the idea that it is not possible to be surprised if we focus on specific objectives, that there will be no serendipity if we are too constrained. Taken to an extreme, this leads to the model that a purely open-ended solution, one where no specific target is “pushed,” is the only reasonable way to arrive at surprising findings. Well, maybe. But I think it’s far more likely that this leads to apophenia rather than serendipity. Even more likely, the analyst finds nothing.

Regardless, surprise is a relative term — one anchored on how our observations differ from our expectations. We come into an “exploratory” analysis with some conception of the data. This might be a probability distribution over all patterns (and there are far fewer of those than we think when we are talking about detectable visual patterns). For example, we expect to see patterns A, B, and C (or some subset) but do not expect to see D, E, or F. Walking into an analysis, we can certainly have different confidence in each of these expectations. The amount of surprise is how different what we see is from that expectation. Exploration, then, is the process of checking each of these expectations and finding which of them, if any, were met.
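
One way to make the “amount of surprise” concrete (my framing, with made-up numbers): treat the analyst’s expectations as a prior probability over the detectable pattern types and measure surprise as the surprisal of whatever pattern actually shows up.

```python
import math

# Hypothetical prior over the patterns an analyst expects in a histogram.
# The probabilities are invented purely for illustration.
expectation = {
    "normal": 0.70,
    "skewed": 0.20,
    "bimodal": 0.08,
    "poisson-like": 0.02,
}

def surprise(observed_pattern: str) -> float:
    """Surprisal (in bits) of the observed pattern under the analyst's prior."""
    return -math.log2(expectation[observed_pattern])

print(surprise("normal"))        # ~0.5 bits: roughly what we expected
print(surprise("poisson-like"))  # ~5.6 bits: the kind of thing worth surfacing
```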

The model that the universe of patterns is unbounded is odd to me. While that may be superficially true, only a subset of those “look” like patterns to us (trends, seasonality, correlations, communities, outliers, etc.). EDA tools, in the way Tukey describes them, are built to target these — to help the eye find patterns of statistical interest. Tukey himself continues: “Exploratory data analysis is detective work — in the purest sense — finding and revealing clues.”

I always feel a little bad about this. Eliminating “exploration” removes some of the romantic notions and “joy” of surprise. I joke that there must be some German word for Finding Joy in Clicking Random Links on Wikipedia. Certainly there’s joy in clicking on random filters in a visual analytics system or zooming around a large network. But in the end, the joy comes from occasionally finding something unexpected or confirming some internal hypothesis. In which case, shouldn’t we build visualizations to make those findings salient?

What about Framework Systems?

The problem with teaching students generic tools and techniques (e.g., Tableau, Parallel Coordinates, Lineup) is that students latch onto the fact that these can be applied to many different domain problems. When they suggest building one of these as a final project, they often miss the point: they propose a wide array of abstract exploratory tasks and never anchor the project with domain-based case studies.

Somehow their read of “framework systems” (and maybe this is my fault for teaching them in a certain way) is: if I want to build a generic tool it needs to support a diverse set of applications, and because I don’t know what those applications will be, I need to support open-ended exploration.

This is a somewhat broken model as it does not acknowledge that the tools and techniques are a solution to a wicked design problem. The systems are notable not only for which patterns they make salient, but for which ones they occlude. Parallel coordinates, Dust and Magnets, and Lineup are all broadly intended for multivariate datasets but each tool emphasizes certain patterns (and consequently tasks) through encoding choices and interactivity features. End-users pick specific tools because some tools are better at finding specific patterns and thus enabling the decisions they care about.

Again, by eliminating exploration, even students who work on broader frameworks are forced to contend with which specific “finding” tasks they want to support. It has the added benefit of forcing them to find real examples to use as case studies to validate these ideas.

Teaching Infovis

I’ve taught my infovis class for a number of years now, and I’ve found that the projects have been much more successful since I banned exploration. I’m sure that it’s possible to guide the students to the same good projects with enough feedback and iteration. But that has become impractical for me given time constraints and the growth of the class.

Maybe this is obvious, but when I started teaching I thought that being more open-ended about what I allowed was better. That somehow it would lead to more diverse, cool, weird, and novel projects. In some ways that’s true, but as I’ve argued elsewhere, teaching infovis is itself a wicked design problem. Yes, I want creativity, but I also need to make sure students learn key concepts and skills. Forcing the specific language has, I think, struck the right balance. I still get interesting (and occasionally weird) projects, but project quality has increased, as has the number of learning objectives students hit (comparing alternatives, arguing for one solution over another, designing validation).

So, no. No exploration. No exploratory. No exploring. No explore.

Thanks to Jessica Hullman for her feedback
