Visualizing Racism, Enhancing Perception, and Explaining Machine Learning: Reflections on Openvisconf 2018

Mike Freeman
6 min read · May 20, 2018


This year’s openvisconf provided a venue for pressing conversations in the visualization domain. While many conferences fall into a trap of only looking inward — celebrating successes that have little effect on anyone outside of their domain, debating inane distinctions between design tactics — this conference seemed to pull off something more connected to the purpose and growth of our field.

While these two days were not without their (appropriately celebrated) wonky discussions (amazing, @enjalot!) or frivolous animations, there seemed to be a genuine discussion about applying visualization to ongoing issues of social equity and the promotion of open science. My experience at the conference was, of course, shaped by who I talked to, my racial/gender/national/sexual/class identity, and other factors, so I don’t mean to claim that it was a universal or universally positive experience. That said, I found the following discussions worth reflecting on and sharing.

Visualizing Racism

Aaron Williams’s talk, “How data, and the visualization of it, helps us understand ‘us’,” took on measuring and expressing racial segregation in the United States. The talk began with a historical look at how visualization has expressed race data over time, work largely pioneered by W.E.B. Du Bois.

W.E.B. Du Bois’ visualization of urban/rural populations, from the Library of Congress

Williams’s 21st-century take on visualizing race in the U.S. appeared as a captivating piece in the Washington Post this May. In it, he extends earlier visualizations of diversity to delve deeper into metrics of segregation, a more nuanced take on the topic:

Visualizing segregation, by Aaron Williams (from the Washington Post)
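As a concrete illustration of what a segregation metric measures, here is a minimal sketch of the index of dissimilarity, one common measure in this literature (an illustrative choice, not necessarily the metric behind the Post piece; the tract counts below are invented):

```python
# Index of dissimilarity: the share of one group that would have to
# move for the two groups to be evenly distributed across tracts.
# D = 0.5 * sum_i |a_i/A - b_i/B|, where a_i and b_i are tract-level
# populations of each group and A, B are their citywide totals.

def dissimilarity_index(group_a, group_b):
    """Return D in [0, 1]: 0 = perfectly even, 1 = fully segregated."""
    total_a, total_b = sum(group_a), sum(group_b)
    return 0.5 * sum(
        abs(a / total_a - b / total_b)
        for a, b in zip(group_a, group_b)
    )

# Hypothetical tract counts for two groups across a few tracts.
evenly_mixed = dissimilarity_index([10, 20, 30, 40], [1, 2, 3, 4])  # 0.0
fully_apart = dissimilarity_index([100, 0], [0, 50])                # 1.0
```

The measure is scale-free: only each tract’s share of its group’s total matters, which is why the proportional counts above score 0.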

Not only was his work beautiful and clear, but he also dared to talk unabashedly about the role of visualization in an unjust society. It was inspiring to see someone so directly remind us of the (potential) purpose of our work. His was a clear call to action, challenging the audience to channel their skills not only towards maximizing profits but also towards uncovering and undoing injustices.

Slide from Aaron Williams’s talk

In a similar vein, Amanda Cox and Kevin Quealy’s “Disagreements” talk — a favorite amongst many attendees — highlighted recent work on racism in America. While they drove the discussion with a set of snarky and endearing interpersonal debates from their years of collaboration, at the center of their disagreements was a common goal — a commitment to crafting honest and impactful visual designs.

Intergenerational wealth shifts, from The Upshot

The intensity and granularity of their Disagreements expressed their passion for using visualization as a tool to drive understanding and compassion around evolving political discussions. For example, their quippy debates about variation in visual form were grounded in a concern about accessing and interpreting the complex data in their article on the Punishing Reach of Racism for Black Boys. The final product not only effectively expressed the data, but crafted an emotionally engaging piece that shows how the wealth of Black men drops in a racist society.

Federica Fragapane’s work also centered issues of race, particularly looking at immigration into Italy. In her work The Stories Behind a Line, she took on the challenge of humanizing data to show the deeply personal and complex paths towards immigration.

One immigrant’s journey to Italy, featured in Stories Behind a Line

Given the purpose of visualization so well expressed by these talks, conversations about how to make visualization better seemed all the more important.

Enhancing Perception

In the context of such pertinent work, these discussions felt crucial rather than frivolous. Steven Franconeri took this on most directly, with research that delves into the preattentive perceptual processing we use to identify patterns in visualizations.

Tracking eye movements in visual comparisons, from Steven Franconeri’s talk (downloaded from Twitter)

While a minor delay in eye movements may seem trivial, ensuring rapid comprehension matters given the limited attention of audiences.

Research into visual communication was further echoed by Heather Krause, who investigated common fallacies in data analysis and visual communication. Her talk was concerned more with the analysis of data than with its visualization, noting various fallacies that distort interpretations by both analysts and their audiences.

Fallacies presented by Heather Krause (downloaded from Twitter)
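One canonical fallacy of this genre is Simpson’s paradox (an illustrative example, not necessarily one Krause presented; the (success, total) counts below are invented), where an aggregate trend reverses the trend within every subgroup:

```python
# Simpson's paradox: "treated" beats "control" inside every subgroup,
# yet the pooled comparison flips. Counts are illustrative only.

data = {
    "subgroup_1": {"treated": (81, 87), "control": (234, 270)},
    "subgroup_2": {"treated": (192, 263), "control": (55, 80)},
}

def rate(counts):
    successes, total = counts
    return successes / total

# Within each subgroup, the treated success rate is higher...
within_each = all(
    rate(arms["treated"]) > rate(arms["control"])
    for arms in data.values()
)

# ...but pooling the subgroups reverses the ordering, because the
# treatment was applied far more often in the harder subgroup.
pooled_treated = rate((81 + 192, 87 + 263))   # 273/350, about 0.78
pooled_control = rate((234 + 55, 270 + 80))   # 289/350, about 0.83
reversed_overall = pooled_control > pooled_treated
```

An analyst who only sees the pooled rates, or only the subgroup rates, can walk away with opposite conclusions from the same data.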

Using visualization as a tool for understanding research was further explored by the talks around Machine Learning.

Explaining Machine Learning

Data visualization has been used not only as a tool for understanding data, but for understanding the things we do with data (i.e., data science). In this way, visualization plays a pertinent role in making data science techniques intuitive and interpretable. Because these algorithms can influence anything from who goes to jail to whose food stamps get accepted, understanding how they work is important for everyone (for more on Data Violence, see this robust work by Anna Lauren Hoffmann).

Prior work in explaining machine learning, by Stephanie Yee and Tony Chu

This year’s Openvisconf featured a number of excellent visual explanations of data science techniques. In his talk on Visualizing Uncertainty, Matthew Kay crisply and clearly delineated Bayesian and Frequentist methods. In doing so, he took care to explain not only how to visualize uncertainty, but why we have uncertainty in the estimates we generate.

Uncertainty generated from Bayesian and Frequentist approaches, by Matthew Kay (see slides)
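To make the Bayesian/Frequentist distinction concrete, here is a minimal sketch of both kinds of interval for a simple proportion (my own toy example, not from Kay’s slides; the counts and the uniform prior are assumptions):

```python
import math
import random

# Observed data: 42 successes in 100 trials (hypothetical).
successes, trials = 42, 100
p_hat = successes / trials

# Frequentist: 95% normal-approximation confidence interval around
# the point estimate -- a statement about the estimation procedure.
se = math.sqrt(p_hat * (1 - p_hat) / trials)
freq_interval = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian: a uniform Beta(1, 1) prior updates to a Beta(43, 59)
# posterior; approximate the 95% credible interval -- a statement
# about the parameter itself -- with posterior draws.
random.seed(0)
draws = sorted(
    random.betavariate(1 + successes, 1 + trials - successes)
    for _ in range(10_000)
)
bayes_interval = (draws[249], draws[9749])  # 2.5th / 97.5th percentiles
```

With this much data and a flat prior the two intervals nearly coincide; the difference Kay stressed lies in what each interval is a statement about.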

This type of statistical literacy is becoming increasingly important so that people can understand the quantitative information being presented to them. The importance of understanding uncertainty became abundantly clear in November 2016.

New York Times election needle, accessed on Google Images

Shan Carter furthered this discussion of peeling back the layers of Machine Learning in his presentation. In the past year, distill.pub has championed visual explanations of machine learning, which he described in great depth. While some of the internal mechanics of Neural Networks remained opaque (to me, at least), the talk showcased some of the pressing research exposing the complexities of image recognition.

What a Neural Network Sees — presented by Shan Carter, featured here

(In the vein of open science, all of my workshop materials teaching D3 users how to use React to scaffold their visualizations can be found here.)

All that said — perhaps I just saw what I wanted to see (or heard what I wanted to hear). I’m heartened to see the topics people are tackling with visualization, and humbled by the impressive ways they’re doing it. A major thanks to the speakers, the program director Lynn Cherny, and the program committee that pulled this all together. Already looking forward to next year!


Mike Freeman

Faculty at @UW_iSchool teaching #datavis, #rstats, #webdev, and their impacts on society. Views are my own. He/him.