Article: Research Debt (Distill)

Google Brain’s Chris Olah and Shan Carter on the middling exposition in ML research and the need for distillation

Jacob Younan
AI From Scratch
4 min read · Mar 27, 2017

--

First off, many thanks to Denny Britz’s The Wild Week in AI and Rob May’s Technically Sentient for featuring Distill’s launch in their most recent newsletters. I’m a very satisfied subscriber of both.

Before I touch on the article referenced in the title, you should get some quick background on the ‘what’ of Distill. The best way to do this is to read Y Combinator’s announcement and the About page of the website:

If you read the above, great. If you skipped it, the heavily summarized version is that Distill is both an ML journal and a digital publishing framework. The goal is to get researchers out of the static, outdated world of PDF publishing and into a dynamic web-based format complete with interactive visuals, responsive layouts and links, among many other features. It’s a step towards ending the painful irony of bleeding-edge technology research being communicated through decades-old means.

A few days ago, I wrote that Two Minute Papers should serve as inspiration to researchers to find more engaging means of communicating their findings. While Distill isn’t producing videos (yet), it’s addressing the same pain point: continuing to publish research in this format at this pace is inefficient and may even be unsustainable.

This is where Chris and Shan’s piece, titled ‘Research Debt’, crystallizes a problem with ML research publishing and learning. It’s the ‘why’ of Distill:

They explain the concept of ‘debt’ through the lens of two parties: the explainer and the audience. As I understand it, the crux of this problem in ML research right now is that both explaining and reading research are increasingly labor-intensive. Several factors drive this labor increase for explainers and their audience:

  1. The amount of quality new research being published is increasing
  2. Nearly all research builds on the growing body of research before it (Chris and Shan use a great mountain climbing metaphor to explain this)
  3. The standard means of explaining research is an academic study in a PDF whose content isn’t easy to comprehend for several reasons

The first two are situational factors outside of researchers’ control. The third is the sub-standard tool researchers are using in an attempt to cope with the first two. This problem is exacerbated by the fact that everyone in the industry is both an explainer and part of the audience.

So why is there ‘debt’ in this ecosystem and why is it a problem?

It’s too hard for explainers to do a great job, so instead they create explanations that are more difficult for the audience to understand. My interpretation is that the audience agrees to lend their labor in exchange for understanding the findings. For the audience, this exchange should reduce the labor required to understand future research, thus repaying their initial loan.

The problem is the explainers keep asking the audience for loans and the audience simply doesn’t have the capacity, because it’s not getting any easier to understand the continuous stream of research. In plain English: it’s too damn hard for everyone to keep up. The result is people begin missing things or consciously narrowing their focus to cope with the labor overload — a bankruptcy in this metaphor.

In many cases, this is why people like me (and those far more informed than I am) are always scouring the internet for clearer explanations given by people other than the authors.

The Editors at Distill conveniently term the process of effectively explaining research ‘distillation’, and want to empower research authors with the tools to distill better from the start. No more labor loans required.

I find myself frequently citing explanations like DeepMind’s WaveNet post rather than the underlying research papers. The posts aren’t replacements, but they’re much easier places to start and are a great example of what it means to distill. It’s clear not every researcher has the tools or design resources to build posts like this themselves, but you really don’t need to be Google either. It should be totally within the capabilities of a publisher to replicate and improve on this format. (Update: OpenAI just completely redesigned their site, and they’ve got a bunch of posts in a similar format.)

Clearly, I’m a big fan of this idea, and it’s not even meant for me. Distill is being developed to solve a problem within the research community, but there are obvious halo effects for beginners in the field and for media outlets attempting to further distill breakthroughs for the broader public.

As a beginner and someone concerned with the broader public’s capacity to comprehend the impacts of AI, I hope to see the research community embrace initiatives like Distill.

To borrow from the mountain metaphor, a clear path is helpful no matter where you are on your ascent.

Credit: Illustration by Shan Carter. I’m usually one of those first few dudes at the bottom.
