Reciprocal Podcasting: Scratching the Surface of Audio Storytelling and Interactivity

Jordan Wirfs-Brock
4 min read · Sep 12, 2016


Listening to audio is captivating and intimate. But podcasting, as it is now, is a passive experience — whereas deep learning and engagement rely on active experiences. Podcasting on sensor-laden smartphones is an untapped opportunity for truly transformative media.

But what if audiences could participate in podcasts by reacting to content — which in turn reacts to them?

I’m working on a project, funded by Illinois Humanities, that aims to investigate this question.

The Vision

Imagine: You are listening to a podcast on your smartphone about fear — a character-driven story on the evolutionary biology behind fear and how it manifests itself in the body.

The host asks you to put your finger over the flashlight on the back of your phone. The audio transitions to quiet, calm white noise. You wait. Suddenly you hear a deafening crash of glass, some angered shouting. Your heart races. You look at your phone. It displays a graph of your heart rate data so you can see it spike in real time.
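The heart-rate trick the host describes is essentially photoplethysmography (PPG): the flash illuminates your fingertip, the camera records tiny brightness changes as blood pulses through it, and peaks in that brightness signal correspond to heartbeats. A minimal sketch of the estimation step, assuming we already have a list of per-frame average-brightness samples at a known frame rate (the function name and threshold are illustrative, not from any real app):

```python
import math


def estimate_heart_rate(brightness, sample_rate_hz):
    """Estimate beats per minute from camera brightness samples (PPG sketch).

    brightness: list of per-frame average brightness values
    sample_rate_hz: camera frame rate in frames per second
    """
    # Center the signal so pulse peaks sit above zero.
    mean = sum(brightness) / len(brightness)
    centered = [b - mean for b in brightness]

    # Count local maxima above a simple threshold as heartbeats.
    threshold = 0.5 * max(centered)
    beats = 0
    for prev, cur, nxt in zip(centered, centered[1:], centered[2:]):
        if cur > threshold and cur > prev and cur >= nxt:
            beats += 1

    duration_min = len(brightness) / sample_rate_hz / 60
    return beats / duration_min


# Synthetic 10-second signal at 30 fps pulsing at 1.2 Hz (i.e. 72 bpm).
samples = [math.sin(2 * math.pi * 1.2 * t / 30) for t in range(300)]
print(round(estimate_heart_rate(samples, 30)))  # prints 72
```

A real implementation would need noise filtering and a camera pipeline, but the core idea — peaks in fingertip brightness are heartbeats — is this simple.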

The host continues the narration, including other sounds to inspire fear: growling animals, sirens, a scream. Based on data your phone is collecting in real time, the discussion turns to your fears. How do you respond, and how does that compare to how others have responded? You see new insights about yourself, and you learn to empathize with the fears of others because you can see the commonalities and differences.

You are asked if you would like to contribute your data, anonymously, to a crowd-sourced research project. You say yes. By the end of the podcast, you’ve learned about fear in an intimate way and contributed valuable data to fear research.

Reciprocal Podcasting: A New Interactive Audio Medium

What I’ve just described is a new medium at the intersection of audio storytelling, sensor journalism and engagement. It will marry the interactivity of a chat bot with the personal intimacy of a podcast. Combined, these form a kind of augmented reality for podcasting. Data collection and real-time feedback open doors for audiences to become aware of things that they don’t normally notice in their everyday lives, uncovering insights about themselves, their interpersonal interactions and society. This new type of storytelling could be used to create podcasts about…

  • The malleability of the concept of time and language’s influence on time and space. Listeners are asked to guess how much time has elapsed in different scenarios that reveal the accuracy (or inaccuracy) of their intuition of time.
  • Personal energy use in cars, homes and neighborhoods. Listeners monitor their energy consumption in real time or search for heat leaks in their homes.
  • Comedy. In a twist on highly popular comedy podcasts, listeners are presented with a joke that evolves based on whether or not the phone’s sensors detect laughter. (Think SAT/GRE questions whose difficulty adapts to right/wrong answers, only more fun.)
  • The “healthiness” of the built environment. Listeners measure indoor air quality, natural sunlight, and other factors that influence mental and physical health. Based on sensor input, the host guides the listener into spaces that represent the environments of people throughout the world, building empathy and a shared experience.

Podcasts provide a personal, private listening experience. But they could also be used to put the listener in another’s shoes and create an experience that generates empathy. What if the audience could also participate in the podcast by contributing data and reacting to content — which could in turn react to them? Listeners could learn new things about themselves and how they interact with the world around them while contributing to an emergent data community.

The Project

Realizing the vision above is more than I have time or resources to complete in the next few months. But I do have time to get cracking on some of the underlying questions behind interactive podcasting for (a) listeners, and (b) storytellers. And I’ll do so through user testing, interviews and rapid prototyping.

Questions for podcast listeners:

  • How do listeners react to audio content that reacts to them?
  • Do listeners find it personable, creepy, or something else?
  • What is the appropriate level of interaction (how much is too much)?

Questions for audio storytellers:

  • How will content creators adapt to this type of storytelling?
  • How hard is it to tell a truly interactive, non-linear story?
  • What goes into learning this new skill?

Over the next few months, I’ll be working with the CUTGroup in Chicago to do testing with podcast listeners (and potential listeners) to understand their listening habits and how they might respond to audio interactivity.

I’ll also be talking to audio producers about how their storytelling techniques might change if they could talk to their audiences in real time, and I’ll work with storytellers to pilot content that might work on a reciprocal podcasting platform.

Next, I’ll be making some prototypes of how reciprocal podcasting might work — both in terms of technology/user interface and in terms of content.

Then, I’ll take those prototypes back to the user testers to see how they react.

Have thoughts? Ideas? Think this is cool? Crazy? Ill-advised? Whatever it is, I’d love to hear from you. Thanks!


Jordan Wirfs-Brock

Human-computer interaction researcher, designer & educator using data as a creative material; CS professor @ Whitman College; recovering journalist; runner.