Today our former colleagues Ben Eater and Grant Sanderson published Visualizing quaternions, an exciting addition to a hazy new medium. It combines video-like narrative explanation, interactive representations, and game-like challenge prompts. At first, it feels like watching a YouTube video that uses great visualizations to explain something… but then you realize that you can interrupt the speaker to manipulate the representation yourself, and they intermittently prompt you to do so with some challenge.
How should we design such systems? What are their prospects? I’ll explore a few ideas here. I know of no common name for this medium, so I’ll call these narrated explorables. A few more examples (write me if you know of others!):
- Khan Academy’s computer programming lessons feature a live programming environment being manipulated by a narrator you can interrupt at any time. (Internally, we call these “talkies.”)
- Our research project Cantor tried expanding this notion to objects on a 2D canvas, demonstrated here with an interactive number block manipulative (search the page for “Interrupting, answering, and elaborating othersʼ recordings”).
I’m excited about this medium for creating explanations:
- Relative to an interactive on its own (like PhET), narrated explorables add the emotional resonance of an invested speaker, plus challenge prompts which guide the learner’s experience.
- Relative to an interactive alongside text (like an explorable), narrated explorables synchronize narration and visualization, which directly connects what’s being discussed in a given sentence to some change in the representation—no hopping back and forth between figure and text.
- Relative to the video on its own (like a 3b1b video), narrated explorables support active learning: they push the learner to ask and answer questions of the representation instead of passively listening.
Because this medium is so underdeveloped, the design patterns aren’t at all clear. In fact, the creators of these narrated explorables seem to gravitate toward fairly distinct mental models. I’ll try to outline each in turn.
It’s both charming and revealing: several times in the narration for Visualizing quaternions, Grant refers to the experience as an “applet.” That’s a loaded word! Applets have existed in education for decades, and PhET is a great modern exemplar.
The applet mental model for narrated explorables suggests that you’ve loaded up some interactive representation of some concept—say, a circuit simulator. The circuit simulator itself is inert. It doesn’t contain any narration or prompts; it’s designed to be a tool that faithfully represents the underlying system. But sitting next to you and looking at that same applet is a tutor, and as you pass the mouse back and forth, they talk you through how the applet works and explain something by manipulating the circuit simulator. When the tutor leaves, you’re left with the circuit simulator, which you’ll use for various projects in the future.
Or, another angle: it’s like the narrator loaded up the circuit simulator and screencasted themselves using it to explain something. Note how you see Grant’s cursor! You can pause the screencast and play with the circuit simulator; when you resume, the screencast continues oblivious to what you did.
The applet model particularly makes sense if the important piece is the interactive representation—if it’s an authentically useful tool going forward, and the narration’s really just there to set up the initial experience. I suspect this model makes less sense if the interactive representation is mostly useful during initial instruction: the clear separation between narration and the interactive leaves many opportunities on the table. We’ll discuss some below.
The great thing about 3blue1brown videos is that they often feature interesting visualizations. What if you thought you were watching one of those videos… but to your surprise, you found that you yourself could manipulate the visualization Grant was describing? As he iterated on Visualizing quaternions, this was the mental model Ben kept articulating for me.
We see this mental model play out most directly through the literal video playback controls. It also means that when Grant asks you to do something with the interactive, the narrative track doesn’t wait for you.
One thing I love about this mental model is that it takes the combined medium very seriously. It asks: why can’t every 3blue1brown video be like this in its entirety? Why can’t we bring all the π characters and title cards and visual storytelling panache into the interactive environment? Why can’t we have the narrator indicate elements on the screen by wiggling them, like in the videos—rather than by gesturing with a recorded mouse cursor, as the applet mental model suggests? Those are great questions!
I think this mental model makes great sense when most learners will passively consume most of the experience as if it were a video. It creates a smooth ramp for those with additional curiosity to satisfy it at any moment, but perhaps when most people interact, it’ll be to jump back 10 seconds to hear a piece of the explanation again—not to manipulate the interactive themselves.
In my own experiments with narrated explorables, I’ve often drawn on video games as a mental model.
In video games, players often share the world with virtual characters, and those characters often speak aloud while manipulating the interactive game environment you share. Game narratives don’t have a playback scrubber; instead, they might have a replay/skip button and a chapter list.
Game characters often give you challenges and react when you complete them, sometimes even changing their narrative accordingly. Game narratives wouldn’t prompt you to do something and then keep talking a few seconds later; instead, they’d wait expectantly for your action and respond accordingly.
More importantly, they might reflect your current challenge in the interface and offer feedback on your progress towards it. They might highlight elements in the environment which would help you complete that challenge; if you continued to struggle, they might offer extra advice. Indeed, the full complexity of the environment is often not revealed all at once. The game might require some challenges to be completed before you can continue, while offering others as optional tangents, always making it clear how to get back to the narrative “happy path.”
This type of mental model seems stronger when we expect most learners to be actively interacting with the experience throughout it, just as in a game.
Zooming out, I’m particularly excited about this mental model because it connects to an old notion in edtech history that still feels underexplored to me: guided discovery in microworlds. Microworlds, as originally described by Papert, are meant to be simple, useful, and general sandboxes for discovering ideas in a particular domain, designed with points of entry accessible to new learners. Guided discovery adds a bit of structure to open-ended discovery learning. For instance, learners might begin in a constrained environment, then those constraints might loosen as they become more fluent—just like in many video games.
But discovery learning has a checkered empirical history: it seems many learners may need plenty of guidance. Narration—especially outstanding personal narration like Grant’s—offers an extra layer of guidance that may make microworlds more practical in many contexts.
Irrespective of their competing mental models, I’m tremendously excited about Ben and Grant’s work with Visualizing quaternions. What will the next narrated explorable be? Are there more interesting mental models to consider—perhaps ones which leave behind reference to any prior media?