Waiting for an AI “Snow Fall”

AI needs to augment what content creators are already doing, not offer a low-effort way of producing an inferior outcome.

Carl Alviani
Protagonist Studio
3 min read · Aug 11, 2023


[Image: An animated 3D map of the Stevens Pass ski area in Washington state.]

In 2012, the New York Times published Snow Fall, a multimedia story-documentary hybrid that blew up in the media and journalism world. For a short while it earned so much attention that its title turned into a shorthand verb, as in “Can we Snowfall this? We’ve got the video assets…” The prediction was that it would transform digital journalism, and maybe other kinds of online media as well. The experience was so engaging, and brought such great readership numbers…how could it not succeed?

Ten years later, the rich-content treatment pioneered in Snow Fall is still very much the exception. A few big newspapers and magazines do something similarly ambitious a few times a year, but it’s incredibly labor-intensive. You have to write the story, of course, but you also have to incorporate several different types of media in a way that feels natural and actually adds to the story rather than making it unreadable.

Now this is one area where AI content generation could change things quite a lot. Writing a story that has a point of view, forms and supports an argument, appeals to shared experience — that’s really hard for an algorithm. But taking different types of content and putting them into a structure that fits an established pattern? That’s AI’s wheelhouse.

To give a wildly oversimplified example, imagine that we picked apart Snow Fall or one of its more recent cousins and mapped it out: this many sections, with this many words, a video loop here, a slide show there, and so on. That map becomes a smart template, with flexible length and composition, but some underlying rules about what types of content go where, and enough intelligence to read text and spot opportunities for augmenting it with other elements.
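To make that slightly more concrete, here’s a minimal sketch of what such a template might look like as a data structure. Everything here is hypothetical (the slot kinds, the field names, the word counts); the point is how little structure a “smart template” actually needs before the intelligence takes over.

```python
from dataclasses import dataclass, field

@dataclass
class Slot:
    """One position in the template that a piece of content can fill."""
    kind: str                # "text", "video_loop", "slideshow", "map", ...
    min_words: int = 0       # only meaningful for text slots
    max_words: int = 0
    optional: bool = True    # flexible composition: optional slots can be dropped

@dataclass
class Section:
    name: str
    slots: list[Slot] = field(default_factory=list)

# One plausible mapping of a Snow Fall-style opening: a full-bleed
# video loop, a long text passage, then an annotated map.
opening = Section(
    name="opening",
    slots=[
        Slot(kind="video_loop", optional=False),
        Slot(kind="text", min_words=400, max_words=900, optional=False),
        Slot(kind="map"),
    ],
)
```

The structure itself can stay this dumb; the hard part is the rules that decide which asset belongs in which slot, and that’s where the AI earns its keep.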

Now a writer — yes, a human one — sits down with her notes and starts outlining a story. She drops a big folder of images and video into the template as well, and starts writing. And as she writes in one window, a prototype rich-content version starts populating in another. It’s wrong, obviously, and needs a lot of tweaking and adjusting, but it’s something, and it’s not terrible.

As the writer writes, she can watch how her text plays with the other elements, and spot opportunities for leveraging them. Maybe she can trim this paragraph back and add annotations to that map instead. The AI suggests structural shifts and proposes relevant content from the folder she dumped in, all of which can be dismissed, accepted, or modified.
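That “proposes relevant content” step sounds like magic, but even a crude version is easy to sketch. Here’s a deliberately naive stand-in (word overlap instead of a real semantic embedding model, with made-up filenames and captions) for ranking the assets in that dumped folder against the paragraph being written:

```python
import re

# Throwaway words that would otherwise dominate the overlap score.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "on", "at"}

def tokens(text: str) -> set:
    """Lowercase word set minus stopwords; a crude stand-in for an embedding."""
    return set(re.findall(r"[a-z']+", text.lower())) - STOPWORDS

def suggest_assets(paragraph: str, assets: dict, top_n: int = 3) -> list:
    """Rank assets by word overlap between the paragraph and each asset's
    caption. A real system would score semantic similarity instead; the
    interface is the point here, not the scoring."""
    para = tokens(paragraph)
    scored = sorted(
        ((len(para & tokens(caption)), name) for name, caption in assets.items()),
        reverse=True,
    )
    return [name for score, name in scored if score > 0][:top_n]

assets = {
    "stevens_pass_map.mp4": "animated 3D map of the Stevens Pass ski area",
    "avalanche_loop.mp4": "video loop of snow sliding down a steep slope",
    "skier_portrait.jpg": "portrait of a skier at the trailhead",
}
print(suggest_assets("The avalanche began high on the slope above Stevens Pass.", assets))
# -> ['stevens_pass_map.mp4', 'avalanche_loop.mp4']
```

Swap the overlap score for an embedding model and you have the core of the suggestion loop; everything else is interface.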

What we’ve made here is essentially a WYSIWYG word processor for rich multimedia. It doesn’t generate beautiful interactive experiences at the click of a button, but it does remove friction from the process, and there’s plenty of precedent to suggest that’s enough.

Now, who wants to make it?
