Stewart Smith
Jun 11, 2019

I wrote the following article for AIGA’s Design Envy blog where it was first published on Friday, 11 November 2011—originally titled “Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies: Gallant Lab.” It concluded my week-long stint as Design Envy’s guest poster. In the nearly eight years since, Design Envy has fallen into disrepair and today the website is no longer functional. For that reason I’ve decided to re-post my minimally-updated article here for posterity. [Original post.]


The coming revolution will favor typographers. The Gallant Lab at UC Berkeley recently published a scientific paper and video titled “Reconstructing visual experiences from brain activity evoked by natural movies.” What the paper describes is a process for vaguely knowing what a person’s looking at by observing the blood flow in their brain. Let’s play with that idea a little bit.

Still from the film Eternal Sunshine of the Spotless Mind (2004).

Okay — so you look at an image. A scanner records your brain activity. A computer analyzes that scan and is able to reconstruct an approximation of the image you were looking at. How exactly does this voodoo work? If you’re familiar with the film Eternal Sunshine of the Spotless Mind you may recall that before Lacuna employees could erase a customer’s emotionally painful memories, they first had to scan the customer’s brain as it recalled those painful things. This scan created a literal map of precisely where in the brain those painful connections resided, and therefore a map of which areas Lacuna would dutifully damage to carry out the erasure later.

A similar principle applies in real life: Gallant Lab records the brain activity of participants as they view images, pairing each frame of visual input with a snapshot of the subject’s brain activity during that moment, creating a library of such pairings. The goal here is not to note the general location of activity—Gallant’s staff are already hyper-aware of where the visual cortex sits!—but to map the activity specific to a particular brain viewing a particular image. (Although human brains are all generally organized in the same fashion, we are each unique after all.) If the computer records that some random person—let’s call him David Cameron—shows particular activity patterns every time he sees the face of a pig, then the next time the computer detects a similar activity pattern we can assume that David Cameron is again looking at a pig face. (See also Black Mirror’s premiere episode, The National Anthem.)
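Strip away the actual neuroscience and the lookup I’m describing is easy to caricature in code: pair every recorded brain response with the frame that was on screen, then find the stored response that best resembles a new one. Here’s a minimal, purely illustrative Python sketch; the function names and the plain correlation measure are my own stand-ins, not anything from the Gallant Lab’s actual model.

```python
import numpy as np

def build_library(responses, frames):
    """Pair each recorded brain response with the frame being viewed.

    `responses` and `frames` are assumed to be equal-length lists of
    flattened numpy arrays. Hypothetical names, hypothetical data.
    """
    return list(zip(responses, frames))

def match_library(library, new_response):
    """Return the frame whose paired response most resembles the new
    response. Plain correlation stands in for the lab's real model."""
    def similarity(pair):
        stored_response, _frame = pair
        return np.corrcoef(stored_response, new_response)[0, 1]
    _best_response, best_frame = max(library, key=similarity)
    return best_frame
```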

A composite of pig faces.

Gallant Lab’s result images are combinations of pre-existing images. (This is visually similar to some of artist Jason Salavon’s work.) This is because the software can only imagine new images based on the images it used as input when recording the subject’s brain activity. What would you want in your library of base images? The more expansive the library the better, right? The point I’m unclear on is how much the act of remembering an image resembles actually seeing an image. (Warning: we’re getting speculative here.) What if you could close your eyes and imagine that pig face with enough clarity that the software would think you’re actually looking at a pig face? Keep that on the back burner for a minute while we move on.
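Squinting at those results, the composites look something like a weighted average of the best-matching library frames. Continuing the hypothetical sketch from above (again, a caricature of the idea, not the lab’s actual encoding model):

```python
import numpy as np

def composite_reconstruction(library, new_response, top_k=10):
    """Blend the library frames whose paired responses best match the
    new response into a single Salavon-style composite image."""
    scores = np.array([np.corrcoef(r, new_response)[0, 1] for r, _ in library])
    best = np.argsort(scores)[-top_k:]                   # indices of the k closest matches
    frames = np.stack([library[i][1] for i in best])     # frames assumed to be same-shaped arrays
    weights = np.clip(scores[best], 1e-6, None)          # keep the blend weights positive
    return np.average(frames, axis=0, weights=weights)   # weighted composite image
```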

Gallant Lab uses fMRI scanners to observe brain activity. (EEG does not deliver nearly the depth of data required for this task.) An fMRI machine isn’t exactly a mobile device! But suppose in the future you could shrink down an fMRI into something that is mobile — like a special hat that could pair with your smartphone. (Oh, we’re riding far out into the what-ifs now, but just roll with it.) And your smartphone already has a headphone jack, so what we’d have here is a system that could read what you’re seeing, interpret it on your phone (or use a cell network to pass data to a server that would do the hard number crunching, like the Shazam app), and then provide audio feedback. If you wanted to go further you might swap out the headphones for LCD glasses so your visual input would be matched by visual feedback. And if you wanted to go much further you might wonder: if you can read visual activity from the brain, could you also write to it?

So far we’re riding a wave of techno-romanticism to come up with a fantasy device that reads both real and imagined visual input, processes and “understands” it, and responds in kind. Imagine a really brainy version of Skype where you just imagine the text you want to send and, on the other side of the world, your conversation partner receives it as an image overlay on their reality. No screens. No physical input. Well, that seems like an awful lot of time, effort, and money for a slightly more swish hands-free mode. (Or really, for what a P300 Speller already does. Seriously.) If we’re aiming for visual telepathy we’d better be able to use the medium to its fullest.

The Bell Centennial typeface, known for its ink traps, designed by Matthew Carter in the 1970s.

No one is better suited to this future world of imagination-manifested text than typographers. It’s naive to assume that simply re-using our existing typographic forms would be the most efficient solution for visual telepathy. Much like the ink traps that keep tiny printed type legible, we would need medium-conscious features in our glyphs to improve legibility both for ourselves and for our software interpreters. Looking over Gallant Lab’s result images, it’s clear this is a fuzzy process. How robust would our new typography need to be in order to survive translation from imagination to pixels and back again? And more pressing: whose visual imagination is crisp enough to consistently construct the lines and curves with enough precision to be understood, even when their mind is exhausted? It’s analogous to children practicing their handwriting over and over until finally it’s passable for a broader audience. The hurdle isn’t the pencil or the pen. It’s the child’s ability to imagine and execute the curves. It’s not the technology, but the user.

Imagine a freelance typography gig in this future — meeting a client at a brain recording studio to sketch out the text for a new advertising campaign that will be broadcast straight to other people’s heads. You go through a few iterations and compensate for quirks in the technology — much like you would in a normal broadcast studio. And all of this sketching and finalizing would be infinitely quicker than using a mouse or Wacom tablet to navigate through traditional desktop software. Instead you just imagine the work — you perform the work using your imagination.

In this new Gallant-inspired world would we experience a return to design culture dominated by Paul Rand and Saul Bass types as the simplicity of form allows for higher fidelity telepathy? Would spies be trained to imagine highly illegible typography in order to mask their visual wanderings from some new brain-scanning CCTV? What is the nature of a “brain-ready” font license anyway? Maybe it’s time for some young intrepid typographers to get on the phone to places like Gallant Lab and start making the future legible!

Meet me in Montauk.


Since writing my original article nearly eight years ago, I’ve had the pleasure of meeting Jack Gallant in person — and very nearly the opportunity to work together on some interesting ideas. He is a fine fellow, and I think my ridiculous ramblings here may have amused him, which brings me some joy.


