Whisper: Reimagining Hearing

Mike Vernal
Sequoia Capital Publication
3 min read · Oct 15, 2020
Whisper co-founders, Shlomo Zippel and Dwight Crow

The past decade has seen a revolution in the ways that computers perceive the world. Computer vision is reshaping industries like retail and transportation. For many tasks, computers are already better than humans at “seeing” and only getting better.

What about hearing?

As it turns out, computers are pretty good at that too (and getting better very quickly). That is the core insight behind Whisper, which is launching the Whisper Hearing System today.

In early 2017, we reconnected with Dwight Crow, who had been an early engineer at Arista Networks, a YC founder and then a PM at Facebook. He was working out of his bedroom on a new project: a total mess of computer parts, PCBs and soldering equipment.

After studying recent advances in machine learning, he realized that there was just as much opportunity in the audio domain as there was in the visual. He played us a noisy, unintelligible clip of people talking in a restaurant. He then passed the same clip through an AI-based filter that clarified the voices and removed the background noise. The improvement was amazing.

While the easier path might have been to apply this technology to voice assistants or transcription tools, Dwight instead went after a less-obvious but much more meaningful opportunity — hearing aids.

The sad truth is that most hearing aids are disappointing. My father has owned a hearing aid for a decade (and probably needed one for twice that time), but he almost never wears it. It’s loud and uncomfortable. He constantly faces two terrible choices: keep it in and be fatigued by loud background noise, or turn it off and catch only snippets of our conversations. He usually picks the latter.

The core problem is two-fold. First, most hearing aids are like high-end sound systems with a fancy equalizer. An audiologist can help discover where your hearing is impaired and tune the hearing aid accordingly, but that tuning amplifies every sound in those frequencies. Unfortunately, an air conditioning unit sounds a lot like the “sh” phoneme, meaning that most hearing aids will make your A/C maddeningly loud.

Second, hearing aids are static. Once they are fitted and sold, they remain the same forever. They don’t learn about you and your surroundings. They don’t get better every week like modern software and hardware. They stay the same until you buy a new, static device a few years later.

Whisper takes a completely different approach. Instead of only focusing on frequencies, the Whisper system analyzes sounds (voices, music, background noise, etc.) and amplifies the sounds you want to hear while suppressing others. Using a chip with 1,000 times the power of your typical hearing aid and a connection to your phone, the Whisper Hearing System receives continual software updates to get better and better over time.
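To make the contrast concrete, here is a minimal sketch (in Python with NumPy) of the two ideas described above: a fixed per-frequency gain, as in a traditional equalizer-style hearing aid, versus a sound-aware, time-varying mask of the kind a machine-learning model might produce. This is purely illustrative and not Whisper’s actual algorithm; `predict_speech_mask` is a hypothetical stand-in for a trained model, faked here with a crude energy heuristic.

```python
# Illustrative sketch only -- not Whisper's actual algorithm.
import numpy as np

FRAME = 512   # samples per analysis frame
HOP = 256     # hop size between frames

def stft(x):
    """Split a mono signal into overlapping, windowed FFT frames."""
    window = np.hanning(FRAME)
    frames = [np.fft.rfft(window * x[i:i + FRAME])
              for i in range(0, len(x) - FRAME, HOP)]
    return np.array(frames)                  # shape: (frames, bins)

def equalizer_style(spec, band_gain):
    """Traditional approach: one fixed gain per frequency bin.
    Everything in a boosted bin gets louder -- speech and A/C alike."""
    return spec * band_gain                   # band_gain: (bins,)

def predict_speech_mask(spec):
    """Hypothetical stand-in for a learned model: estimate, per
    time-frequency cell, how speech-like the energy is (0 = noise, 1 = speech).
    Faked here with a simple energy heuristic purely for illustration."""
    mag = np.abs(spec)
    return mag / (mag + np.median(mag, axis=0, keepdims=True) + 1e-9)

def sound_aware_style(spec):
    """Simplified version of the sound-aware idea: boost cells dominated
    by the sound you want and suppress the rest, instead of boosting
    whole frequency bands."""
    mask = predict_speech_mask(spec)
    return spec * (0.2 + 1.8 * mask)          # quiet the noise, lift the speech
```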

Over the past three years, Andrew Song, Shlomo Zippel and Dwight have assembled a world-class team from Apple, Facebook, Nest, startups and industry incumbents to rethink the way hearing aids work. Building a robust embedded system that runs state-of-the-art machine learning, in a mature industry, is not an easy task. The team has done an incredible job.

It’s truly rare to find an exceptional team going after a great market with a meaningful mission. We are thrilled to be working with this team. Congratulations on the launch!
