How to Give Voice to the Speechless

Listen to, and translate, their brainwaves

The Economist
Apr 26, 2019

Of the many memorable things about Stephen Hawking, perhaps the most memorable of all was his conversation. The amyotrophic lateral sclerosis that confined him to a wheelchair also stopped him talking, so instead a computer synthesised what became a world-famous voice.

It was, though, a laborious process. Hawking had to twitch a muscle in his cheek to control a computer that helped him build up sentences, word by word. Others who have lost the ability to speak because of disease or a stroke can similarly use head or eye movements to control computer cursors to select letters and spell out words. But, at their best, users of these methods struggle to produce more than ten words a minute. That is far slower than the average rate of natural speech, around 150 words a minute.

A better way to communicate would be to read the brain of a paralysed person directly and then translate those readings into synthetic speech. And a study published in Nature this week, by Edward Chang, a neurosurgeon at the University of California, San Francisco, describes just such a technique. Speaking requires the precise control of almost 100 muscles in the lips, jaw, tongue and throat to produce the characteristic breaths and sounds that make up sentences. By measuring the brain…

