The man who helped to preserve Stephen Hawking’s iconic voice
Self-confessed geek and computer officer Peter Benie began programming computers when he was seven. A chance meeting in 2009 led to him playing a key part in the upgrading of the world’s most famous synthesised voice. The task involved thinking like a computer.
Our ability to recognise voices is extraordinary. And one of the most recognisable voices on the planet is Stephen Hawking’s. Shortly before he died, Stephen approved and used a new version of his iconic voice that precisely replicated it. I was part of the team that developed this updated version by emulating the original hardware in software.
Stephen Hawking lost his ability to speak in the 1980s. From then on, his voice came from a speech synthesiser which he operated, by blinking, from a screen fitted to his chair. Advanced for its time, its robotic qualities became Stephen’s trademark. This was the voice we heard talking about black holes and cosmology.
Everything physical eventually wears out. After 30 years of use, the hardware was approaching the end of its life. Other, much more sophisticated speech synthesiser systems had been developed — but Stephen remained steadfastly loyal to what had become his authentic voice. He didn’t want it changed in any way.
My involvement was a matter of chance. In 2009 I met Stephen’s graduate student, Sam Blackburn, at the Centre for Mathematical Studies. I was an IT manager working for the Department of Engineering and Sam was Stephen’s Graduate Assistant. During a conversation about something mundane, Sam explained that the hardware Stephen used was failing.
When geeks get together, the conversation quickly turns to technology. Soon we were talking about how to create software to emulate the original voice system. This knotty problem is just the kind of puzzle I like solving. And I knew that in taking it on I’d have the bonus of working with some really bright people.
Modern synthesised voices sound more fluent. The hardware that made Stephen’s voice was at the limits of technology when it was new. It needed a microprocessor and a specialist digital signal processor (DSP) but was only just capable of producing the voice. Modern computers are orders of magnitude faster so they can run a much more sophisticated model of a voice.
The project to upgrade Stephen’s voice went far wider than Cambridge. We had help from some people in the USA, Eric Dorsey and Patti Price, who worked on the original voice system. They analysed the software voice and compared it against the original hardware to check that they really were the same. We also got a spare hardware board from its designer, Hari Vyas — it had been sitting in his garage gathering dust.
It was my task to develop the software emulator. That meant writing the thousands of lines of code needed to simulate the original system’s hardware. That’s not very much code, but we did it without schematics for the hardware or source code for the software.
All we had was the machine code — a form suitable for computers, not people. By careful examination, I figured out the overall program structure, then looked in detail at how it interacted with the outside world to infer how the hardware must work. To use a very simple analogy, it’s a bit like working out exactly how a cake was cooked in order to arrive at precisely the same taste and texture.
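At its core, an emulator treats someone else’s machine code as data to be interpreted, one instruction at a time. The sketch below shows that fetch-decode-execute idea with an invented toy instruction set — it is purely illustrative and bears no relation to the real synthesiser’s processor or DSP, whose details are not reproduced here.

```python
def run(program):
    """Interpret a list of (opcode, operand) pairs on a tiny toy machine."""
    memory = [0] * 16
    acc = 0            # accumulator register
    pc = 0             # program counter
    while pc < len(program):
        op, arg = program[pc]   # fetch and decode the next instruction
        pc += 1
        if op == "LOAD":        # acc <- immediate value
            acc = arg
        elif op == "ADD":       # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "STORE":     # memory[arg] <- acc
            memory[arg] = acc
        elif op == "JNZ":       # jump to arg if acc is non-zero
            if acc != 0:
                pc = arg
        elif op == "HALT":      # stop execution
            break
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return acc, memory

# Example: compute 3 + 2 by way of memory location 0.
program = [
    ("LOAD", 3), ("STORE", 0),   # memory[0] = 3
    ("LOAD", 2), ("ADD", 0),     # acc = 2 + memory[0]
    ("HALT", None),
]
acc, _ = run(program)
print(acc)  # → 5
```

The real job was this in reverse: starting from only the machine code, working out what each instruction must do and what hardware it must be talking to.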
The project was start-stop-start. One of the key members of the team, John Peatfield, sadly died in 2012. He worked on one part of the hardware — the program for the DSP.
In 2017, a brilliant student called Pawel Wozniak joined us and we were able to continue. He was an engineering undergraduate at Huddersfield University, doing an internship at Intel, and he got in touch to offer his electronics skills. His contribution was to take measurements from the hardware so we could verify that the software emulation matched the real thing.
I met Stephen only once. That was when we demonstrated the fully-working software voice. We had a prototype that sounded right-ish just before Christmas 2017, but we knew Stephen had very high standards and we wanted to be completely sure that he would accept the voice. We held off demonstrating it until it was exactly right.
In January this year, we demonstrated our latest version. We’d measured it and confirmed that it really was the same as the original voice. Stephen liked it — and, naturally, we were thrilled.
In March Stephen died and the world lost an incredible individual. He never got the chance to speak in public using his upgraded system but he used it right up until he died. I’m proud to be part of the team that preserved his voice. The voice is part of his identity — you wouldn’t use it for someone else — so effectively the voice died with him.
I was born 32 years after Stephen. When he was born, in 1942, computers were large electromechanical devices. When I was born, in 1974, the microprocessor revolution had just started.
Around the time that Stephen was losing his voice, I got my first computer. I was seven when my dad gave me a Sinclair ZX81, which had 1K of memory. Modern computers have over a million times as much memory. It was an inspired present.
When you turned it on, you got a prompt that invited you to program it. I was completely absorbed by it. School, on the other hand, was a waste of time. Maths at school was pages and pages of boring sums, and I couldn’t stand it.
During my childhood we moved house several times. I never liked moving but it did me a huge amount of good. Each time I started a new school the teachers assumed I could do things that I thought I couldn’t — and I invariably found that I could. I just needed the teachers to believe in me.
My interest in maths started when I was on holiday. I must have been about 11 when I found a textbook on trigonometry in among some comics at my grandparents’ house. It clicked with me. One of my teachers figured out I knew more than I was letting on and gave me extra tuition. I got a place at Downing College to read maths.
My undergraduate years weren’t especially happy. Cambridge colleges are sociable places and at that age I wasn’t good at mixing with people. My academic grades slipped because I was spending too much time with computers and became isolated from my peers.
The world’s not designed for people like me. If you’re good with people, but not good at maths or English or geography, there’s an entire education system designed to bring your weaker skills up to standard. If you’re good academically, but not good with people, you’re left to figure it out yourself. Which you can do, but it takes longer that way.
I’ve remained in academia with one spell in the commercial sector. My job as computer officer in the Department of Engineering involves me in a constantly shifting range of challenging projects and I work with many brilliant people. A lot of the people here think in the same way that I do, so I fit in quite well.
Becoming better at something is a question of focus. That and the number of hours of practice you put in. I’ve got a very technical brain. I’ve got the ability to get absorbed in something and cut off from everything around me. I’m hopeless at remembering people’s names, but I can recall computer programs I wrote years ago.
Recently I’ve begun to play the piano again. It’s a wonderful way of putting everything else out of your head. I’ve bought a really nice piano. People say it’s the pianist that matters not the piano. That’s rubbish — a good piano makes you want to play it, and that lets you put the hours in to become good at it.
This profile is part of our This Cambridge Life series.