Apple’s Personal Voice speaks volumes about the pace of technological change: nobody’s surprised by anything anymore

Enrique Dans
2 min read · May 19, 2023


IMAGE: Apple’s new accessibility features, Live Speech and Personal Voice, designed for people who lose the ability to speak due to ALS or other conditions. On a smartphone’s screen, a phrase reads “Grabbing a cup of coffee this afternoon sounds great” above a sound wave and the word “Listening”
IMAGE: Modified from Apple

For the last few years I have been using examples of speech synthesis in my lectures and presentations: synthesized voices that sound like the real thing, deepfakes of myself or famous people, and many other examples. It’s an engaging way to illustrate the constant evolution of voice technology, as well as to start a conversation about the difficulties involved in regulating it.

Until now, creating a voice with certain characteristics, or one that sounded exactly like yours or someone else’s, was a relatively simple task: it involved recording just a few sentences, and there were already methods to clone a voice from a few seconds of almost any speech. Surprisingly, though, this had not attracted the attention of the general public.

Now, Apple has just updated its Assistive Access to include Personal Voice, a feature that allows users to replicate their own voice. It is designed in principle for people affected by amyotrophic lateral sclerosis (ALS) and similar conditions that lead to loss of speech, but it will undoubtedly be used by many non-sufferers. A simple and relatively quick process (15 minutes) involves reading a few sentences to train the algorithm, which then creates an assistant that reads typed text aloud in the user’s voice.


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)