Apple’s Personal Voice speaks volumes about the pace of technological change: nobody’s surprised by anything anymore
For the last few years I have been using examples of speech synthesis in my lectures and presentations: synthesized voices that sound like the real thing, deepfakes of myself or famous people, and so on. It’s an engaging way to illustrate the constant evolution of voice technology, as well as to start a conversation about the difficulties involved in regulating it.
Until now, creating a voice with certain characteristics, or one that sounded exactly like yours or someone else’s, was already a relatively simple task: recording a few sentences was enough (and there were already methods to clone a voice from just a few seconds of almost any speech). Surprisingly, though, none of this had attracted the attention of the general public.
Now, Apple has just updated its Assistive Access to include Personal Voice, a feature that allows users to replicate their own voice. It is designed in principle for people affected by amyotrophic lateral sclerosis (ALS) and similar conditions that lead to loss of speech, but it will undoubtedly also be used by many people who have no such condition. A simple and relatively quick process (about 15 minutes) involves reading a series of sentences to train the algorithm, which then creates an assistant that speaks typed text aloud…
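For developers, personal voices surface through the same speech-synthesis API as Apple’s built-in voices. Here is a minimal Swift sketch, assuming iOS 17 or later, that the user has already created a Personal Voice in the Accessibility settings, and that they grant authorization when asked; it simply speaks a typed sentence with that voice:

```swift
import AVFoundation

// Ask for permission to use the user's Personal Voice (iOS 17+).
// The completion handler receives the authorization status.
AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
    guard status == .authorized else {
        print("Personal Voice not authorized: \(status)")
        return
    }

    // Among all installed voices, pick the first one flagged as a
    // Personal Voice (i.e. one the user trained themselves).
    let personalVoice = AVSpeechSynthesisVoice.speechVoices()
        .first { $0.voiceTraits.contains(.isPersonalVoice) }

    // Build an utterance from typed text and assign the voice.
    // If no Personal Voice exists, the system default is used.
    let utterance = AVSpeechUtterance(string: "Hello, this is my own voice.")
    utterance.voice = personalVoice

    // In a real app, keep a strong reference to the synthesizer so it
    // is not deallocated before speech finishes.
    let synthesizer = AVSpeechSynthesizer()
    synthesizer.speak(utterance)
}
```

Notably, the app never sees the underlying voice model: it only receives an opaque voice object to hand to the synthesizer, which is part of how Apple keeps the cloned voice on the device and under the user’s control.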