The time has come to ban robocalls using AI-generated voices

Enrique Dans
Feb 2, 2024 · 2 min read


IMAGE: A text in black reading “Phone scams” with a forbidden sign in red on top
IMAGE: Paco Silva — Pixabay

In response to the growing use of celebrities’ voices in robocalls, the US Federal Communications Commission (FCC) is set to ban the use of generative artificial intelligence for voice synthesis in telephone marketing campaigns.

A week ago, a campaign of fraudulent calls using a synthesized voice of Joe Biden tried to convince between 5,000 and 25,000 voters in New Hampshire not to vote in the north-eastern state’s primaries. The origin of the campaign is unknown, but the audio clips of the president’s voice were an algorithmically generated deepfake.

Cloning a voice from a few samples is child’s play. From the first algorithms, which required several minutes of reading specific sentences, we have moved on to systems that need only a few seconds of speech to copy a voice. Since it is usually extremely easy to obtain recordings of public figures speaking, many campaigns have used their voices as a way to generate trust in the person on the other end of the line. Combine that technology with generative algorithms and you get reasonably convincing conversational bots.

In some cases, these calls using cloned voices are being used to trick people into thinking that a family member is in trouble and needs money. Sometimes the intention is to mislead…


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)