The First Comprehensive and Systematic Study of Adversarial Attacks on Speaker Recognition Systems

Adversarial Attacks on Speaker Recognition (SR) Systems

Christopher Dossman
AI³ | Theory, Practice, Business
3 min read · Nov 11, 2019


This research summary is just one of many that are distributed weekly on the AI scholar newsletter. To start receiving the weekly newsletter, sign up here.

Speaker recognition is an automatic technique for identifying a person from speech that carries the speaker's voice characteristics. Today, speaker recognition systems (SRSs) are everywhere, from personal smart devices and biometric systems to home appliances and more. But there's a problem: the machine learning (ML) techniques that are the mainstream way of implementing SRSs are vulnerable to adversarial attacks.

That makes it mission-critical to understand the security implications of SRSs under adversarial attack. And while there has been progress on adversarial attacks against speech recognition systems, many of those attacks are ineffective against speaker recognition systems.

FAKEBOB: Adversarial Attacks on Speaker Recognition Systems

This new research investigates adversarial attacks on all three SRS tasks (speaker verification, close-set identification, and open-set identification) in the practical black-box setting, in an attempt to understand the security weaknesses of SRSs under adversarial attack in practice. The researchers propose an adversarial attack, named FAKEBOB, to craft adversarial samples.

Overview of a Typical Speaker Recognition System

They formulate adversarial sample generation as an optimization problem that incorporates both the confidence of the adversarial sample and a maximal distortion, balancing the strength of the attack against the imperceptibility of the adversarial voice. They demonstrate that FAKEBOB achieves a targeted attack success rate of close to 100% on both open-source and commercial systems.

Overview of FAKEBOB attack
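To make that formulation a bit more concrete, below is a minimal, hypothetical sketch of a score-based black-box attack of this kind: the loss only reaches zero once the target speaker's score exceeds an (estimated) acceptance threshold by a confidence margin, gradients are approximated purely from score queries because the system's internals are hidden, and every update is clipped to a maximal-distortion budget so the adversarial voice stays close to the original. This is not the authors' implementation; the stand-in score function and all names and constants (attack_loss, craft_adversarial, kappa, epsilon, step) are illustrative assumptions.

```python
import numpy as np

def score(voice):
    # Stand-in for the target SRS: returns the system's score for the
    # enrolled (target) speaker. In a real black-box attack this would be
    # a query to the deployed system; only the score is observable.
    return float(np.tanh(voice.mean()))

def attack_loss(voice, threshold, kappa):
    # Positive until the target speaker's score exceeds the (estimated)
    # acceptance threshold by a confidence margin kappa.
    return max(threshold + kappa - score(voice), 0.0)

def estimate_gradient(voice, threshold, kappa, sigma=1e-3, samples=20):
    # The system exposes no gradients, so they are approximated from
    # score queries using random-direction finite differences.
    grad = np.zeros_like(voice)
    for _ in range(samples):
        u = np.random.randn(*voice.shape).astype(voice.dtype)
        diff = (attack_loss(voice + sigma * u, threshold, kappa)
                - attack_loss(voice - sigma * u, threshold, kappa))
        grad += diff * u
    return grad / (2.0 * sigma * samples)

def craft_adversarial(voice, threshold, kappa=0.05, epsilon=0.002,
                      step=5e-4, max_iters=100):
    # Iteratively perturb the voice while keeping the distortion inside an
    # L-infinity ball of radius epsilon around the original sample.
    adv = voice.copy()
    for _ in range(max_iters):
        if attack_loss(adv, threshold, kappa) == 0.0:
            break  # accepted as the target speaker with margin kappa
        grad = estimate_gradient(adv, threshold, kappa)
        adv = adv - step * np.sign(grad)
        adv = np.clip(adv, voice - epsilon, voice + epsilon)  # distortion budget
        adv = np.clip(adv, -1.0, 1.0)                         # valid waveform range
    return adv

# Toy usage on a random "voice" (one second at 8 kHz). With this toy score
# function the loop typically just exhausts its distortion budget; the point
# is the structure of the attack, not the toy target.
original = np.random.uniform(-1.0, 1.0, size=8000).astype(np.float32)
adversarial = craft_adversarial(original, threshold=0.5)
print("max distortion:", float(np.abs(adversarial - original).max()))
```

A larger confidence margin makes the attack stronger, while a tighter distortion budget keeps the adversarial voice closer to the original; this is the strength-versus-imperceptibility trade-off described above.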

They further demonstrate that FAKEBOB remains effective on both open-source and commercial systems when the adversarial voices are played over the air in the physical world. Additionally, they conducted a human study, which revealed that it is hard for humans to differentiate the speakers of the original and adversarial voices.

Attack on Open-set Identification Systems

Potential Uses and Effects

What the researchers have done in this work is praiseworthy. FAKEBOB was evaluated on all three recognition tasks across 13 attack scenarios and achieved a targeted attack success rate of close to 100% on the systems tested.

Most importantly, this work also demonstrates that three promising defense methods against adversarial attacks, borrowed from the speech recognition domain, are ineffective at protecting SRSs against FAKEBOB. The findings on the security implications of adversarial attacks on SRSs thus call for more effective defense methods to better secure SRSs against such practical adversarial attacks.

Read more: Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems

Thanks for reading. Comment, share & let's connect on Twitter, LinkedIn, and Facebook. Stay updated with the latest AI research developments, news, resources, tools, and more by subscribing to our weekly AI Scholar Newsletter for free! Subscribe here. Remember to 👏 if you enjoyed this article. Cheers!
