Politics of Adversarial Machine Learning

Adversarial machine-learning attacks and defenses have political dimensions

Ram Shankar Siva Kumar
Berkman Klein Center Collection
2 min read · Apr 23, 2020


In 2019, DARPA, the research arm of the US Department of Defense, announced a challenge to build robust ML defenses, saying, “We must ensure machine learning is safe and incapable of being deceived.” But DARPA never explicitly says: safe from whom? And deceived by whom?


Kendra Albert, Jonathon Penney, Bruce Schneier, and I explored these questions in a paper published at the Towards Trustworthy ML: Rethinking Security and Privacy for ML workshop at ICLR 2020.

Although it is common within the adversarial machine learning community to label anyone who interferes with the confidentiality, integrity, or availability of a system an “attacker,” this framing obscures the fact that those who resist such systems could just as easily be pro-democracy protesters, or academics evaluating the inclusiveness of training data, as malicious actors.

Here are the top takeaways from our paper:

1. Researchers must anticipate “desirable attacks” on machine learning (ML) systems. For instance, the same kind of perturbation attack that tricked Tesla’s Autopilot into swerving into the wrong lane also powers the EqualAIs project, which lets individuals make an image of themselves less likely to be detected as a face (see the sketch after this list).

To an ML system, an attacker motivated by a legitimate human rights or civil liberties concern and an attacker motivated to harm others look exactly the same.

2. The adversarial arms race may lead to the development and deployment of more invasive forms of surveillance, especially when efforts to harden ML systems against attacks proceed without proper attention to privacy concerns.

3. Drawing on lessons from regulation of the commercial spyware industry, we recommend that vendors and other ML-industry participants commit to “human-rights-by-design” principles and prohibit clients from reconfiguring or hardening ML systems to resist attacks in contexts where human rights or civil liberties are at risk (e.g., protesters resisting facial recognition technologies deployed by an authoritarian government).
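
To make the dual-use point in item 1 concrete, here is a minimal sketch of a gradient-based perturbation attack in the FGSM style (a standard technique, not the specific method used against Tesla’s Autopilot or by EqualAIs). The model, image, label, and epsilon below are placeholders for illustration; with a trained model and a real image, the same small nudge that degrades one classifier’s output can just as easily be used to evade a face detector.

```python
# Minimal FGSM-style perturbation sketch (PyTorch).
# All inputs here are placeholders; this is an illustration, not the
# method from the paper or from any deployed attack.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in classifier (random weights)
model.eval()

image = torch.rand(1, 3, 224, 224)     # placeholder input image
label = torch.tensor([0])              # placeholder "true" label
epsilon = 0.03                         # perturbation budget

# Compute the gradient of the loss with respect to the input pixels.
image.requires_grad_(True)
loss = F.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel a small step in the direction that increases the loss.
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("original prediction:   ", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

Whether the perturbed image counts as an “attack” or as self-defense depends entirely on who is applying it and why; the math is identical in both cases.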

Our Call to Action: The adversarial machine learning community is producing tools and scholarship on attacks and defenses at breakneck speed. We ask this community to learn from scholars of science and technology studies, anthropology, and critical race theory, as well as the human rights and ethics literature more generally, and to be in conversation with protesters, researchers, and others who seek to attack systems for socially beneficial reasons.

Link to paper: https://arxiv.org/abs/2002.05648


Data Cowboy at Microsoft; Affiliate at Berkman Klein Center at Harvard