Legal Risks of Adversarial Machine Learning Research

Adversarial machine learning (ML), the study of subverting ML systems, is moving at a rapid pace. Researchers have written more than 2,000 papers examining this phenomenon in the last six years. This research has real-world consequences: researchers have used adversarial ML techniques to identify flaws in Facebook's micro-targeting ad platform, expose vulnerabilities in Tesla's self-driving cars, replicate ML models hosted by Microsoft, Google, and IBM, and evade anti-virus engines.
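To give a concrete sense of what such an attack can look like, here is a minimal, self-contained sketch of the fast gradient sign method (FGSM), one of the simplest evasion techniques in this literature. The untrained linear model and random input are placeholders standing in for a real deployed classifier; this is an illustrative sketch, not the method used in any of the studies above.

    import torch
    import torch.nn as nn

    # Placeholder classifier; in the research above this would be a deployed
    # model such as an image classifier or an anti-virus engine's scorer.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    model.eval()

    def fgsm_perturb(x, label, epsilon=0.1):
        # Fast gradient sign method: nudge the input in the direction that
        # most increases the classifier's loss, bounded elementwise by epsilon.
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), label)
        loss.backward()
        return (x + epsilon * x.grad.sign()).detach()

    x = torch.rand(1, 1, 28, 28)       # placeholder "benign" input
    label = model(x).argmax(dim=1)     # the model's current prediction
    x_adv = fgsm_perturb(x, label)
    print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())

The perturbation is small enough to leave the input essentially unchanged to a human observer, yet it is chosen specifically to push the model toward a different output; attacks on production systems follow the same logic with far more engineering around them.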

Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary federal statute that creates liability for hacking. The broad scope of the CFAA has been heavily criticized, with security researchers among the most vocal. They argue the CFAA — with its rigid requirements and heavy penalties — has a chilling effect on security research. Adversarial ML security research is no different.

In a new paper, Jonathon Penney, Bruce Schneier, Kendra Albert, and I examine the potential legal risks to adversarial ML researchers when they attack ML systems, and the implications of the upcoming U.S. Supreme Court case Van Buren v. United States for the adversarial ML field. This work was published at the Law and Machine Learning Workshop held at the 2020 International Conference on Machine Learning (ICML).

In the paper, we consider two CFAA sections particularly relevant to adversarial machine learning.

  • First, intentionally accessing a computer “without authorization” or in a way that “exceeds authorized access,” and thereby obtaining “any information” from a “protected computer” (section 1030(a)(2)(C)). This landscape is particularly complex and confusing given the current circuit split over what “exceeds authorized access” means.
[Figure: CFAA interpretation by circuit court region]
  • Second, intentionally causing “damage” to a “protected computer” without authorization by “knowingly” transmitting a “program, information, code, or command” (section 1030(a)(5)(A)).

Takeaways from the paper

  1. Using the example of an ML service whose rules are based on Google API’s Terms of Service (TOS), we consider a range of adversarial ML attacks in light of the CFAA. Taking into account the circuit split over how section 1030(a)(2) should be interpreted, along with section 1030(a)(5)(A), we show whether the researcher behind each attack is likely committing a violation (for a concrete sense of one such attack, see the model extraction sketch after this list).
[Figure: Adversarial ML legal risks]

  2. If ML security researchers and industry actors cannot rely on expansive TOSs to deter certain forms of adversarial attacks, they will have a powerful incentive to develop more robust technological and code-based protections. And under a narrower construction of the CFAA, ML security researchers are more likely to conduct tests and other exploratory work on ML systems, again leading to better security in the long term.
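To make the TOS question concrete, below is a minimal sketch of query-based model extraction against a hosted prediction service. Everything here is an illustrative assumption rather than any vendor's real API: the endpoint URL, the JSON payload shape, and the response format are hypothetical, and the loop is deliberately simplified.

    import random

    import requests  # any HTTP client would do; used here for illustration

    # Hypothetical prediction endpoint; not a real service.
    API_URL = "https://ml-service.example.com/v1/predict"

    def query(features):
        """Send one input to the hosted model and return its predicted label.
        Expansive TOS provisions often cap query volume and forbid using
        outputs to train a competing copy of the model."""
        resp = requests.post(API_URL, json={"instances": [features]}, timeout=10)
        resp.raise_for_status()
        return resp.json()["predictions"][0]

    # Model extraction in outline: label synthetic inputs with the victim
    # model's own answers, then fit a local surrogate on those labels.
    synthetic_inputs = [[random.random() for _ in range(4)] for _ in range(1000)]
    stolen_labels = [query(x) for x in synthetic_inputs]
    # ...train a local surrogate model on (synthetic_inputs, stolen_labels)...

Whether running a loop like this against a live service “exceeds authorized access” under section 1030(a)(2) turns on which circuit's interpretation applies, which is precisely the uncertainty the paper maps.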

Link to full paper: https://arxiv.org/abs/2006.16179

Berkman Klein Center Collection

Insights from the Berkman Klein community about how technology affects our lives (Opinions expressed reflect the beliefs of individual authors and not the Berkman Klein Center as an institution.)

Written by Ram Shankar Siva Kumar, Data Cowboy at Microsoft and Affiliate at the Berkman Klein Center at Harvard.
