Legal Risks of Adversarial Machine Learning Research

Ram Shankar Siva Kumar
Berkman Klein Center Collection
Jul 15, 2020

Adversarial machine learning (ML), the study of subverting ML systems, is moving at a rapid pace. Researchers have written more than 2,000 papers examining this phenomenon in the last six years. This research has real-world consequences. Researchers have used adversarial ML techniques to identify flaws in Facebook’s micro-targeting ad platform, expose vulnerabilities in Tesla’s self-driving cars, replicate ML models hosted by Microsoft, Google, and IBM, and evade anti-virus engines.
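
To make “attacking an ML system” concrete, here is a minimal sketch of one common class of attack, evasion, via the fast gradient sign method. This is illustrative only and is not taken from the paper or the services named above; the pretrained PyTorch classifier model, the batch of labeled inputs x and y, and the perturbation budget epsilon are hypothetical placeholders.

```python
# A minimal FGSM-style evasion sketch (illustrative only).
# Assumes a hypothetical pretrained PyTorch classifier `model`, a batch of
# image tensors `x` with values in [0, 1], and their true labels `y`.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return a copy of x nudged in the direction that increases the model's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, clipped back to the valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

Against a deployed service, a researcher typically has no gradient access and would instead rely on query-based, black-box variants of this idea, which is precisely the kind of probing whose legal status is at issue below.
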

Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary federal statute that creates liability for hacking. The broad scope of the CFAA has been heavily criticized, with security researchers among the most vocal. They argue the CFAA — with its rigid requirements and heavy penalties — has a chilling effect on security research. Adversarial ML security research is no different.

In a new paper, Jonathon Penney, Bruce Schneier, Kendra Albert, and I examine the potential legal risks to adversarial ML researchers when they attack ML systems, and the implications of the upcoming U.S. Supreme Court case Van Buren v. United States for the adversarial ML field. This work was published at the Law and Machine Learning Workshop held at the 2020 International Conference on Machine Learning (ICML).

In the paper, we consider two CFAA provisions particularly relevant to adversarial machine learning.

  • First, intentionally accessing a computer “without authorization” or in a way that “exceeds authorized access” and thereby obtaining “any information” from a “protected computer” (section 1030(a)(2)(C)). This landscape is particularly complex and confusing given the current circuit split over how the provision should be interpreted.
[Figure: CFAA interpretation by circuit court region]
  • Second, intentionally causing “damage” to a “protected computer” without authorization by “knowingly” transmitting a “program, information, code, or command” (section 1030(a)(5)(A)).

Takeaways from the paper

Is the adversarial ML researcher violating the CFAA when attacking an ML system? The answer varies depending on the nature of the adversarial ML attack and on where in the United States the lawsuit is brought.

  1. Using the example of an ML service whose rules are based on the Google APIs Terms of Service (TOS), we considered a range of adversarial ML attacks in light of the CFAA. Taking into account the circuit split on how section 1030(a)(2) should be interpreted, as well as section 1030(a)(5)(A), we show whether the researcher is committing a violation or not.
[Figure: Adversarial ML legal risks]

If the Supreme Court follows the Ninth Circuit’s narrow construction when it decides Van Buren, it will lead to better security outcomes for adversarial ML research in the long term.

2. If ML security researchers and industry actors cannot rely on expansive TOS provisions to deter certain forms of adversarial attacks, they will have a powerful incentive to develop more robust technological and code-based protections. And with a narrower construction of the CFAA, ML security researchers are more likely to conduct tests and other exploratory work on ML systems, again leading to better security in the long term.

Link to full paper: https://arxiv.org/abs/2006.16179

Ram Shankar Siva Kumar
Data Cowboy at Microsoft; Affiliate at Berkman Klein Center at Harvard