Testimony in Support of H.2701 and S.1876, Establishing a Commission in MA re: Use of AI in State Government

Christopher Bavitz
Berkman Klein Center Collection
Oct 15, 2019

The following testimony was shared with the Joint Committee on State Administration and Regulatory Oversight of the Massachusetts Legislature on October 1, 2019, in support of proposed state House and state Senate bills that would establish a commission to consider use of technical tools that incorporate AI, algorithms, and machine learning throughout state government in the Commonwealth.

Testimony in Support of H.2701 and S.1876
October 1, 2019

Oral Testimony Introduction

Good afternoon. My name is Christopher Bavitz. I am on the faculty at Harvard Law School and am one of the faculty co-directors of Harvard’s Berkman Klein Center for Internet & Society, where my colleagues and I have been involved over the past two years with a significant body of research relating to the ethics and governance of artificial intelligence, machine learning, and related technologies. I am here to testify in support of H.2701 and S.1876, which would establish a Commission to examine the use of automated decisionmaking and AI systems in government in Massachusetts, with an eye toward transparency and fairness. The written testimony I am submitting today is co-signed by a number of researchers and scholars. Those joining the written testimony do so as individuals; titles and affiliations are for identification purposes only. My oral testimony today is intended to briefly summarize aspects of the written testimony.

Written Testimony in Support of H.2701 and S.1876
October 1, 2019

Submitted on Behalf of:¹

Kendra Albert
Lecturer on Law, Harvard Law School
Clinical Instructor, Cyberlaw Clinic, Berkman Klein Center for Internet & Society

Amar Ashar
Assistant Research Director, Berkman Klein Center for Internet & Society

Christopher T. Bavitz
WilmerHale Clinical Professor of Law, Harvard Law School
Faculty Co-Director, Berkman Klein Center for Internet & Society
Managing Director, Cyberlaw Clinic

Ryan Budish
Assistant Research Director, Berkman Klein Center for Internet & Society

Jessica Fjeld
Lecturer on Law, Harvard Law School
Assistant Director, Cyberlaw Clinic, Berkman Klein Center for Internet & Society

Urs Gasser
Professor of Practice, Harvard Law School
Executive Director, Berkman Klein Center for Internet & Society

Adam Holland
Project Manager, Berkman Klein Center for Internet & Society

Mason Kortz
Clinical Instructor, Cyberlaw Clinic, Berkman Klein Center for Internet & Society

Adam Nagy
Project Coordinator, Berkman Klein Center for Internet & Society

Sarah Newman
Senior Researcher, Berkman Klein Center for Internet & Society

David O’Brien
Senior Researcher and Assistant Research Director for Privacy and Security,
Berkman Klein Center for Internet & Society

Hilary Ross
Project Manager, Berkman Klein Center for Internet & Society

Carolyn Schmitt
Communications Associate, Berkman Klein Center for Internet & Society

Bruce Schneier
Fellow, Berkman Klein Center for Internet & Society

Overview

This testimony supports passage by the Massachusetts Legislature of H.2701 and S.1876, “An Act Establishing a Commission on Automated Decision-Making, Artificial Intelligence, Transparency, Fairness, and Individual Rights” and “An Act Establishing a Commission on Transparency and Use of Artificial Intelligence in Government Decision-Making,” respectively.² The bills are an important — indeed, necessary — step toward ensuring the due process and related rights of citizens of the Commonwealth are adequately protected as more and more technologies that incorporate artificial intelligence (“AI”), algorithms, and machine learning come to inform or serve as the basis for government decisions in Massachusetts.

This initial step of creating a Commission and tasking it with conducting a survey of the use of these tools throughout state government is vital as we seek to:

a. determine whether or not these tools may be used by state actors in various circumstances;

b. in cases where such use is permitted, establish standards to govern the procurement, implementation, and long-term testing and evaluation of these tools; and

c. ensure those impacted by algorithmic decisions understand those decisions and have adequate means to interrogate or challenge them.

Artificial Intelligence, Machine Learning, and Algorithmic Decisionmaking

The language of H.2701 and S.1876 is broad and captures an array of applications that incorporate algorithmic decisionmaking, machine learning, artificial intelligence, and other similar technologies. Much to the chagrin of many in computer science, it is not uncommon to see these terms conflated.³ For purposes of the Commission envisioned in H.2701 and S.1876, however, many of these technologies share common characteristics that support treating them similarly. They often operate in ways that are opaque to outside observers (and sometimes even to their users and developers); their functions may be difficult to interpret, explain, or evaluate; they are technically complex; and they focus on rigorous applications of rules and identification of correlations among individual data points and patterns that emerge when one considers those points together.

Potential Benefits of These Technologies

The technologies and applications addressed by H.2701 and S.1876 are not inherently bad. The ability to use computers to process data and identify connections at a scale beyond the capacity of the human mind may open up extraordinary possibilities in fields from transportation and mobility⁴ to environmental protection⁵ to medical diagnosis.⁶ There is no question that state and federal regulators considering these technologies should seek ways to encourage their development in appropriate circumstances while ensuring such development proceeds in a manner that is rights-respecting and consistent with democratic norms of transparency and accountability.

Concerns

That said, researchers and journalists have chronicled numerous examples of concerns raised by the development and deployment of these technologies and shown that a broad range of harmful consequences could come to pass when the values embedded in a particular technological system are not aligned with the values of the society it serves. The use of algorithmic tools in credit scoring creates the potential for discrimination.⁷ A system designed to filter resumes submitted by job applicants — trained on information about employees who have been successful in their work with the employer — may reinforce systemic inequalities in the current workforce.⁸ And, any use of artificial intelligence tools raises the specter of counter-efforts via adversarial attacks; a growing body of security research around such attacks shows that malicious actors may subvert the ways AI and related systems observe patterns and apply rules, with the intention of causing harm.⁹

These kinds of concerns are especially acute when AI, algorithms, and machine learning tools are used not by private actors but by government institutions. Citizens lack choices regarding their interactions with divisions of government empowered to provide services and confer benefits. The use by government actors of technical systems that perpetuate bias and discrimination, or that fail to offer adequate explanations, implicates constitutional and human rights.

Evidence of bias and other flaws in such systems abounds — from a system used to assess risk in the context of the criminal justice system,¹⁰ to a system used to determine eligibility for Medicaid and other government benefits,¹¹ to a system used to identify instances of unemployment insurance fraud.¹² Efforts to address the potential adverse impacts of these tools have taken a variety of forms — from proposals to ban certain technologies outright,¹³ to the promulgation of principles or standards to which technology developers should aspire,¹⁴ to civil litigation geared toward vindicating individuals’ rights.¹⁵

H.2701 and S.1876

The actual and potential problems with technologies that incorporate algorithms, machine learning, and AI strike us as precisely the kinds of problems that H.2701 and S.1876 seek to address. Of particular note:

  • Work done at the Berkman Klein Center to gather information about risk assessment tools used in the criminal justice system throughout the country¹⁶ has highlighted just how difficult it is to know when and where such tools are being used and the standards applied when they are deployed. The information-gathering and survey functions of H.2701 and S.1876 are key steps toward:

a. ensuring members of the public understand when algorithmic and other technical tools impact them in their interactions with government actors;

b. empowering citizens to be involved in government decisions about the procurement and use of such tools; and

c. ensuring that researchers and advocates for civil liberties and citizens’ rights can access such tools and bring their expertise to bear in evaluating them.

  • The focus, in Section 11(v) of the proposed bills, on “matters related to the transparency, explicability, auditability, and accountability of automated decision systems” reflects the importance of understanding the interplay between technical and legal notions of “interpretability” and “explainability” of decisions aided by automated tools.¹⁷
  • As noted above, the range of responses to the uses of these technologies includes everything from outright bans to the development of standards or rules to govern their use. In cases where the latter approach is deemed appropriate, the bills authorize the Commission to collect the types of data we would expect to inform decisions about the development of such standards or rules.¹⁸

H.2701 and S.1876: Further Considerations

One might critique the bills for their breadth along a couple of dimensions. First, they aim to address a wide variety of technologies. Second, they aim to address uses of these technologies across sectors — seemingly considering risk assessment tools in the criminal justice system alongside applications that evaluate eligibility for government benefits. It is true that different types of technologies used in different substantive sectors might warrant different approaches to risk avoidance or risk mitigation.¹⁹ But, this breadth seems to us to be a feature and not a bug at the early, investigative stage of work envisioned to be within the Commission’s mandate.

One might also critique the bills for the way in which they cabin the composition of the Commission, focusing on two named academic institutions and a handful of named non-governmental organizations in addition to representatives from the government of the Commonwealth. It is clear to us that the most vexing problems associated with the deployment of technologies that incorporate AI, algorithms, and machine learning by government actors demand solutions that:

a. draw on a broad range of experts from a wide variety of technical, legal, and other disciplines; and

b. include extensive input from members of the communities impacted by such tools and applications.²⁰

Even assuming that the Commission’s membership should be local, relevant academic expertise exists at institutions and in disciplines beyond those described in the bills. And, although the bills go to great lengths to identify non-governmental organizations and other institutions that may offer relevant and necessary community representation, we encourage the Legislature to incorporate as many diverse voices and viewpoints as practically possible when dictating the makeup of the Commission (and to encourage each participating member to engage with its own community in the spirit of inclusion and belonging).

Finally, whether this should be addressed in the legislation creating the Commission or by the Commission itself as part of its information-gathering process, it is important that consideration be given to the fact that:

a. the kinds of decisionmaking tools at issue are often developed by private vendors and sold or licensed to government actors;

b. relevant information regarding the design or operation of these tools may be in the hands of those private developers; and

c. the terms of procurement contracts or other legal barriers may inhibit the Commission’s ability to gather information from those private developers and its mandate to report to the public.

We know that developers have, in some cases, expressed resistance to disclosure of information about their products and services in reliance on claims of trade secret or intellectual property protection.²¹ Section 11(xi) of the bills reflects an understanding of this important issue, empowering the Commission to examine “matters related to automated decision systems and intellectual property, such as the existence of non-disclosure agreements, trade secrets claims, and other proprietary interests, and the impacts of intellectual property considerations on transparency, explicability, auditability, accountability, and due process.” But, the Commission will need to recognize that these kinds of considerations are not merely subjects of the Commission’s work but may impact (or impede) the Commission in carrying out its mandate in the first place.

We urge the Legislature to give the Commission as much authority as possible to respond forcefully to these kinds of arguments from private companies. And, in cases where companies are successful in shielding needed documentation from scrutiny, we urge the Legislature to empower the Commission to document and report on such instances in an effort to inform future legislative efforts.

Conclusion

In conclusion, we applaud the Legislature for its consideration of these important bills. We support their passage and are enthusiastic about supporting the work of this Commission if and when H.2701 and S.1876 become law.

¹ Those joining this written testimony do so in their individual capacities; titles and affiliations are for identification purposes only.

² Bill H.2701, “An Act Establishing a Commission on Automated Decision-Making, Artificial Intelligence, Transparency, Fairness, and Individual Rights,” Mass. Leg. 191st Gen. Court, available at https://malegislature.gov/Bills/191/H2701; Bill S.1876, “An Act Establishing a Commission on Transparency and Use of Artificial Intelligence in Government Decision-Making,” Mass. Leg. 191st Gen. Court, available at https://malegislature.gov/Bills/191/s1876.

³ For the sake of precision, it may be worth noting that an algorithm is “a set of precise (i.e., unambiguous) rules that specify how to solve some problem or perform some task.” The Linux Information Project, “Algorithms: A Very Brief Introduction,” available at http://www.linfo.org/algorithm.html. Machine learning “extracts patterns from unlabeled data (unsupervised learning) or efficiently categorizes data according to pre-existing definitions embodied in a labeled data set (supervised learning).” Matt Chessen, “What is Artificial Intelligence? Definitions for policy-makers and non-technical enthusiasts,” Medium (April 3, 2017), available at https://medium.com/artificial-intelligence-policy-laws-and-ethics/what-is-artificial-intelligence-definitions-for-policy-makers-and-laymen-826fd3e9da3b (“What you really need to know is that machine learning allows computers to learn without being explicitly programmed.”). And, artificial intelligence encompasses “a rich set of subdisciplines, methods, and tools that bring together areas such as speech recognition, computer vision, machine translation, reasoning, attention and memory, robotics and control, etc.,” all characterized by a “degree of autonomy of such systems that impact human behavior and evolve dynamically in ways that are at times even surprising to their developers.” Urs Gasser, “AI and the Law: Setting the Stage,” Medium (June 26, 2017), available at https://medium.com/berkman-klein-center/ai-and-the-law-setting-the-stage-48516fda1b11.

⁴ Ashley Halsey III, “Driverless cars promise far greater mobility for the elderly and people with disabilities,” The Washington Post (November 23, 2017), available at https://www.washingtonpost.com/local/trafficandcommuting/driverless-cars-promise-far-greater-mobility-for-the-elderly-and-people-with-disabilities/2017/11/23/6994469c-c4a3-11e7-84bc-5e285c7f4512_story.html.

⁵ Devin Coldewey, “Rainforest Connection enlists machine learning to listen for loggers and jaguars in the Amazon,” Techcrunch (March 23, 2018), available at https://techcrunch.com/2018/03/23/rainforest-connection-enlists-machine-learning-to-listen-for-loggers-and-jaguars-in-the-amazon/.

⁶ See Nicola Davis, “AI equal with human experts in medical diagnosis, study finds,” The Guardian (September 24, 2019), available at https://www.theguardian.com/technology/2019/sep/24/ai-equal-with-human-experts-in-medical-diagnosis-study-finds.

⁷ Kaveh Waddell, “How Algorithms Can Bring Down Minorities’ Credit Scores,” The Atlantic (December 2, 2016), available at https://www.theatlantic.com/technology/archive/2016/12/how-algorithms-can-bring-down-minorities-credit-scores/509333/.

⁸ Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters (October 9, 2018), available at https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.

⁹ Samuel G. Finlayson, John D. Bowers, Joichi Ito, Jonathan L. Zittrain, Andrew L. Beam, Isaac S. Kohane, “Adversarial attacks on medical machine learning,” Science Policy Forum (March 22, 2019), available at https://science.sciencemag.org/content/363/6433/1287. Adversarial attacks on AI systems can be very difficult to anticipate and use such fundamentally new and different techniques (compared with other forms of cyberattacks) that existing bodies of state and federal law may be ill-equipped to address them. See Ram Shankar Siva Kumar, David R. O’Brien, Kendra Albert, and Salome Viljoen, “Law and Adversarial Machine Learning,” arXiv (October 25, 2018), available at https://arxiv.org/abs/1810.10731.

¹⁰ Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias,” Pro Publica (May 23, 2016), available at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

¹¹ Jay Stanley, “Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case,” ACLU (June 2, 2017), available at https://www.aclu.org/blog/privacy-technology/pitfalls-artificial-intelligence-decisionmaking-highlighted-idaho-aclu-case.

¹² Emily Lawler, “Michigan Supreme Court ruling lets lawsuit against state in unemployment fraud scandal move forward,” MLive.com (April 5, 2019), available at https://www.mlive.com/news/2019/04/michigan-supreme-court-ruling-lets-lawsuit-against-state-in-unemployment-fraud-scandal-move-forward.html.

¹³ Kate Conger, Richard Fausset, and Serge F. Kovaleski, “San Francisco Bans Facial Recognition Technology,” The New York Times (May 14, 2019), available at https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html.

¹⁴ Jessica Fjeld, Hannah Hilligoss, Nele Achten, Maia Levy Daniel, Sally Kagay, and Joshua Feldman, “Principled Artificial Intelligence: A Map of Ethical and Rights-Based Approaches,” Berkman Klein Center for Internet & Society (June 2019), available at https://ai-hr.cyber.harvard.edu/images/primp-viz.pdf.

¹⁵ See generally, Rashida Richardson, Jason M. Schultz, and Vincent M. Southerland, “Litigating Algorithms 2019 US Report: New Challenges to Government Use of Algorithmic Decision Systems,” AI Now (September 2019), available at https://ainowinstitute.org/litigatingalgorithms-2019-us.pdf.

¹⁶ Christopher Bavitz and Kira Hessekiel, “Algorithms and Justice: Examining the Role of the State in the Development and Deployment of Algorithmic Technologies,” Berkman Klein Center for Internet & Society (July 11, 2018), available at https://cyber.harvard.edu/story/2018-07/algorithms-and-justice.

¹⁷ Finale Doshi-Velez, Mason Kortz, Ryan Budish, Chris Bavitz, Sam Gershman, David O’Brien, Stuart Schieber, James Waldo, David Weinberger, and Alexandra Wood, “Accountability of AI Under the Law: The Role of Explanation,” arXiv (last revised November 21, 2017), available at https://arxiv.org/abs/1711.01134.

¹⁸ Chelsea Barabas, Christopher Bavitz, Ryan Budish, Karthik Dinakar, Cynthia Dwork, Urs Gasser, Kira Hessekiel, Joichi Ito, Ronald L. Rivest, Madars Virza, and Jonathan Zittrain, “An Open Letter to the Members of the Massachusetts Legislature Regarding the Adoption of Actuarial Risk Assessment Tools in the Criminal Justice System” (November 9, 2017), available at https://medium.com/berkman-klein-center/the-following-letter-signed-by-harvard-and-mit-based-faculty-staff-and-researchers-chelsea-7a0cf3e925e9.

¹⁹ Gretchen Greene, “Potholes, Rats and Criminals: A Framework for AI Ethical Risk,” HKS Ash Center Data-Smart City Solutions (April 19, 2018), available at https://datasmart.ash.harvard.edu/news/article/potholes-rats-and-criminals.

²⁰ See generally, Jenna Sherman, “Embracing AI for the Social Good,” Medium (December 14, 2018), available at https://medium.com/berkman-klein-center/embracing-ai-for-the-social-good-a83521ddae76.

²¹ See Rebecca Wexler, “Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System,” 70 Stanford Law Review 1343 (2018), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2920883.

Christopher Bavitz

WilmerHale Clinical Professor of Law, Harvard Law School; Managing Director, Cyberlaw Clinic; Faculty Co-Director, Berkman Klein Center for Internet & Society.