Fair, Reliable, and Safe: California Can Lead the Way on AI Policy to Ensure Benefits for All

CITRIS Policy Lab
May 28, 2019

Brandie Nonnecke, Founding Director, CITRIS Policy Lab, UC Berkeley

Jessica Cussins Newman, Research Fellow, Center for Long-Term Cybersecurity, UC Berkeley

California State Capitol. Photo Credit: Brandie Nonnecke

Amidst claims that artificial intelligence (AI) will add $15 trillion to the global economy by 2030, governments around the world are devoting significant resources to support national R&D of the technology. At the same time, concerns about misuse and unintended consequences have prompted efforts such as the recent OECD AI Principles, now supported by more than 40 countries, and the US National Institute of Standards and Technology's (NIST) development of federal standards to support reliable, robust, and trustworthy AI systems. Such efforts at the international and national levels provide critical overarching guidelines, but arguably the vanguard of AI policy is taking place more locally.

In particular, California has recently positioned itself as a leader in responsible AI governance. This bodes well for the state, which is now the 5th largest economy in the world and home to many of the leading AI companies, but its influence is also likely to spread much further. We could see the realization of the so-called “California effect” — whereby laws are first adopted by the state, but then spread outwards, in part because unified regulatory environments can be beneficial for industry.

Although many AI bills have been introduced at the federal level in the US, including the AI Jobs Act, the Algorithmic Accountability Act, the FUTURE of AI Act, and the AI in Government Act, none have passed either the House or the Senate. This is not atypical; fewer than 4% of bills introduced in Congress become law. In contrast, states often have more nimble regulatory processes and greater potential for tangible impacts.

In recognition of the growing importance and potential of California in this space, the UC Berkeley Center for Long-Term Cybersecurity, the CITRIS Policy Lab, and the Future of Life Institute organized a briefing on AI governance at the California State Capitol in early April. The briefing emphasized the importance of establishing agile governance processes that can scale in response to more complex AI systems; supporting accountability in government use of AI, including public documentation and independent auditing; and developing government procurement standards that will better ensure California residents reap the many benefits of these technologies while protecting them from unintended harms.

These recommendations are timely because government use of these technologies is no longer hypothetical. AI-enabled systems are becoming ever more pervasive in core government institutions, informing decisions within our courthouses, schools, and welfare agencies. While these algorithms are intended to increase efficiency and effectiveness, a lack of rigorous standards and oversight can lead to error-prone results, biased outcomes, and serious privacy and security vulnerabilities.

For example, over a two-year period, an automated decision system utilized by the Michigan Unemployment Insurance Agency falsely flagged over 40,000 Michigan residents for unemployment insurance fraud, forcing many of the wrongfully accused to file for bankruptcy due to hefty fines and seizure of assets. A child abuse prediction algorithm employed in Pennsylvania was found to disproportionately flag low-income families regardless of the actual risk posed to the children.

One of the most contentious government uses of AI is facial recognition technology, and California has recently taken a strong stance on the issue. On May 14, San Francisco became the first major U.S. city to ban the use of facial recognition technology by city agencies and law enforcement. Although facial recognition technologies are already being sold to law enforcement around the country to help identify suspects, the technologies have been found to misclassify people of color, especially women. Amid concern over accuracy, bias, and infringement of privacy, the San Francisco ban is one of many steps being taken in California to establish standards and oversight over government use of AI.

In 2018, California passed the California Consumer Privacy Act, which establishes greater user control over the use and sale of personal data, and the Bot Disclosure Act, which makes it unlawful to use a bot to incentivize a commercial transaction or influence a vote in an election without disclosure. In September 2018, California passed a revolutionary resolution adopting the 23 Asilomar Principles as guiding values for the development of AI. The Principles are a collaborative effort initiated by AI researchers, legal scholars, and social scientists in Asilomar, California, in January 2017. Signatories include more than 3,800 business leaders and AI experts, such as Elon Musk, the late Stephen Hawking, and Stuart Russell.

Since adopting the Principles, at least nine additional AI-related bills have been introduced in the California Legislature, and are making their way through committees. Among them: AB-1281, a bill requiring businesses to publicly disclose use of facial recognition technology; SB-730, a bill to establish a commission on the future of work; SB-348, a bill to encourage the Governor to appoint an AI special advisor and develop a statewide AI strategic plan; AB-594, the California AI Act of 2020, which would develop a policy framework to manage the use of AI; and AB-976, a bill that would establish an AI in State Government Services Commission to gather input on how to use AI to improve state services.

An additional bill, AB-459, would require the AI in State Government Services Commission to oversee standards for government use of AI that would require accountability, prioritize safety and security, protect privacy, and monitor impacts. If it were to pass, one of the most important effects of AB-459 might be its influence on government procurement standards. Governments wield significant purchasing power, which often serves as a market-shaping force. Leveraging this power to shape AI development is a relatively low-cost way for governments to address market failures — such as AI safety — in emerging markets. If such specifications are required in order to receive a significant contract, developers are more likely to incorporate these features into their designs.

It seems increasingly clear that state governments will play an essential role in shaping the future of AI. Other US states are also taking important steps. Washington approved a Future of Work Task Force in March 2018 to navigate the impacts of automation on its workforce, and Vermont established an AI Task Force in May 2018 to make recommendations about government use of AI. Given its unique position as the home of so many leading AI companies and research labs, California has an opportunity and a responsibility to lead the way in establishing effective standards and oversight that ensure AI systems are developed and deployed for the benefit of all. If the "California effect" comes into play, the impact of this work will likely be felt well beyond the state's borders.

This article originally appeared on the Berkeley Blog.

The CITRIS Policy Lab, headquartered at CITRIS and the Banatao Institute at UC Berkeley, supports interdisciplinary research, education, and thought leadership to address core questions regarding the role of formal and informal regulation in promoting innovation and amplifying its positive effects on society.

The Center for Long-Term Cybersecurity at UC Berkeley is developing and shaping cybersecurity research and practice based on a long-term vision of the internet and its future.
