The International Project of AI Governance

Terah Lyons
Read, Write, Participate
Oct 24, 2017 · 4 min read

In the spring of 2016, the Obama Administration announced the White House Future of Artificial Intelligence Initiative: a series of public engagement activities, policy analyses, and expert convenings led by the Office of Science and Technology Policy to explore the impacts of artificial intelligence, or AI, a field broadly concerned with machines that exhibit behavior commonly thought of as intelligent.

The Future of AI Initiative marked the first significant, public-facing effort by the U.S. government to comprehensively investigate the societal and policy implications of machine intelligence and AI-driven automation. The Obama White House focused on identifying and prioritizing issues for consideration with regard to AI in law and governance; technology, safety, and control; the economic and social implications of AI advancement; and the positive applications and societal benefits of AI technologies.

Our teams published three substantial public policy reports on AI in 2016 — on the future of AI and policy, on Federal research and development priorities and strategy, and on the impacts of AI-driven automation on the economy — and launched a national conversation about the current and future impacts of machine intelligence.

That AI will have dramatic impacts is now considered inevitable; it is still largely unclear, however, how the changes brought by this technological revolution will manifest. It remains uncertain whether AI's impacts in particular instances will be positive or negative, and in what ways poorly implemented AI, or AI trained on poor data, can produce discriminatory outcomes for different groups of people, including harm to already marginalized populations.

AI systems are widely deployed across most sectors, including healthcare, transportation, financial services, business, education, and public safety, where algorithms make decisions about everything from medical diagnoses to auto insurance premiums to criminal risk scores. Our ability to measure the consequences of these technologies, and the quality of the data we feed them, is still immature; more work is also required to improve these systems so that they address possible unintended consequences and minimize ancillary harm.

The next five to ten years represent a critical juncture in technical and policy advancement, and in technology governance in the AI field. The decisions that governments and the technical community make to steer the development and deployment of machine intelligence in one direction or another will shape how AI technology is created, and for whom it brings benefits or challenges once widely disseminated.

When I left government service at the end of March 2017, it was clear that our work on untangling these questions — in the research community, in government, and across industry — had only just begun. One meaningful determination we made as policymakers in the Obama Administration was that we needed more AI, not less, accompanied by greater attention to the challenges AI brings and a particular focus on areas where AI can make important contributions to productivity growth and societal welfare.

Just as important as our investment in AI technology is the way we go about advancing it: AI development must proactively identify and address risks and ethical dimensions hand-in-hand with technical progress, including matters of fairness, justice, competition, safety, security, and transparency.

Our legal and policy systems must also be equipped with new ways of handling the unique concerns raised by the rapid pace of development in AI and related technologies. And the workforce and expert community supporting and overseeing this growth must be diverse, inclusive, and multidisciplinary: it will take technologists of all stripes, alongside experts in the social sciences and humanities, to ensure the widely shared prosperity that safe, inclusive, human-driven AI can bring.

Mozilla is exploring these issues, and joining Mozilla as a Tech Policy Fellow has allowed me to continue, on a global scale, the work on AI governance that we started in the Obama Administration. Monitoring technical and policy progress, and conducting gap analyses that identify areas for action or matters meriting special consideration, coordination, or concern, are global projects that will require a cross-sector, cross-border effort over the coming years.

Last week, the Partnership on AI to Benefit People and Society announced that I will be joining as the organization’s inaugural Executive Director. The Partnership is working to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for debate about AI and its influence on people and society around the world. Much of this work will build on issues I have had the opportunity to explore and reflect on as a Mozilla Fellow. I’ll be transitioning out of my role as a Tech Policy Fellow at the end of this month, after co-facilitating a session on AI governance at MozFest in London that will explore some of these issues in depth alongside international colleagues from an array of disciplines.

Global engagement around the important questions raised by AI, including ethics, accountability, the future of work, and safety and control, will necessitate continued international collaboration and coordination across the public, private, civil society, and academic stakeholder communities as AI continues to embed itself in the fabric of our global society.

Now more than ever, it is important that thoughtful, collective consideration of governance be an integral part of responsible technological development, so that the future we build is one in which we can all participate and prosper.
