The US Artificial Intelligence Initiative
How do you regulate something smarter than you?
Having recently watched the livestream of the inaugural meeting of the National Artificial Intelligence Advisory Committee (NAIAC), I wanted to summarize a few highlights from the discussions. This essay is partly intended as a contribution to the AI research community: since the writer of one of the most widely circulated newsletters (Import AI) is participating as a member, I figured there might be a gap in coverage through the usual channels. The agenda was to formalize an advisory council to the president, organized by the National Institute of Standards and Technology (NIST). The meeting served as a kickoff, announcing the key working groups and their membership, which consists of volunteers from various segments of industry and academia. Briefly, those five working groups have a high-level focus as follows, shown here with the names of their respective chairs.
- Promoting Leadership in Trustworthy AI — Victoria Espinel
- Promoting Leadership in Research & Development — Ayanna Howard and Ashley Llorens
- Supporting the Workforce with Opportunities — Trooper Sanders
- US Leadership and Competitiveness — Yll Bajraktari
- International Cooperation — Zoë Baird
NAIAC Committee Chair: Miriam Vogel
Coupled with the announcements, the members were invited to introduce themselves, share input inspired by submitted comments and questions, or simply offer what drove them to volunteer and what they hoped the committee could accomplish. The highlights that follow are those comments consolidated into prose.
The United States is in some respects well ahead of other countries in the field of artificial intelligence. Although our research community doesn't lead in the number of publications, we do lead in the number of inbound citations, a much better measure of importance. Similarly, although our industry does not lead the world in the number of patent filings, we do lead in the number of patents granted. The hope is that by establishing this committee of experts, these domain insiders can propose means to help the country continue to lead the world in the development and use of artificial intelligence systems, to do so in both the public and private sectors, and in a manner aligned with the values of our democracy.
The committee will need to weigh interests across a wide range of potential impacts, including policy considerations for both citizens and industry. Part of the challenge will be to create recommendations that are specific in nature rather than paying lip service to talking points. Where are the specific lines we do not want to cross? Where would we be better off deferring to an inevitable march of progress?
The members of this committee are privileged in that they have insight into where things are heading and what might be in store. A good portion of the workforce has no such foresight. How do we educate them so that they can decide how to navigate a career? AI is likely to disrupt most segments of the economy in some fashion, displacing workers in high-wage and low-wage jobs alike. We need to prepare the workforce for what is coming. Through the fallout we need to retain principles of equity and inclusion, not just across racial and gender lines but across geographic lines too. Quality of life matters.
There is some hope to be found in what is taking place in mainstream academia. The number of students interested in AI and computer science has grown exponentially compared to twenty years ago. And these students are interested not just in applications, but in ethics too. However, a big hurdle leading to imbalanced student outcomes is that the scope of the research a student can attempt is often contingent on access to ever-growing amounts of computational power. We need to expand access to the resources that fuel AI. Actively participating in research plays a big role in establishing student competence, so lowering the bar to participation could increase the workforce at our disposal in the decades to come. Access to GPU clusters and supercomputers has so far been limited primarily to the largest corporations and best-funded universities. One relevant initiative underway, led by the National Science Foundation, aims to develop accessible computational capacity that students across universities can use to contribute; hopefully it will be one of many such channels promoted by this committee.
There is a certain urgency to the questions facing the committee. Increasingly powerful systems are being rolled out at an accelerating pace. AI is being deployed at scale and in many different ways. The risk for the public sector is falling behind to the point of being unable to participate in this environment. Consider that among graduating PhDs, a vastly disproportionate majority go into industry or remain in academia. Very few researchers go into public service. It is unrealistic to expect the government to provide leadership if it simply doesn't understand AI.
The policy outcomes of this committee won't be created in a vacuum; they take place in the context of global attention to the domain. Many countries have already put considerable effort into creating policy in this area, including initiatives to promote research in China and to develop a regulatory framework in Europe. Ideally any policy measures proposed by this committee would be aligned not just with our values, but harmonized with the initiatives of our allies and economic partners.
These questions need not be answered in isolation; in fact, in several cases a successful outcome for one focus will benefit our competitiveness in another. If we are known as the country that can be counted on for trustworthy and bias-free implementations, then surely our success in exporting will benefit as well. AI is not a technology in isolation; what matters is the intersection of AI and human systems. Human outcomes.
To align, we need a vision of how we can drive progress while adhering to our collective values as a society. Only after crystallizing this vision should we consider what to regulate.
I will close by noting some comments that I submitted to the committee prior to viewing the meeting, which were broadly intended as guidelines for establishing regulations in the arena of artificial intelligence. I hope these can be considered as high-level principles for how, if at all, AI systems should be subject to regulatory interventions. They were merely intended as a starting point.
- Regulate the applications, not the AI.
- Require opt-out clauses in domains lacking consumer choice.
- Prominent disclosure of an artificial agent in obscured interactions is a must (e.g. chatbots, deepfakes).
- Safety is the highest priority for intervention.
- Given how fast everything is moving, in most cases regulations should be framed as temporary interventions with built-in sunset provisions if not renewed.