Government’s AI Memo Overlooks Two Critical Issues

Published in Good Systems · TEXAS Grand Challenges · Mar 6, 2020

By Peter Stone, Ph.D.

[Image: Three artificial intelligence robots stand in a building at The University of Texas at Austin.]

Peter Stone is a founding member of Good Systems, a research grand challenge at The University of Texas at Austin that launched in September 2019. The growing team of researchers includes scholars from two dozen departments and units on the UT campus, all with one goal: designing AI technologies that benefit society. A condensed version of this post originally appeared as an op-ed in The Hill.

Artificial intelligence has long been fodder for the imagination, with countless novels and movies portraying its possible effects on society. Most of these portrayals have been extreme: either overly optimistic (e.g., The Jetsons) or terrifyingly negative (e.g., The Terminator). Extreme scenarios are all well and good within the context of fiction, but we’re now faced with the urgent need to consider reality, and the White House’s recent guiding principles for regulating AI are an important step toward confronting that reality. They encourage private-sector innovation while cautioning against a head-in-the-sand approach to technological development that could make dystopian fiction come to life.

However, they largely overlook two critical risks. First, AI technologies have the potential to dramatically increase the gap between the “haves” and the “have-nots” in this country. And second, without explicit international coordination, we could end up with a morass of contradictory regulations that unintentionally stifle innovation.

AI policy in the US has lagged far behind that of other countries, but over the past several months the situation has improved markedly, with the 2019 update to the 2016 national AI research and development strategic plan and, especially, the 2020 memorandum on “Guidance for Regulation of Artificial Intelligence Applications.”


This memorandum, which lays out 10 “Principles for the Stewardship of AI Applications,” hits (almost) all the right points. As described by Michael Kratsios, the Chief Technology Officer of the United States, it advocates a “light-touch regulatory approach,” calling for regulation only when existing statutes are insufficient for a specifically identified purpose. It also articulates nicely (and appropriately) that the risks of regulation, including — but not limited to — potentially stifling innovation, ought to be carefully balanced against possible benefits.

In particular, it recognizes that there “is always likely to be at least some risk, including that associated with not knowing what is currently unknown.” Without minimizing the potential detrimental effects of AI technologies (and really any technologies), the memorandum rightly emphasizes that these risks may be counterbalanced by important benefits. Risks and benefits must be evaluated objectively, even if the benefits are harder to imagine or don’t make good storylines. For example, it’s much easier to imagine the jobs that a new technology will replace than the ones that it could create.

There are, however, two ways in which these principles fall short. First, there is insufficient attention to the economically divisive potential of AI technologies. Regulatory agencies ought to be cognizant of the danger that AI technologies could exacerbate societal inequalities. AI can make people more productive and efficient, but only for those with access to these technologies (along with the computational resources and large-scale data that fuel them). For example, if intelligent tutoring systems are available only in English, non-English speakers will be at a large disadvantage.


Agencies should also consider that an AI application could be deployed in a manner that yields anticompetitive effects, favoring incumbents at the expense of new market entrants, competitors, or upstream or downstream business partners. In my opinion, one of the biggest risks of AI technologies is that they widen the gap between the “haves” and the “have-nots” in this country to the point that peaceful society becomes unsustainable. Thus, regulatory agencies must urgently consider the effects of their actions (or inactions) on the long-term distribution of wealth, specifically with respect to AI technologies.

Second, while there is a brief section in the memorandum on “International Regulatory Cooperation,” the emphasis is on ensuring that “American companies are not disadvantaged by the United States’ regulatory regime.” It’s important to recognize that many AI applications are deployed globally, and inconsistencies across countries’ regulatory requirements can themselves cause barriers to innovation. Whenever high-level policy objectives are aligned across borders, agencies ought to be encouraged to do everything possible to align the details of those regulations as well.

Specifically, despite the lack of US regulations so far, US companies must comply with Europe’s General Data Protection Regulation (GDPR) if their products and services are to be available in Europe. Any new US regulations ought to be constructed so that they are easily compatible with policies from other countries, at least those with similar ideals. If we end up with an inconsistent morass of international regulations, complying with them will place a particular burden on small companies and stifle innovation, which is clearly counter to the objectives of the memorandum.

Despite these two oversights, the memorandum has many redeeming qualities. It hits on other important themes, such as promoting public trust in AI applications by emphasizing fairness, non-discrimination, disclosure, transparency, safety, and security. It’s written in a way that fully recognizes that AI applications are rapidly changing, that they affect different sectors differently, that they fundamentally differ from other types of technologies, and that they rely on access to data. I was particularly encouraged to see an acknowledgement of the potential value of non-regulatory steps, such as sponsoring pilot programs to build knowledge of AI applications among policymakers, as well as encouraging and participating in the creation of voluntary consensus standards. Such standards must be monitored for adherence and sufficiency of scope, but they’re an important part of the standards ecosystem that ought to be actively coordinated with government actions.

Overall, the recently released memorandum is a fantastic step forward in providing appropriate guidance to governmental agencies and in codifying a national AI policy in the US. Especially if it is updated to further emphasize the risks of increased societal inequality and international miscoordination, I look forward to follow-up actions that are consistent with the memorandum’s recommendations.

Please join us on this journey.

Good Systems is a research grand challenge at The University of Texas at Austin. We’re a team of information and computer scientists, robotics experts, engineers, humanists and philosophers, policy and communication scholars, architects, and designers. Our goal over the next eight years is to design AI technologies that benefit society. Follow us on Twitter, join us at our events, and come back to our blog for updates.

Peter Stone, Ph.D., is the David Bruton Jr. Centennial Professor of Computer Science at The University of Texas at Austin. He is currently chair of the Standing Committee of the One Hundred Year Study on Artificial Intelligence (AI100) and chaired the first AI100 Study Panel, which in 2016 released its report on “AI and Life in 2030.” He is also an executive team member of the Good Systems grand challenge at UT. The opinions expressed here are his own and may not reflect those of any organizations with which he’s affiliated.