A Closer Look at AI policy

Our monthly analysis of policy and regulation around the globe

integrate.ai
the integrate.ai blog
5 min read · Jan 30, 2019


This post was originally sent as our monthly newsletter about trends in machine learning and artificial intelligence. If you’d like these analyses delivered directly to your inbox, subscribe here!

As with every previous major technical advancement, the broad deployment of machine learning has brought with it benefits as well as complications. Consider the rise of the automobile. It allowed for greater transportation efficiency and freedom, but it also dominated transportation planning for decades to the detriment of mass transit, and it led to highways being built through the centers of major cities. Technologies don’t come with manuals explaining how they can best be implemented and regulated. Even the word “best” here is problematic: open to debate, and unlikely ever to command complete consensus.

Artificial intelligence only further complicates the usual debates over technology. It’s gradually being integrated into just about every aspect of our lives, and in its future we envision a potential mirror of ourselves. Meanwhile, our ideas about how we want machine learning to function in society arguably remain in a far more primitive state than the machine learning field itself. As a result, beyond the din of continual research and engineering feats, questions of principles, policy, and regulation are becoming more and more pressing.

Here we’ll take a brief look at the current state of AI policy, how governments and companies are envisioning the social and political dimensions of machine learning, and what major issues are likely to grow in importance over the next few years.

Regulation and the Geopolitics of AI

A timeline of AI reports and strategies published by different countries in 2016 and 2017.

Back in 2016, the Obama administration published two reports on the future of AI. One of the key findings of these reports was that regulation had the potential to either help or hinder AI research and development, depending on the cost of compliance vs. the potential benefits of greater transparency. Recently, the Trump administration indicated that it will be updating the strategy laid out in the previous administration’s reports.

Of course, quite a lot has happened in the last couple of years on the AI front. GDPR came into effect, leading to a recent €50 million fine for Google due to its data practices in France. Deepfakes became a legitimate disinformation threat, indicating the degree to which AI can now alter our visual reality online. Facial recognition gained much broader usage, bringing with it attendant racial and gender biases. Meanwhile, a recent report suggests that Americans are broadly concerned about future applications of AI and expect substantial technical leaps in the field within the next decade. In contrast to Europe, however, the US still has no definitive policies or regulations in place regarding AI.

A diagram from a recent study looking at US attitudes toward AI. The plot shows aggregate responses to how likely different issues are to affect a substantial portion of the US population.

While Canada was one of the first countries to announce a national AI strategy back in 2017, this strategy is largely focused on enhancing research and talent. More recently, we’ve been working with the Canadian CIO Strategy Council to help craft standardized policies for the ethical implementation of AI in Canada. Meanwhile, only a handful of countries have passed legislation or set down clear guidelines for regulating AI. Europe, of course, has GDPR, but thanks to its limited scope and focus on data, it only addresses a small segment of the broader issues facing the field.

Arguably the most comprehensive policy, at least from a long-term planning point of view, belongs to China. Though China’s original development plan primarily set down goals for research, education, and talent development, it also included nods to ethics, regulation, and safety. Interestingly, increased regulation may accelerate corporate adoption of AI, not hinder it. The PwC Global CEO survey, released days ago in Davos, shows that 25 percent of Chinese CEOs have implemented AI broadly across their organizations, compared to just 5 percent of US CEOs.

Envisioning a more global approach, a paper published last year recommends forming an international agency to regulate AI. Just this month, policymakers from the Organisation for Economic Co-operation and Development (representing 36 nations) convened at MIT to begin the process of determining general recommendations for future governance and regulation.

Emergent Principles

While national AI strategies remain at varying degrees of development, it’s increasingly becoming de rigueur for companies to lay out their AI principles. Google, Microsoft, SAP, and Unity have all done so recently. Common across all these principles is an emphasis on fairness and privacy. Interestingly, Google just released an update to their previous principles, in which they indicate that they’ve now implemented a formal review process for new projects that includes attempts to forecast the best and worst potential outcomes should the project become an actual product.

Indeed a recent paper surveying the ways in which AI governance can affect more ethical decision-making proposes that, “the AI research community largely agrees that generalized frameworks are preferred over ad-hoc rules.” Having systematic frameworks in place before pursuing R&D of course makes enormous sense, but another recent paper also argues that models themselves should have a similar logic baked into them. In effect, the paper argues that models should be fundamentally interpretable from the start rather than being explained after the fact (see a previous newsletter for a discussion of ante-hoc vs. post-hoc explainability techniques).

AI Policy’s Murky Crystal Ball

There’s a lot of mystery surrounding the future of AI, but arguably nearly as much mystery hangs around the future policies that will guide it. AI policy across the globe still remains largely inchoate, and the geopolitical issues involved further complicate the picture. Just this past November, the US Commerce Department indicated that it’s considering altering the export rules for a number of technologies, including AI (a move that would likely have strong consequences for companies within the US).

A chart from the AI Index 2018 Annual Report.

Beyond practical concerns, AI policy is also in some sense tasked with addressing the broader existential concerns of the public. It’s very easy to misinterpret the real dangers of AI while minor developments become overhyped. And yet the field’s growth rate is unlikely to abate anytime soon, public anxieties notwithstanding. As a recent report recommends, one of the major ways regulation can become more effective is simply by ceasing to treat AI as a monolithic entity, instead coming up with policies directed at domain-specific applications. Such a sector-focused approach might also help to separate fact from fiction by directing more attention to the immediate threats of machine learning rather than those that currently remain only hypothetical.

