How Should We Regulate AI?

Machines + Society
5 min read · Jun 15, 2019


An analysis of the recent OpenAI proposal for a regulatory framework.

Written by Mako Shen and Bhavik Nagda.

[Image: Ai Weiwei, Untitled (Golden Handcuffs, Surveillance Cameras and Twitter Birds)]

On May 14, 2019, the city of San Francisco banned local government agencies’ use of facial recognition technology through an ordinance titled “Stop Secret Surveillance”. Other cities, such as Somerville, MA, and Oakland, CA, are set to vote on similar laws. Local governments are trying to limit the potential damage of a nascent technology; this is mostly a good thing. While the ordinance may seem restrictive, it’s difficult to deny that the conversation on regulation is an important one. The San Francisco government has effectively placed a hold on facial recognition until rules and best practices for it can be worked out. But what is the best way to regulate new technology? How feasible is it for a new bill to be passed each time a piece of software threatens society?

The healthcare industry offers a motivating example of how regulation lags behind technology. Many recent technological advances in healthcare are hindered by red tape and regulation. Although we have algorithms that outperform doctors on certain diagnostic tasks, we have yet to see machines diagnosing patients on their own. This is in large part because we don’t have a good regulatory framework for handling the algorithm when it fails. Who is responsible when an algorithm diagnoses your headache as brain cancer: the doctor who acts on the result, the software developers, or the people who approved the algorithm? We, as a research community, must develop regulatory frameworks for AI and technology.

One solution, proposed by OpenAI’s Jack Clark and Gillian Hadfield, involves creating a new market altogether: a global regulatory market. In a paper titled Regulatory Markets for AI Safety (ICLR 2019), Clark and Hadfield propose creating a market of private regulators in the hope of spurring innovation in regulatory technology.

Here is how it works. The government sets a specific goal. Private regulators, which may be nonprofit or for-profit, create a product to meet that goal. The product may be a piece of software or a conceptual framework, and it produces metrics that the government can evaluate. This is where the market framework plays a role: the targets of regulation may then choose among the regulators for the cheapest or least invasive products. Ideally, the regulators compete and innovate, creating efficient solutions to the government’s goals. (The original paper includes a diagram of this process.)

An Example: Facial Recognition

Let’s take a look at a specific example. Suppose that the United States government wants to regulate facial recognition technology (to protect privacy, prevent bias, reduce misclassification, and so on). We have three major actors here: (1) the facial recognition companies, such as Microsoft, Google, Cognitec, or IBM; (2) the regulators; and (3) the government.

Regulation begins with the government outlining its goals. To protect privacy, the federal government might require that companies obtain citizens’ consent before the technology is used on them. It might also place specific limits on the data collected from facial recognition cameras. To prevent bias, the government could establish benchmarks for, say, recognition accuracy on minority faces across certain datasets. These policy goals would be outlined, documented, and then publicized to the global regulatory market.
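To make the idea of a bias benchmark concrete, here is a minimal sketch in Python of how such a requirement might be expressed and checked. The per-group accuracy metric, the toy data, and the 2% disparity threshold are all assumptions for illustration; the paper does not prescribe specific metrics.

```python
from collections import defaultdict

def group_accuracies(predictions, labels, groups):
    """Recognition accuracy computed separately for each demographic group."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

def meets_bias_goal(predictions, labels, groups, max_gap=0.02):
    """Hypothetical policy goal: per-group accuracy may differ by at most max_gap."""
    accs = group_accuracies(predictions, labels, groups)
    return max(accs.values()) - min(accs.values()) <= max_gap

# A regulator evaluating a vendor's model on a labeled benchmark set.
preds  = ["alice", "bob", "carol", "dan", "eve",  "frank"]
truth  = ["alice", "bob", "carol", "dan", "erin", "frank"]
groups = ["A",     "A",   "B",     "B",   "B",    "A"]
print(group_accuracies(preds, truth, groups))  # {'A': 1.0, 'B': 0.666...}
print(meets_bias_goal(preds, truth, groups))   # False: the accuracy gap exceeds 2%
```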

In the next step, the regulatory market begins developing regulatory technologies aligned with the government’s policies. Say that two regulators, TechReg Inc and OpenPolicies Inc, build such technology. Both meet the government’s goals, but TechReg is poorly run and imposes greater restrictions on its clients. When Google and IBM look for regulators, they will turn away from TechReg and instead get better service and flexibility with OpenPolicies’ technology. This is a simple example, but it is clear that the regulators will compete on the basis of their regulatory technology to serve the most, and the largest, companies.
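As a toy illustration of that market dynamic, the sketch below models a regulated company choosing among certified regulators by combined fee and restrictiveness. The fees, scores, and the “LaxAudit” entrant are hypothetical; only the TechReg and OpenPolicies names come from the example above.

```python
# A toy model of the selection step. Among regulators certified against the
# government's goals, a regulated company picks the one with the lowest
# overall burden. All fees and scores here are made-up illustrations.
regulators = [
    {"name": "TechReg Inc",      "certified": True,  "fee": 120_000, "restriction_score": 0.9},
    {"name": "OpenPolicies Inc", "certified": True,  "fee": 100_000, "restriction_score": 0.3},
    {"name": "LaxAudit LLC",     "certified": False, "fee": 10_000,  "restriction_score": 0.1},
]

def choose_regulator(regulators, restriction_weight=200_000):
    """Pick the cheapest, least invasive regulator that is actually certified."""
    certified = [r for r in regulators if r["certified"]]
    return min(certified, key=lambda r: r["fee"] + restriction_weight * r["restriction_score"])

print(choose_regulator(regulators)["name"])  # OpenPolicies Inc
```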

The final step is government assessment. Once the regulatory technology is in place, the government must check that the regulators have done their job. It could periodically send facial recognition requests to randomly chosen companies to verify that they meet the bias requirements, or ask an agency to audit company databases and cameras to check that the privacy requirements are met. If an assessment raises alarm, the regulator servicing that company is held liable and potentially has its license revoked.
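One way to picture this assessment step is a randomized spot check, sketched below. The registry, the probe error rates, and the 3% error cap are assumptions for illustration, not anything specified in the paper.

```python
import random

# Hypothetical registry: each deployed system maps to the regulator that licensed it.
registry = {
    "VendorA-cam-v2":   "OpenPolicies Inc",
    "VendorB-gate-v1":  "OpenPolicies Inc",
    "VendorC-kiosk-v3": "TechReg Inc",
}

def probe_error_rate(system_id):
    """Stand-in for sending probe recognition requests to a live system.
    A real audit would measure misidentification rates on government-chosen probes;
    here we return canned numbers purely for illustration."""
    return {"VendorA-cam-v2": 0.01, "VendorB-gate-v1": 0.02, "VendorC-kiosk-v3": 0.07}[system_id]

def audit(registry, sample_size=2, max_error=0.03, seed=7):
    """Randomly sample deployed systems and flag the regulators behind any that fail."""
    rng = random.Random(seed)
    sampled = rng.sample(list(registry), sample_size)
    return {registry[s] for s in sampled if probe_error_rate(s) > max_error}

print(audit(registry))  # regulators flagged as liable in this simulated spot check
```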

Analysis and Shortcomings

Clark and Hadfield’s model works only if the regulators are both competitive and independent. Competitiveness, meaning that no single regulator can hold too large a market share, seems the easier condition to meet: an oversight committee or a law could address it.

The second, more significant concern is independence, or rather its failure: regulatory capture. OpenPolicies Inc cannot be allowed to collude with Google to relax the regulations it enforces.

We witnessed the consequences of major regulatory capture during the 2008 financial crisis (this is closely detailed in Joe Stiglitz’s The Price of Inequality). Given the current size of technology companies, it seems reasonable to expect similar pressures toward regulatory capture under this model. Stiglitz suggests that overlapping oversight and a broad system of checks and balances help prevent regulatory capture. Although this is an area that needs further research, it seems plausible that the problem could be addressed by tasking an independent government agency with assessing the regulators.

One final note: this framework changes how we implement regulatory goals, but not how we write the goals themselves. That is, while the framework may (potentially) spur innovation in implementing the goals, it does nothing to spur innovation in designing them. The government still controls the goals of the regulation and has little incentive to optimize them. This is a significant problem, because there is no clear consensus on what we want the goals to be, or on what role we would like AI to play in society. We have loose ideas about AI empowering people and granting workers autonomy, but few of these ideas translate easily into the kind of measurable goals the paper envisions. How do we measure worker empowerment? What metrics best convey the societal benefits and drawbacks of artificial intelligence?

This is a highly exploratory paper and, as such, its contours are still not well defined. The framework may yield a promising direction forward, but there is still much to iron out before we can even begin to think about implementation. For now, though, perhaps we should stop deploying technologies that, among other things, confuse our members of Congress with criminals.
