AI & the European Commission’s Risky Business

Ryan Budish
Berkman Klein Center Collection
7 min read · Jun 12, 2020

In early February 2020, the European Commission published a white paper seeking comments on the broad contours of AI regulations expected by the end of 2020. The white paper makes clear that the European Commission intends to make “risk” the central fulcrum of its governance strategy, stating that “[a] risk-based approach is important to help ensure that the regulatory intervention is proportionate.”

The Commission is not alone in basing its approach to AI governance on “risk” — as I’ll detail below, the US government, OECD, and IEEE, among many others, have also centered risk in their strategies. Yet none of them offers a clear account of how “risk” should be defined and measured. This is dangerous because the term has no single, consensus meaning. In fact, risk is a concept with decades of complicated history and debate, a debate that has pitted quantitative measures of risk against qualitative ones. And that debate has been most fierce in efforts to understand scientific fields and emerging technologies under conditions of significant uncertainty — conditions much like the ones we find ourselves in with respect to AI. It is critical that the European Commission not ignore this history in developing its regulatory framework for AI.

The Commission’s white paper creates two categories of risk: (1) low-risk applications, which will face no new restrictions beyond existing law; and (2) high-risk applications, which will face new requirements concerning training data, record keeping, transparency, accuracy, human oversight, and more. The challenge, however, lies in determining what “high risk” and “low risk” actually mean. To that end, the white paper offers some limited guidance in the form of two criteria. First, an application is high risk “where, given the characteristics of the activities typically undertaken, significant risks can be expected to occur.” And second, an application is high risk when it is “used in such a manner that significant risks are likely to arise.” What is apparent is that these two cumulative criteria do not actually define “high risk”; instead, they circularly assert that an application is high risk if it is used in a sector that is high risk and in a way that is high risk. Such a definition only defers and displaces the determination of risk.
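
To make the circularity concrete, here is a minimal sketch of what the two cumulative criteria reduce to. This is my own hypothetical rendering in Python; the names and structure are assumptions for illustration, not anything the white paper itself proposes. Both tests must pass, yet each bottoms out in the undefined notion of “significant risk,” which is precisely what the criteria were supposed to determine.

```python
# Hypothetical sketch of the white paper's two cumulative criteria.
# All names and structure are illustrative assumptions, not the
# Commission's own formalization.

class Application:
    def __init__(self, sector: str, usage: str):
        self.sector = sector
        self.usage = usage


def significant_risks_expected_in_sector(sector: str) -> bool:
    # Criterion 1: "significant risks can be expected to occur" in the
    # sector -- but the white paper never says how to evaluate this.
    raise NotImplementedError('"significant risk" is never defined')


def significant_risks_likely_from_use(usage: str) -> bool:
    # Criterion 2: the application is "used in such a manner that
    # significant risks are likely to arise" -- the same undefined test.
    raise NotImplementedError('"significant risk" is never defined')


def is_high_risk(app: Application) -> bool:
    # The criteria are cumulative: both must hold. Both defer the
    # real question of what counts as "significant risk."
    return (significant_risks_expected_in_sector(app.sector)
            and significant_risks_likely_from_use(app.usage))
```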

A similar focus on risk is evident in other AI frameworks as well. In January 2020, the US Office of Management and Budget (OMB) sought comment on a proposed regulatory framework for AI. At its core, the framework states: “Regulatory and non-regulatory approaches to AI should be based on a consistent application of risk assessment and risk management across various agencies and various technologies…. a risk-based approach should be used to determine which risks are acceptable and which risks present the possibility of unacceptable harm, or harm that has expected costs greater than expected benefits.” Like the proposed European approach, the OMB framework suggests a “tiered approach” in which “AI applications that pose lower risks” face fewer restrictions than “higher risk AI applications.” But just like the European framework, the OMB proposal provides no guidance on how to assess those risks.

We also see risk invoked in the OECD’s AI principles (“AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.”). Dubai’s AI ethical standards simply state that “AI operator organisations should consider internal risk assessments or ethics frameworks as a means to facilitate the identification of risks and mitigating measures.” Similarly, the IEEE’s Ethically Aligned Design Principles invoke “risk management” as one of several components of effective regulation of AI. The Toronto Declaration uses the word “risk” 28 times over 16 pages, but does not go much further than urging governments and other organizations to identify and mitigate the human rights risks of AI. And the Personal Data Protection Commission of Singapore’s Proposed AI Governance Framework does an excellent job of identifying several ways in which the risks of AI may be difficult to measure, noting, for instance, that “[e]ven within a country, risks may vary significantly depending on where AI is deployed.” But Singapore’s framework offers nothing about how to address those complexities beyond using a “periodically reviewed risk impact assessment.” Most helpfully, Canada’s Directive on Automated Decision-Making offers a detailed four-level matrix of AI impacts designed “to help institutions better understand and reduce the risks associated with Automated Decision Systems.” But the directive, too, leaves much unsaid about the process of determining whether a system will have “moderate impacts… that are likely reversible and short-term” as opposed to “high impacts… that can be difficult to reverse, and are ongoing.”

According to the OECD, risk governance at its most general involves assessing risk (“asking what could happen, and how serious it would be”) and managing that risk (“asking what should be done about it”), a process that has been central to human survival for millennia. But for at least 100 years it has been formalized in regulations governing food, drug, and workplace safety, as well as environmental protection.

For years AI scholars have been discussing the “risks” of AI, primarily to raise awareness about the ways in which AI can go wrong and to serve as a much-needed check on techno-optimism and techno-solutionism (see, for example, work on the problems with AI in predictive policing). This is important work, but it is distinct from what “risk assessment” means in a governance context. Assessing and measuring risks in a governance context requires more than compiling a laundry list of things that could go wrong; risk governance necessitates a comparative process by which decision makers assess the relative weights of all of those various outcomes so that they can identify the appropriate policy responses. Simply understanding and defining the universe of potential risks is necessary but not sufficient for a “risk-based” governance approach. It is equally important to have a clear process for considering those risks in relation to each other and in relation to other policy objectives. The European Commission’s white paper, along with many other AI governance frameworks, falls short in that regard. And therein lies the danger: in the absence of an explicit choice, AI governance will likely be pulled toward quantifiable measures of risk at the expense of equally important (yet more ambiguous and uncertain) qualitative measures.

Risk governance, by its very nature, embodies a belief in order over chaos and science over faith. It is premised on the notion that we can take things that are uncertain and place them in the balance to make rational, calculated decisions. For that reason, risk governance has favored scientific certainty and rigor — the more precisely we can estimate risks, costs, and benefits, the more accurately we can weigh those elements against one another.

That gets at the core of at least 40 years of debate about risk governance: to what extent does risk governance elevate, emphasize, and prioritize those risks that can be quantified at the expense of measures of risk that are more qualitative, ambiguous, and uncertain?

AI, like many emerging technologies, resists efforts to quantify its risks. AI technologies are fixed in neither time nor place. First, AI technologies are rapidly evolving, which means that their risks, costs, and benefits are rapidly evolving too. And second, AI is highly contextual: a technology that works well for one population or in one geography may have significant negative impacts elsewhere. Thus, truly understanding AI’s risks requires a multi-dimensional approach, one that includes measures of risk that may not be easily distilled down to a number. And yet traditional approaches to risk governance are often poorly suited to handling multi-dimensional, qualitative, ambiguous, and uncertain risks.

For example, traditional approaches to risk governance often emphasize a risk assessment/risk management dichotomy. One illustration comes from the seminal EC-Hormones case, a WTO trade dispute over a European ban on US beef raised with the use of certain hormones. In that case, the initial WTO panel emphasized that a risk assessment is “a scientific examination of data and factual studies; it is not a policy exercise involving social value judgments made by political bodies,” whereas the risk management phase can take into account “non-scientific considerations, such as social value judgments.”

This artificial divide between the scientific and the social limits “risk” to those things that can be scientifically assessed and quantified. But in reality, risk cannot be separated so cleanly from the qualitative and the social. Ortwin Renn, Andreas Klinke, and Marjolein van Asselt, among the leading scholars of risk governance, have written that there is a “convincing, theoretically demanding, and empirically sound basis to argue that many risks cannot be calculated on the basis of probability and effects alone, and that regulatory models which build on that assumption are not just inadequate, but constitute an obstacle to responsibly dealing with risk.”

To deal responsibly with the risks of AI, the European Commission should learn from the debates that have taken place in a range of other scientific fields as they have grappled for decades with the role of scientific certainty and the quantification of risk. Through that experience, risk governance experts and scholars have developed new frameworks that continue to value scientific data, but alongside other, more qualitative measures of risk. Unlike the traditional risk assessment/risk management approach and the precautionary principle, these more expansive risk governance frameworks embrace data and methodologies that are inherently messy, uncertain, and ambiguous. In particular, these frameworks have three important features: (1) they broaden participation in the risk governance process to include a range of key stakeholders; (2) they value qualitative data and policy analysis; and (3) they use deliberative, multistakeholder processes. Collectively, these three features are particularly important when addressing the risks of new technologies, which often frustrate attempts at quantification.

Before committing its strategy to the vague categories of “high-risk” and “low-risk,” the European Commission should consider the lessons learned from past risk governance debates and ensure that it is building a risk governance framework that embraces a holistic view of risk, including more qualitative measures.

This short post is based on a forthcoming article about the challenges of risk-based governance of AI and is published as an informal contribution to the European Commission’s consultation process.

Ryan Budish is Assistant Director for Research at the Berkman Klein Center for Internet & Society, Harvard University.