Build first and ask questions later?

Three hidden vulnerabilities of artificial intelligence

by John Bowers

Is AI the new asbestos?

Berkman Klein Center Faculty Director Jonathan Zittrain kicked off a presentation with this question at November’s “Human Rights, Ethics, and Artificial Intelligence: Challenges for the next 70 Years of the Universal Declaration.” The conference was sponsored by the Carr Center for Human Rights Policy, the Edmond J. Safra Center for Ethics, and the Berkman Klein Center for Internet & Society at Harvard University.

Asbestos, a fibrous mineral with flame-retardant properties, was widely used in construction throughout much of the 20th century. Over the latter part of the century, scientific evidence linking asbestos exposure to deadly diseases such as mesothelioma mounted, along with public concern. That reckoning has ultimately resulted in controls on the sale and use of asbestos, massive efforts to remove it from existing structures, and thousands of lawsuits with cumulative damages totaling tens of billions of dollars.

As we build AI into today’s projects without fully addressing (or even understanding) its fundamental problems and deficiencies, we may very well be setting ourselves up for another painful process of self-correction. As humanity increasingly comes to rely on AI, the dangers will grow ever more ubiquitous and expensive to address. Described below are three dangerous properties of AI that Zittrain went on to identify.

“c” who cannot be named

The lingua franca of machine learning, the branch of AI responsible for much of its recent renaissance, is composed of variables, constants, and the functions that interrelate them. Many types of machine learning algorithms work by iteratively tweaking variables and constants to optimize the value of a “loss function” — a metric designed to quantify performance. In many such systems, little or no actual human input on the system’s decision-making process is required — predictive power is the product of mathematical optimization informed by a dataset.
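A minimal sketch can make that loop concrete. The toy example below (in Python, and not drawn from Zittrain’s talk; the dataset and model are invented for illustration) fits a small linear model by repeatedly nudging its weights and constant to drive down a mean-squared-error loss, with no human-stated decision rules anywhere in the process.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented dataset: 100 examples, 3 features, and an outcome to predict.
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

w = np.zeros(3)   # the variables the algorithm is free to adjust
b = 0.0           # the constant ("c") adjusted alongside them
learning_rate = 0.1

for step in range(500):
    error = X @ w + b - y               # prediction error on the dataset
    loss = np.mean(error ** 2)          # the loss function being optimized
    grad_w = 2 * X.T @ error / len(y)   # how the loss changes with each weight
    grad_b = 2 * np.mean(error)
    w -= learning_rate * grad_w         # tweak the parameters to reduce the loss
    b -= learning_rate * grad_b

# The result predicts well, but the fitted numbers explain nothing by themselves.
print(f"final loss: {loss:.4f}, weights: {w}, constant: {b:.3f}")
```

Even in this deliberately transparent toy, the fitted numbers carry no explanation of their own; scale the same loop up to millions of parameters and the opacity described below becomes the default.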

In some cases, this paradigm has had the effect of freeing those tasked with building models from having to state their own policy preferences or examine unconscious biases. Instead of designing models to reflect complicated human-interpretable processes of reasoning and decision making, machine learning engineers can often rely on their algorithms to generate statistical models which attain equal — or superior — performance at the cost of interpretability. What we’re often left with is the ability to make excellent predictions about the world without having to specify the process by which those predictions are reached.

But if we can’t break down the means by which our machine learning systems operate, who is to blame when they inflict harm? And what subtle harms might we miss in the first place? When the power of our models is premised on uninterpretable combinations of constants and variables — “‘c’ who cannot be named” — it’s very easy to implement destructive or discriminatory policies without accepting responsibility (or even knowing that we’ve done so). These risks were far less pronounced in the “expert systems” of yesteryear, whose designs represented knowledge in a much more human-readable way.

Operationalizing values

But even in disciplines fundamentally centered on human-readable knowledge — such as law — achieving clarity and consistency in decision-making is often a challenge. Herein lies a second problem facing AI — that many of the things we value are not as mathematically formalizable as models demand them to be. So even when our models are interpretable, it’s still often difficult or impossible to formally encode the values that we believe separate good processes and outcomes from bad ones. This is particularly the case when multiple values come into competition with one another within the context of a model.

Take, for example, fairness, which is often a central value in the creation of AI systems. While most people can readily offer up intuitive judgments as to whether a particular policy or situation is fair, very few would likely be able to formalize those judgments mathematically. And the formalizations that do exist are numerous and contradictory — fairness can be defined in a vast number of different ways, with each giving rise to a different mathematical representation. An excellent example can be found in the controversy around ProPublica’s analysis of the COMPAS recidivism risk prediction algorithm.

A slide from Zittrain’s presentation summarizing clashing fairness claims in the COMPAS debate
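To see the clash concretely, consider the following illustrative sketch in Python. The numbers are invented for the example (they are not ProPublica’s or Northpointe’s data), but the structure mirrors the COMPAS dispute: the same set of risk predictions can satisfy one reasonable fairness criterion, predictive parity (equal positive predictive value across groups), while violating another, equality of false positive rates.

```python
import numpy as np

# Invented, purely illustrative data: 1 = predicted / observed to reoffend, 0 = not.
group_a_pred   = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
group_a_actual = np.array([1, 1, 1, 0, 1, 1, 0, 0, 0, 0])
group_b_pred   = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
group_b_actual = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])

def positive_predictive_value(pred, actual):
    """Of those labeled high risk, what share actually reoffended?"""
    return actual[pred == 1].mean()

def false_positive_rate(pred, actual):
    """Of those who did not reoffend, what share were labeled high risk?"""
    return pred[actual == 0].mean()

for name, pred, actual in [("Group A", group_a_pred, group_a_actual),
                           ("Group B", group_b_pred, group_b_actual)]:
    print(name,
          "PPV:", positive_predictive_value(pred, actual),   # 0.75 for both groups
          "FPR:", false_positive_rate(pred, actual))         # 0.2 vs. 0.5
```

When underlying base rates differ across groups, formal impossibility results show that these and related criteria generally cannot all be satisfied at once — which is why each side of the COMPAS debate could point to a metric that supported its position.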

Even the Supreme Court has had to grapple with the inherent challenges of defining fairness with some degree of formalism. In Vieth v. Jubelirer, a 2004 gerrymandering case, Justice Scalia argued that “‘Fairness’ does not seem to us a judicially manageable standard. Fairness is compatible with noncontiguous districts, it is compatible with districts that straddle political subdivisions, and it is compatible with a party’s not winning the number of seats that mirrors the proportion of its vote.” In the 1996 case BMW v. Gore — which involved setting punitive damages resulting from the misrepresentation of a product — Justice Stevens rejected the judicial use of formulas to determine the fairness of damages: “we have consistently rejected the notion that the constitutional line is marked by a simple mathematical formula, even one that compares actual and potential damages to the punitive award.”

Just as machines often have trouble explaining the reasoning behind a powerful model in human-readable terms, so too do humans have trouble expressing the model criteria that matter to us in machine-readable terms. This makes it exceptionally difficult to build safeguards against harmful behavior into increasingly ubiquitous and empowered AI systems.

Intellectual debt

The opacity and semantic limitations of many AI systems are contributing to a potentially dangerous accumulation of what Zittrain termed “intellectual debt.” AI — particularly machine learning — has enabled us to build enormously powerful models which can ably predict phenomena in the world without requiring us to understand the actual bases of those phenomena. For example, advertisers can often make excellent automated predictions as to which banner ad a given user would be most likely to click on, all without having to explicitly understand that user’s actual motivations in the slightest.

The notion of intellectual debt is inspired by “technical debt,” a concept in software engineering — including AI development! — whereby badly maintained code, hastily implemented features, and other imperfect development practices gradually introduce bugs, instability, and other forms of overhead into a system. This overhead (the “technical debt”) must be paid down by means such as code rewrites, debugging, and documentation. By extension, intellectual debt is accrued when systems are built around predictive mechanisms which deliver performance without requiring understanding. It can be paid down by identifying actual causal linkages which enable more transparent predictions.

An example of intellectual debt in the pharmaceutical context (unrelated to machine learning)

Intellectual debt is nothing new. Doctors, for instance, have long — and often quite successfully — prescribed medications without understanding the specific chemical pathways by which they operate. But the power and ubiquity of AI are enabling us to amass it with unprecedented speed. If we’re not careful, we might be left relying on critical systems which operationalize relationships and connections of which we have no actual understanding. It’s not that intellectual debt need be avoided in all cases — sometimes performance without understanding can be better than no performance at all, at least in the short term. But we need to be mindful of the gap between the ability to predict and deep understanding — and of the specific risks which that gap might pose.

The issues highlighted above are just a few of the numerous challenges facing AI today. There’s much to be gained by forging ahead boldly, and an equal amount to be lost if we place sensitive decisions in the hands of autonomous systems that just aren’t ready to make them. If we’re not careful, the cleverest products of today’s research laboratories could very well be the targets of tomorrow’s litigation.

Learn more about the Berkman Klein Center’s work in the Ethics and Governance of Artificial Intelligence