Artificial intelligence will force us to confront our values

“Algorithmic accountability” is critical, but it presumes we know exactly what we’re trying to protect.

Miranda Bogen
Equal Future
7 min read · Sep 22, 2016


Last fall, I found myself sitting with an eclectic group of librarians, computer scientists, security specialists, and futurists, talking about the end of the world.

We jumped from topic to topic each week, debating which futuristic technology would be the one to finally destroy humanity: Nuclear weapons? Biotech? Alien invasion? Artificial intelligence? Every week, we voted.

Most of us were skeptical that a runaway, autonomous AI would be our doom, and relatively confident that researchers would think to include reasonable controls on the systems they are building. But the question of AI’s growing role and whether it poses a threat to humans stayed with me: Machine learning and algorithmic decision-making, both of which fall under the umbrella of AI, are being applied increasingly often — across society — in an effort to improve how we live. If AI can truly make society better, I wondered, what would really be the harm in that intelligence becoming smarter than us? If computers “woke up” one day and realized they could manage human society more efficiently or more fairly than we’ve been able to, and acted on that realization, would it be so bad?

As humans, we’re viscerally distressed by the idea that artificial intelligence might take over; something about removing humans from the loop makes us very uncomfortable. But what is it about human society and self-determination that we believe is so uniquely important?

Even absent the threat of an all-out AI takeover, this is a compelling question to consider as machine learning and other types of AI research continue to progress — because if we figure out what it is that we’re afraid of losing, we might do a better job of protecting it.

We’re not on the cusp of an autonomous AI takeover. Expert AI researchers agree that it will probably take centuries for “superintelligence” to arrive, and a recent Stanford report concurs:

Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind.

For the foreseeable future, “artificial intelligence” is really just a term to describe advanced analysis of massive datasets, and the models that use that data to identify patterns or make predictions about everything from traffic to criminal justice outcomes. AI can’t think for itself — it’s taught by humans to perform tasks based on the “training data” that we provide, and these systems operate within parameters that we define. But this data often reflects unhealthy social dynamics, like race- and gender-based discrimination, that can be easy to miss because we’ve become so desensitized to their presence in society.
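To make that concrete, here is a minimal, hypothetical sketch (invented data, scikit-learn used only for convenience) of how a model trained on biased historical decisions simply learns to reproduce the bias it was shown. Nothing in the code is “prejudiced”; the pattern arrives entirely through the training labels.

```python
# Hypothetical sketch: a model trained on biased historical decisions
# learns to reproduce that bias. All data here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

score = rng.normal(size=n)          # an applicant's qualification score
group = rng.integers(0, 2, size=n)  # membership in a (protected) group

# Historical labels: equally qualified members of group 1 were approved
# less often. The bias lives entirely in this training data.
approved = (score - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, approved)

# The learned weights faithfully encode the historical pattern,
# including a large negative weight on group membership.
print("weight on qualification score:", round(float(model.coef_[0][0]), 2))
print("weight on group membership:   ", round(float(model.coef_[0][1]), 2))
```

The point is not the particular library or the made-up numbers; it’s that the model has no way to distinguish how the world should be from how the data says it has been.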

And even if we never reach the sort of AI doomsday scenario popularized in sci-fi movies, the real risk of teaching computers to help us make decisions is that we’ll fail to imbue our algorithms with the very values we’d be most afraid of losing. As the Stanford report found:

…we are now at a crucial juncture in determining how to deploy AI-based technologies in ways that promote, not hinder, democratic values such as freedom, equality, and transparency … Policies should be evaluated as to whether they foster democratic values and equitable sharing of AI’s benefits, or concentrate power and benefits in the hands of a fortunate few.

This conclusion is absolutely correct, and it is one many advocates have been fighting to amplify. But what it assumes is that these values have meanings that can readily be translated into something computers can understand.

As humans, we can handle ambiguous concepts like “freedom,” “equality,” and “democratic values.” We seem to relish debating them, and a vast and deep body of scholarship on political philosophy reflects those debates. For example, in a democracy, should all people be treated equally, or can citizens of a country be treated “more equally” than noncitizens? Different understandings of this foundational concept lead to vastly different treatment of immigrants and refugees. For people of color — who have been marginalized throughout the history of the United States — does “equality” mean equal treatment under the law? Affirmative action to correct for historical bias? Proactive avoidance of using any characteristic that could be a proxy for race? And for how many generations should reparations be provided until society has corrected its current and former wrongs?

As appeals for algorithms and applications of AI to protect our societal values grow in number and volume — with my own voice included in the chorus — we need to recognize that this will be really hard to do when we as a society don’t agree on what exactly we want to protect in the first place. Which concrete version of our values do we codify in algorithms when we have been coasting on social and political ambiguity for so long?

One path is simply to make sure computer decision-making doesn’t break any laws. Machines should uphold the Civil Rights Act, for example, and refrain from basing decisions on race, color, religion, sex, or national origin. Algorithms should also not undermine citizens’ right to due process by taking deliberation and transparency out of criminal sentencing, or be deployed in a way that violates the right to privacy.

This approach might seem obvious, but it’s not as simple as it sounds. As any first-year law student will tell you, laws and regulations are full of ambiguities that reflect unresolved disagreements, and it often takes courts — which themselves use different approaches to statutory interpretation — years to untangle them. This legal process has allowed our society to shuffle slowly forward in ensuring that our values, as reflected in our laws, are relatively concrete and protectable.

But the process gets tricky when it’s an algorithm that does something legally questionable. Even for laws and guidelines that have been sufficiently clarified through case law and analysis — such as Title VII’s prohibitions on discrimination in employment — complying with the letter of the law in machine learning applications can be difficult. As it turns out, it is computationally hard to differentiate meaningful statistical patterns that can inform positive social interventions from ones that rely on variables that are really just proxies for protected classes and so reinforce traditional biases. (Using proxies risks causing “disparate impact,” a form of discrimination that’s legally prohibited even when there is no discriminatory motive.)
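For a sense of what even a crude check looks like, here is a minimal sketch of one common screening heuristic, the “four-fifths rule” drawn from EEOC guidance on adverse impact. The function names and numbers are invented, and passing this test is not remotely the same thing as complying with Title VII.

```python
# Hypothetical sketch of the "four-fifths rule," a common heuristic for
# flagging possible disparate impact in selection decisions.
def selection_rate(decisions):
    """Fraction of a group that received a favorable decision (1 = yes)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher group's."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Invented outcomes (1 = hired, 0 = not hired) for two groups of applicants.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 30% selected

ratio = disparate_impact_ratio(group_a, group_b)
print(f"impact ratio: {ratio:.2f}")       # 0.38, well below the 0.8 threshold
if ratio < 0.8:
    print("flag for possible disparate impact")
```

Even this blunt ratio assumes we know who belongs to which group, information many systems either don’t record or aren’t permitted to use, which is part of what makes compliance so hard to automate.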

The law is a living thing, and deceptively concrete. Suggesting that we should build our machines to follow the law is a sincere proposition, but also a naive one.

Even when laws are clear, not all of our values are actually reflected in law.

When Facebook decided to remove a historically meaningful photo that, upon first review, did not conform to the platform’s community standards, it did not break the law. The company was well within its rights to decide what content to publish and what to take down. Yet many of us are profoundly uncomfortable with any sort of content moderation system that does not appear to protect the value of free expression.

This particular decision was a human one, and there may be similar cases where other laws or contractual obligations affect how a platform treats content or users. But as algorithms are developed and deployed to make similar sorts of judgment calls, how do we decide which values beyond the law are worth teaching the machines?

It’s increasingly clear that the accuracy of computer decisions often comes at the expense of our ability to interpret those decisions, meaning we won’t necessarily be able to correct computers when we realize they’ve done something we’re uncomfortable with. If accepting this tradeoff becomes the norm, not only will we give up our ability to understand the automated decisions that impact society, we will also give up our ability to engage in constructive social debates about what values the computers ought to be protecting in the first place, because we won’t know what values they understand.

Several major technology companies recently announced a joint effort to create a standard of ethics around artificial intelligence, similar to existing self-regulatory initiatives to protect human rights. The companies will theoretically come together to make sure their respective AI work minimizes harm to people and societies. Though the intentions may be good, we should be skeptical that this group of companies can unilaterally define what values, and what expression of those values, are worth protecting — particularly when most of these companies don’t reflect the country’s diversity of backgrounds, let alone the diversity of opinions.

As we teach our machines about ourselves and define how they will operate, we have the opportunity not only to correct past biases but to imbue even stronger protections for people and rights. Unfortunately, the larger debate over what about humanity is worth cultivating in the face of technological advances is being obscured by jargony and sometimes intimidating terms like “AI” and “algorithms.” Even before more algorithms inadvertently codify inequality through automated decisions, the framing of this debate may already be marginalizing large swaths of society. If we don’t broaden the scope of the conversation about how our core values should be protected in the face of increasing automation, millions of people could be left out of a critical national conversation on what it means to be human, and what about being human we want to preserve.
