The Promises and Perils of Artificial Intelligence: Why Human Rights and the Rule of Law Matter

The following think piece is based on research I conducted as an intern this past summer while working in Special Projects with Urs Gasser at the Berkman Klein Center for Internet & Society.

The Ubiquity of AI & the Possibility of a Different World

The impact of artificial intelligence is bound to be massive and far-reaching. Artificially intelligent systems might eventually decide who can access healthcare, or who qualifies for a job or a mortgage. How can we trust artificial intelligence (AI) systems to decide impartially? When governments or states use AI, should the source code be open for public inspection, or closed and subject to some type of testing?

And how will courts treat challenges to the impartiality of AI technologies? Who or what is at fault when AI technologies fail? How might judges and juries use evidence produced by AI? And to what extent should courts rely on AI as a tool to regulate behavior when it comes to crucial decisions such as criminal sentencing?

AI also threatens more than our decision-making processes: AI could replace us. Researchers at the University of Oxford estimate that around half of all US jobs may be at risk in the coming decades, with low-wage occupations being the most vulnerable. As a recent Forbes op-ed highlighted, one pervasive fear is that AI technologies will “outmaneuver humans out of their game.”

Despite the risks that AI poses to our legal systems, we have at least two reasons to be hopeful. First, stakeholders of all kinds — ranging from government entities and academics to entrepreneurs — have debunked the claim that AI technologies are about to achieve ‘technological singularity’, or even superintelligent General AI. In an upcoming Medium article co-written with fellow Berkman Klein intern Aida Joaquin Acosta, we further challenge the myth that AI has already attained superhuman, wholly autonomous capabilities.

Second, in the words of Salil Shetty, “there is also the possibility of a different world” than one where AI clandestinely controls our behavior or exploits the vulnerable. Indeed, initiatives like the Knight Foundation’s Ethics and Governance of Artificial Intelligence Fund and conferences such as the 2016 White House-backed AI Now Symposium all signal a strong interest in — and crucial need for — the development of AI technologies in the public interest.

As artificial intelligence augments human decision-making, I argue that legislators and policymakers must account for the myriad risks that AI poses to human rights principles and the notion of the ‘rule of law’. This remains particularly true when we observe the ways in which AI stands in for us or nudges us. And how regulators conceive of the contours and purpose of regulatory systems will invariably affect the potential for AI to preserve human rights norms and the rule of law.

In this article, I explore several legal and human rights issues raised by the evolution of AI technologies. What follows are vignettes or threads of interest, strung together by a fascination I developed this summer with three questions: what narratives shape our sense of what constitutes AI, which tools (including technology itself) we use to regulate behavior, and what role participatory design should play as AI technologies evolve, particularly in constitutional democracies.

This exploration is by no means exhaustive. Instead, I hope to offer regulators the means to begin making sense of vital issues as AI becomes ubiquitous in our societies.

Mapping the Issues: The Broad Scope of AI

How we define artificial intelligence shapes how we respond to it. By definition, artificial intelligence involves technology that perceives elements of its environment in hopes of successfully achieving some specific goal, generally by replicating at least one of four notions of ‘intelligence’: human performance, human thought processes, rational reasoning, or an idealized notion of rationality (Norvig and Russell at 4).
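To make this agent-style definition concrete, here is a minimal, hypothetical sketch in Python of a system that “perceives” an element of its environment and acts in pursuit of a goal. Everything in it, including the thermostat-like scenario and the names, is invented for illustration and describes no real AI system.

```python
# A toy illustration of the "perceive the environment, act toward a goal"
# definition of AI. The scenario and names are invented for this sketch.
def simple_agent(sense, act, target, steps=10):
    """Repeatedly perceive a reading and act to move it toward the target."""
    for _ in range(steps):
        reading = sense()          # perceive an element of the environment
        if reading < target:
            act(+1)                # act in pursuit of the goal
        elif reading > target:
            act(-1)

state = {"temp": 15}
simple_agent(sense=lambda: state["temp"],
             act=lambda delta: state.update(temp=state["temp"] + delta),
             target=20)
print(state["temp"])  # reaches the goal value: 20
```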

Drawing by Santiago Ramón y Cajal (1899) of neurons in the pigeon cerebellum, public domain

Technology that employs the methods of artificial intelligence is not new, and it warrants regulation just as much as emerging AI technologies do. Indeed, much of the technology already embedded in our everyday lives has come to employ AI techniques. These tools, such as algorithms, may not have previously qualified as AI, but they too deserve regulation so that they develop in the public interest.

Take the example of algorithms and social media websites. Recall that an algorithm is a procedure for solving a mathematical problem in a finite number of steps, and often involves repetition of an operation. (Algorithms can, indeed, employ AI techniques, but do not always constitute artificial intelligence.)
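To ground the distinction, here is a minimal sketch of an algorithm in this ordinary sense: a fixed, finite procedure that repeats one operation over its input, with no perception or goal-seeking involved. The function name and data are invented for illustration.

```python
def mean_engagement(interaction_counts):
    """A plain algorithm: a finite sequence of steps that repeats one
    operation (addition) over the input. It does not perceive an
    environment or pursue a goal, so it is not AI on its own."""
    if not interaction_counts:
        return 0.0
    total = 0
    for count in interaction_counts:   # repetition of a single operation
        total += count
    return total / len(interaction_counts)

print(mean_engagement([3, 7, 2, 11]))  # -> 5.75
```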

Facebook began using an algorithm called EdgeRank in 2007, which measured affinity, weight, and time decay to determine what users would see in their “News Feed”. Then, in 2011, Facebook retired EdgeRank in favor of a far more complex machine learning algorithm based on around 100,000 factors to determine what users would see. Academics and media pundits alike have consistently criticized the social media company for failing to provide transparency and clarity about the actual code that dictates which News Feed content is shown to its users.
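Public descriptions of EdgeRank explain it as summing, over each “edge” (a like, comment, or share), the product of affinity, weight, and time decay. The sketch below shows roughly how such a score might be computed; it is an assumption-laden illustration, not Facebook’s actual (and never published) code, and the half-life decay constant is invented.

```python
import time

def edgerank_style_score(edges, now=None, half_life_hours=24.0):
    """Illustrative EdgeRank-style score: each edge (like, comment, share)
    contributes affinity * weight * time_decay, and the post's score is the
    sum over its edges. The exponential half-life is an assumed stand-in
    for whatever decay function Facebook actually used."""
    now = now if now is not None else time.time()
    score = 0.0
    for edge in edges:
        age_hours = (now - edge["created_at"]) / 3600.0
        decay = 0.5 ** (age_hours / half_life_hours)   # newer edges count more
        score += edge["affinity"] * edge["weight"] * decay
    return score

# Hypothetical edges on one post: a close friend's recent comment
# and a distant acquaintance's day-old like.
post_edges = [
    {"affinity": 0.9, "weight": 3.0, "created_at": time.time() - 2 * 3600},
    {"affinity": 0.1, "weight": 1.0, "created_at": time.time() - 30 * 3600},
]
print(round(edgerank_style_score(post_edges), 3))
```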

The transparency and predictability of Facebook’s news feed algorithms really matter: 62 percent of Americans get news from social media websites. And while Facebook’s latest explanation of its decision to use AI to counter “terrorist content” reveals some measure of commitment to transparency, such content regulation poses serious challenges to free expression in quasi-public spheres and raises vital questions about the possibility of unfettered discrimination against vulnerable groups of people.

Further, what we call an algorithm can be a functional equivalent of AI and may raise equally important human rights risks. Consider that predictive algorithms are increasingly used to mine personal information such as credit history and to make guesses about individuals’ likely actions and risks, critically affecting people’s ability to retain informational self-determination and, in turn, to obtain basic needs such as housing, work, loans, and insurance.

Consider as well the recent news that a company called hiQ Labs is suing LinkedIn to retain access to data it gleaned from public profiles in order to algorithmically predict whether employees will quit. Such an algorithm, which perceives elements of its environment in hopes of achieving some goal (here, determining whether a person might quit their job) by replicating human reasoning, would seem to fall squarely within the definition of artificial intelligence set out by AI experts (at 4).
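As a rough sketch of the kind of predictive algorithm at issue, the snippet below trains a simple logistic regression to guess whether an employee will quit based on profile-derived signals. The features, data, and labels are entirely made up for illustration; this is not hiQ Labs’ model, and a real system of this kind would raise exactly the informational self-determination concerns described above.

```python
# A hedged sketch of a quit-prediction classifier. All features, data,
# and labels are invented; this is not any company's actual model.
from sklearn.linear_model import LogisticRegression

# Hypothetical features per person:
# [years_in_role, profile_updates_last_90_days, new_connections_last_90_days]
X = [
    [0.5, 6, 40],
    [4.0, 0, 2],
    [1.0, 4, 25],
    [7.0, 1, 3],
    [2.0, 5, 30],
    [5.0, 0, 1],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = left within a year, 0 = stayed (made-up labels)

model = LogisticRegression().fit(X, y)
# Estimated probability that a new, hypothetical employee will quit.
print(model.predict_proba([[1.5, 5, 35]])[0][1])
```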

These events raise important questions: when do algorithms constitute artificial intelligence, and how should both legislators and policymakers ensure that such technologies develop in line with human rights norms?

Mapping the Issues: Why AI is Different

The ethical development and use of AI systems is especially important when we consider the possibility for such technologies to challenge human agency in two salient ways. “Agency” is understood here quite simply with its dictionary definition: “the capacity, condition or state of acting or of exerting power.”

With this in mind, an artificially intelligent system could disrupt human agency by depriving us of the capacity, state, or the conditions in which we can act in a self-determined way. More specifically, such technology begins to affect our decision-making processes either by standing in for us or by nudging us.

Technological developments have without a doubt fundamentally transformed our lives, yet arguably no other technology has presented as serious a threat to our agency or autonomy. Artificial intelligence seeks to replicate human intelligence, which involves some measure of decision-making that is autonomous or independent of human instruction and that is based on information the technology itself obtains and analyzes. As such, much is at stake when AI makes decisions for us or nudges us, as we shall see below.

AI Standing in For Us

What happens when artificially intelligent entities replace us? Indeed, the replacement of human labor by AI systems is not so far off: a Japanese insurance company laid off 34 employees in early 2017, only to replace them with IBM’s Watson Explorer AI. And labor rights activists have criticized Amazon’s recent acquisition of Whole Foods for the potential loss of what some label ‘low-skilled’ jobs due to automation.

Another excellent example of the potential for AI to usurp our decision-making power involves autonomous vehicles. AI and labor expert Jerry Kaplan observed that long-haul trucking is but one example where increased automation will result in robots replacing humans: highways are the easiest roads to navigate without human intervention.

FANUC R-2000iB series robot by Mixabest, licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license

Companies are also increasingly relying on AI in hiring processes. “We can look at 4,000 candidates and within a few days whittle it down to the top 2% to 3%,” said the CEO of one company that uses AI to shortlist job candidates.

The relationship between such uses of AI technologies and human rights issues is clear. In the case of AI systems replacing humans in the workplace, there is palpable potential for such substitution to involve discrimination against vulnerable groups of people such as immigrants, people living in poverty, those with disabilities, and possibly even people of color.

And according to a 2016 McKinsey & Company report, jobs involving the performance of physical activities or operating machinery in a predictable environment are the most susceptible to automation, namely in manufacturing, food service and accommodations (including the preparation of food for consumption), and retailing. As a consequence, workplace automation by AI systems may particularly affect vulnerable, working-class populations and other demographics of people who typically do such labor.

The Nudging of AI

Human rights norms are also prominent in instances where AI technology nudges us by (surreptitiously) encouraging us to act in one way or another. For example, recent news reports demonstrate the ability of machine learning to predict instances of schizophrenia. Consider also the use of AI in judicial decision-making processes, which presents both considerable difficulties and positive potential in terms of serving the public interest.

Indeed, the human rights risks arising from the influence of AI systems on judicial reasoning are especially numerous. This phenomenon calls into question whether a person has received a fair trial: the judge may have been swayed in ways that perpetuate actual or implicit bias rooted in the datasets on which the AI system relies.

Another risk arises when the role AI played in a judge's decision is not, or cannot be, explained. This was at issue in Eric Loomis’ case: the Wisconsin Supreme Court ruled that a judge’s use of closed-source recidivism assessment software in sentencing does not necessarily breach the constitutional right to due process (here, the right to challenge the software’s validity or accuracy), so long as the judge does not rely on the score exclusively and receives written warnings about the limitations of such scores. The US Supreme Court declined to review the decision in June 2017.

Other critical human rights can be implicated by a failure to afford a person a fair trial: consider that the outcome of a trial can affect a person’s right to be free from torture and, in some cases, the right to life itself.

By Daniel Bone, licensed under Creative Commons CC0 license

At the same time, we know that AI systems might improve a judge’s ability to make decisions in at least two ways. For example, AI technologies have been shown to predict more accurately than judges whether defendants are a flight risk while awaiting trial. As one recent study revealed, AI technologies can therefore be useful to judges and society insofar as they might contribute to lower crime rates and reduced jail populations.

Algorithmic AI systems can also be used to identify the factors that nudge a given judge: implicit biases relating to racial disparities, as well as extraneous factors such as what a judge ate for lunch or the status of her favorite sports team, are all decision-making influences on which AI systems can shed light.
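One way to picture this auditing role: fit a simple model of past decisions and inspect the weight it assigns to a factor that should be legally irrelevant. The sketch below does this on simulated data; the factor names, the data, and the effect sizes are all invented for illustration and do not describe any real study.

```python
# A toy audit: does an extraneous factor (hours since the judge's last meal)
# carry weight in simulated bail decisions? All data here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
case_strength = rng.normal(size=n)             # a legally relevant factor
hours_since_meal = rng.uniform(0, 5, size=n)   # an extraneous factor

# Simulated decisions that (problematically) depend on both factors.
logits = 1.5 * case_strength - 0.6 * hours_since_meal
granted = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X = np.column_stack([case_strength, hours_since_meal])
audit = LogisticRegression().fit(X, granted)
weights = dict(zip(["case_strength", "hours_since_meal"], audit.coef_[0].round(2)))
print(weights)  # a sizeable weight on hours_since_meal flags an extraneous influence
```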

Techno-Regulation: A Problem for the Rule of Law

Whether AI makes decisions for us or prods us in a certain direction, it raises worthwhile questions about the relationships between technology, regulation, and the notion of the ‘rule of law’. What paradigm do law- and policymakers wish to see come to fruition as they create legal systems in response to AI technologies?

For our purposes, the rule of law can be understood as a principle requiring the law to be clear, publicly visible or known, and applicable to all people (including lawmakers themselves) (see Radbruch). A crucial element of the rule of law concerns the ability for people to contest the application or consequences of a given law, typically before a court of law (see Hildebrandt at 10).

If computer code operates as law and effectively regulates our behavior (according to the prominent school of thought popularized by Lawrence Lessig), in what ways might AI technologies result in the arbitrary, opaque rule of a dominant group rather than upholding the principles of the rule of law? Recall, for example, that judges might increasingly rely on determinations made by AI systems to assess a person’s risk of recidivism, even though such algorithms might be protected by intellectual property law as trade secrets. In this way, AI technologies that influence judges also act as tools for regulating behavior.

Lawyer and legal historian Mireille Hildebrandt has also argued that emerging technological infrastructure such as artificial intelligence reconfigures our lives and de facto regulates our behavior. And for Hildebrandt, proponents of what she calls the ‘regulatory paradigm’ tend to frame the law as a neutral instrument for social engineering that can be freely replaced with other policy instruments.

The Neutral Face emoji, approved as part of Unicode 6.0 in 2010 and added to Emoji 1.0 in 2015

Yet Hildebrandt compellingly points to two major problems with such a neutral conception of the law. First, a neutral conception of the law frames human beings merely as rational agents who are trying to maximize their own utility, all while effectively and efficiently realizing specific policy goals — no matter the means of regulation, including de facto regulation afforded by technologies (164). Second, the regulatory paradigm is not about providing tools for citizens to challenge unreasonable governmental interference in a court of law. Instead, the regulatory paradigm simply aims to influence behavior, again in view of certain policy goals (165).

“Why not use technologies as neutral means to achieve policy goals,” Hildebrandt writes, “if they provide for more efficient and effective ways of guiding people? If the idea is to influence behavior instead of addressing action, and if the means are interchangeable, why not opt for a means that is less visible, less contestable and thus more influential in getting people to behave in alignment with policy objectives?” (see Hildebrandt at 165, emphasis added).

Hildebrandt goes on to explain that techno-regulation is a prime example of what happens when the regulatory paradigm goes uncontested: replacing legal regulation with technical regulation may indeed be more efficient and effective, but as long as the inner workings of AI technologies remain hidden and unchallenged, people will simply lack the means to contest any suspected infringement or manipulation of their rights.

Technology and Law in a Constitutional Democracy

But how might legislators and policymakers conceive of the purpose of the law, if not merely as a means to achieve specific policy goals? One cogent solution is Hildebrandt’s pluralist or ‘relational conception’ of the law, which presumes a connection between the design or engineering of a technology, the specific ways in which we take it up, and the affordances or far-reaching capabilities it has in our lives.

This means that when law- and policymakers reconfigure our social and legal fabrics to account for AI technologies, they ought to first incorporate into their technological assessments the norms and values that members of society wish to sustain. Second, they need to scrutinize whether the affordances or potential usages of such technology will transform or disrupt these norms and values — which will require an “up-stream involvement of those who will suffer and enjoy the consequences” of the potential use of AI systems (at 172).

There are two significant potential benefits to employing a relational conception of law and technology.

First, a relational conception of the law can help guarantee the three hallmark legal norms in a constitutional democracy: self-rule, as legal rules are established by a democratically chosen legislator; disobedience, as such rules can be violated; and contestability, as the legal consequences can be contested in a court of law (Hildebrandt at 10).

According to the Center for Civic Education, constitutional democracy is the antithesis of arbitrary rule. It is democracy of, by, and for the people, such that all citizens — rather than favored individuals or groups — have the right to politically participate, and the fundamental rights of all individuals are protected.

By Nick Youngson, licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license

Legislators and policymakers can turn to emerging, worthwhile concepts such as the notion of ‘society-in-the-loop’ artificial intelligence proposed by Iyad Rahwan of the MIT Media Lab, which embeds the judgment of society as a whole in the development and use of AI systems, so that AI machines behave in ways that a given society would consider ethical. And society is crying out for technologists, computer scientists, engineers, and entrepreneurs to design AI systems that are safe, accepted, and trusted.

To this end, each AI system ought to be designed so that those who use the system, those subject to it, and relevant decision-makers can assess its accountability (through the technology’s algorithms and data), its responsibility (through clear evaluations of causal relationships or chains of command), and its transparency (with respect to how the system’s algorithms do what they do).

In an ideal world, AI technologies would not only develop to facilitate self-rule, but would also function transparently so that we have the agency to reject any insidious nudging as well as challenge the legal consequences of any such technological process.

A second benefit of employing such a relational conception concerns the power of AI technologies to preserve the rule of law. In her writing on the notion of ‘moral crumple zones’, cultural anthropologist and AI researcher Madeleine Elish argues for reconfigured notions of moral and legal responsibility when it comes to human-robot interactions.

More specifically, Elish demonstrates the insufficiency of the traditional paradigm for determining responsibility, which relies on the amount of control exerted by the technology’s operator — such as the pilot of a plane. Instead, control in automated technology has become distributed across multiple actors, such as operators, manufacturers, software designers, and the software or hardware itself.

Depiction of a car’s crumple zone, public domain

Elish tells us that “[t]he result of this ambiguity is that [operators] may emerge as ‘liability sponges’ or ‘moral crumple zones.’ Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in an autonomous system may become simply a component — accidentally or intentionally — that is intended to bear the brunt of the moral and legal penalties when the overall system fails.” A case in point is Air France flight 447, where the official French report traced the cause of the crash to the pilots’ “total loss of cognitive control”, even though the plane’s autopilot processes appear to have played a role.

What is the root of the problem? A significant risk here is that any quick or readily available solution to the moral crumple zone—such as bestowing legal personhood on autonomous agents—may allow designers of such systems to dodge responsibility for any of their technological and design choices that may have contributed to the accident.

If software designers write the code and algorithms that constitute an AI system, and if such code invariably functions as de facto regulation of our behavior, then policymakers ought to consider the benefits of holding liable the designers and engineers who construct the digital structures that shape our behavior.

Beyond this, legislators and policymakers will likely need to further reconfigure our notions of moral and legal responsibility when software acts or writes part of itself in a way that goes beyond the designer’s explicit desires.

Conclusion: The Possibility of a Different World

As engineers, researchers, and entrepreneurs develop AI technologies, regulators ought to monitor the specific ways in which AI threatens human rights principles and the rule of law.

Legislators and policymakers should carefully but swiftly define “artificially intelligent” technology in order to account for any human rights abuses facilitated by such technologies. This is because AI systems can challenge human agency either by standing in for us or by nudging us, and can ultimately disrupt the rule of law by forming part of a given regulatory framework, yet with such opacity that the logic of the engineer or the machine cannot be challenged.

We must ensure that AI technologies develop so as to guarantee societal norms marked by self-rule, as well as the ability to both defy rules and contest their legal consequences. In so doing, we take critical steps in ensuring that AI technologies develop in the public interest.


Thanks to Urs Gasser, Gabriel Blue Cira, Natalie Pompe, Elena Sophie Drouin, Michael Lukaszuk, and another anonymous friend for helpful comments on this project.
