It’s not enough for AI to be “ethical”; it must also be “rights-respecting”

Vivek Krishnamurthy
Berkman Klein Center Collection
4 min read · Oct 10, 2018

By Hannah Hilligoss, Filippo A. Raso, and Vivek Krishnamurthy

How many congressional hearings will it take to get big tech to do the right thing? Apparently, at least one more. Last month, Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey trekked to Washington to answer, yet again, for their platforms’ roles in spreading misinformation. Both touted artificial intelligence as a tool for combating misinformation at scale, but automation brings challenges of its own.

AI-assisted decision-making in the realms of criminal justice, hiring, lending, and more has been criticized for embedding and perpetuating patterns of human bias and discrimination. This is most obvious in the criminal justice system. Several years ago, ProPublica found that a widely used risk assessment tool, COMPAS, treated African Americans differently from others: the algorithm misclassified them as having a “high risk of reoffending” at twice the rate of white Americans.

In response to these critiques, corporate leaders and their employees have been making very public commitments to injecting ethics into the deployment of AI systems for applications ranging from content moderation to credit scoring. Increasingly, however, governments, scholars, and activists are recognizing that ethics are not enough. Instead, AI must also be “rights-respecting” to be socially acceptable.

Human rights and ethics might seem to be synonymous. They’re not.

Human rights are universally recognized in law. The Universal Declaration of Human Rights (UDHR) is the foundation for a multitude of human rights treaties, domestic laws, and constitutional provisions that are legally binding.

The human rights regime was developed through a transparent and legitimate process involving governments that represent the full range of the world’s ethnic, cultural, and religious diversity. Some might argue that the values encoded in human rights are not completely universal, but they’re certainly more representative, legitimate, and concerned with protecting the vulnerable than the ethical compasses of Silicon Valley CEOs.

Furthermore, there are internationally accepted best practices for protecting human rights. The UN Guiding Principles on Business and Human Rights (“Guiding Principles”) recognize the responsibility of corporations to respect human rights — principally by conducting due diligence into their human rights impacts — and to mitigate and remedy rights infringements arising from their operations.

Companies must follow the law; they need not behave ethically.

Following a human rights-based approach also moots the question of whether tech companies actually care about the social implications of their products or are merely paying lip service: compliance is judged against legal standards, not intentions.

Consider AI in the employment and recruitment context. Imagine, for example, that a tech company deploys AI in its hiring process to screen job applicants. If the company trains its AI to select new employees based on characteristics and skills of current or past successful employees, the algorithm is likely to “learn” what makes a good “tech bro” and to hire more of them.

Reasonable people can disagree as to whether this is an ethically tenable state of affairs. Some could argue that there’s nothing wrong with a company using AI to automate the hiring of new staff who share the characteristics of successful current employees. Others, starting from a different conception of ethics, will disagree; after all, there are as many ethical viewpoints as there are people.

By contrast, human rights law provides us with a much clearer path out of this ethical morass. The corporate responsibility to respect human rights requires companies to avoid adversely impacting the right of all people to equality and non-discrimination. From a human rights perspective, addressing the discriminatory impacts of the algorithm in the scenario sketched out above is not a matter of ethical discretion, but of legal obligation.

Universal human rights vs. situational ethics

We are all in favor of companies and engineers embracing ethics, but we fear that it isn’t enough. Given the widely divergent views of what constitutes ethical conduct, an approach to AI based solely on ethics risks hard-wiring developers’ own biases into the systems they build.

By contrast, the human rights regime provides the clarity and certainty of law. It transforms voluntary promises of ethical behavior into mandatory requirements for compliance with an established body of law. It lays down procedures for determining when human rights have been violated and provides individuals with redress and remedy when such violations have been established.

Of course, a human rights approach to AI is not infallible, and ethics certainly has an important place in the conversation. Even so, human rights deserve pride of place among the navigational aids guiding the development and deployment of AI systems that advance the common good while recognizing the fundamental dignity of each and every one of us.

Hannah, Filippo, and Vivek are co-authors of Artificial Intelligence & Human Rights: Opportunities & Risks, a research report from Harvard University’s Berkman Klein Center.

Vivek Krishnamurthy

I write about the intersection between tech and human rights. I practice law at Foley Hoag LLP and teach at Harvard Law School.