AI Justice: When AI Principles Are Not Enough

The following is a personal narrative of lessons learned from my research. For a more concise look at the research, see the project announcement at casbs.stanford.edu.

Şerife (Sherry) Wong
Aug 5, 2019
Şerife Wong, Faith and Trust, 2019, collage of visually repetitive stock images found online during research: robot hands divinely reaching out to, or shaking, human hands

Fluxus Landscape is an art and research project mapping about 500 stakeholders and actors in AI ethics and governance. It casts a broad net, and each included stakeholder defines artificial intelligence and ethics in their own terms. Together, they create a snapshot of the organic structure of social change — showing us that development at speed can create vortices of thought and intellectual dead zones open to exploitation.

During this research, a large number of AI ethics principles were published by public and private institutions (you may view a comparison of principles in the Berkman Klein Center Cyberlaw Clinic’s visualization, AI researcher Yi Zeng’s analysis of over 40 principles, or an analysis of 84 such documents from the Health Ethics & Policy Lab, ETH Zurich). I read many of these as they were released. When you hear a word over and over, its meaning tends to dilute. Accordingly, after seeing so many frameworks for ethics, the phrase “AI ethics,” which had been a guiding objective in my life, began to lose its meaning. At the same time, discussions of “ethics washing,” or fake ethics, started to trend in my Twitter feed.

I read hundreds of well-thought-out plans and carefully worded strategies for how humanity would reap the benefits of AI while buffering against negative societal impacts. The same dilution applied to the often-used phrases of AI ethics — responsible AI, trustworthy AI, AI for good, beneficial AI — which began to sound like empty marketing terms.

In art, we learn to draw more realistically by shifting focus from the object we are drawing to see the negative space around it. This way, we use our eyes instead of being biased by what we think an object looks like in our minds. When I removed the mental picture of how I had conceptualized AI ethics, I was able to see what was left behind in the negative space. When words are neutered of their meaning, what is left behind is everything else. How or in what context are we using these words? What purpose do these words serve?

The principles, roadmaps, charters, and ethics codes were helpful; many were deeply thought out and educational. But I could not ignore that they were also part of a larger mechanism at work. In my steady march through the endless internet, I saw a new shape emerge from the negative space: it was not about ethics; rather, it was about assuaging public fears in order to sell AI. Behind the talk of ethics were corporate investment strategies and national economic agendas.

Conflicts and Consensus

This does not mean that all of AI ethics is about money and power. Despite the AI hype, the potential shifts of power are taken seriously by both institutions and people. Most of the time, individuals in AI policy and ethics are working hard to do right by the world. This was true in every sector I researched: non-profit, for-profit, small consultancies, academia, government, even in the most commercial enterprises. I saw company leaders listening to bottom-up AI ethics ideas from their employees, and AI researchers engaging the public in ethics meetups. However, even these good intentions are in conflict. Each of them has a different context, a different interpretive lens, a different definition of ethics. As we move from publishing principles to regulating AI, major conflicts will emerge.

The ingredients for escalating tensions are in place. For example, multiple organizations represented on the map are involved in the use of AI for warfare: the International Committee for Robot Arms Control advocates for the prohibition of lethal autonomous weapons; groups like Google Walkout and the Tech Workers Coalition were part of a successful protest to pressure Google to drop its work on the military’s Project Maven; the Defense Innovation Board, chaired by former Google CEO Eric Schmidt, has been convening members from academia and industry in an effort to publish AI principles this year. At a recent meeting, we heard from members of the public such as AI researcher Toby Walsh, whose open letter calling for a ban on offensive autonomous weapons gathered 20,000 signatures, over 4,000 of them from the AI research community. Among his concerns is how brittle and opaque these systems remain. Meanwhile, CNA, a federally funded research center that advises the military, is arguing on ethical grounds for the use of lethal autonomous weapons. The United Nations has been convening experts under the Convention on Conventional Weapons to assess emerging technologies in lethal autonomous weapons systems and find consensus among 125 member states on their definition and use. We must also consider wider touchpoints such as the newly minted Joint Artificial Intelligence Center (JAIC), which has a mandate to accelerate the broad adoption of AI/ML technologies. There are also companies bidding for these contracts, individuals who have quit or vowed not to work on developing AI for war, non-US government initiatives, and their sources of funding. Where will this lead us next?

These and many other organizations have competing interests, yet they may still reach consensus, become future partners, or strike future compromises. We tend to underestimate the role of randomness in life; tensions might escalate and damage fragile relationships among governments.

Symptoms of Inequality

Imagine in the future a refugee crisis driven by climate change and compounded by AI labor disruption, and the resulting political, economic, and ideological consequences. Have we faced challenges like this before? Complications with ethics are as old as humanity. AI infuses these struggles with new complexity, as the pace of innovation in this field is unprecedented. If we are not able to navigate these societal changes effectively, the resulting increase in human suffering will also unfold at an unprecedented speed and scale.

Much of what I have observed in this landscape focuses on AI’s impact on society. Algorithmic systems show up in every aspect of our lives: translating our languages, recognizing our voice commands, showing us advertising, and tagging our friends and family in our photos. Many of these automated decision-making systems have dark consequences: their use in criminal justice, banking, social welfare, employment, and other areas disproportionately harms marginalized populations. These systems are built on data that reflect the biases of our society and perpetuate them with computerized efficiency. The tech industry, governments, and foundations are funding research and releasing toolkits to navigate these problems. However, although bias in AI must be addressed, it is also symptomatic of greater societal bias. We should not make the mistake of attending to symptoms without acknowledging underlying systemic inequalities and combating disparities in economic and political power. If we perpetuate the status quo of an infrastructure that gambles our survival as a species on the largesse of industry and the wealthy, then our AI policies will surely fail.

AI Justice

As my research progressed, I found myself spending more time looking at organizations working in digital and human rights. They too use recurring words: equity, justice, poverty, racism, and gender. Increased exposure to these human rights organizations had the opposite effect of my overexposure to AI principles: I felt hopeful again.

Justice is a framework that goes beyond the policy developments necessary to govern AI. Exploring the negative space around AI ethics as a proxy for the social structures of modern justice is about looking to our possible future while being grounded in our history. Justice framing in digital rights is not new, and AI justice as a concept echoes throughout the map. We’ve seen it in Joy Buolamwini’s Algorithmic Justice League, in ProPublica’s investigative journalism into criminal justice, in the Electronic Frontier Foundation’s and ACLU’s work fighting algorithmic decision-making systems in criminal justice, in workshops and talks at RightsCon, and in many more places. Justice as a framework for AI is also increasingly used. In May 2019, there was a discussion at NYU on Co-opting AI: Justice, and just recently, a meetup on AI Justice was held in New York.

Justice centers the conversation not on technology but on the people who have been impacted by the use of AI systems. This includes harms from machine learning used in social media propaganda and from AI-enabled surveillance systems. The map similarly includes some of these voices as part of the landscape of AI ethics and governance. For example, included on the map are three Uyghur human rights advocacy groups: the Uyghur Human Rights Project, the World Uyghur Congress, and the Xinjiang Victims Database, the latter a grassroots project that has collected almost 5,000 personal testimonies documenting the human cost borne by the nearly three million people reportedly held in “re-education centers”. Ethnic cleansing using both traditional control systems and a digital panopticon enabled by sophisticated surveillance tools is a pernicious new social impact of AI. The export of these surveillance technologies to any government or private entity that can afford to pay will be hard to undo.

It takes bravery to fight for justice in the face of power. That means it takes humans. An institution cannot be brave. It cannot wake one morning, look itself in the mirror, and commit itself to making a selfless change. But individuals can change, and they do so constantly. Cultivating the arc of justice requires the will (and attention) of people. We are the creators of our institutions, culture, and tools; and we are capable of changing them. We may not be in control of the AI of the future, but we are in control today. Let us question what purpose we want our AI policies to serve and look to organizations that have long been working in human rights and social justice for guidance. Until we develop AI that can solve its own regulation, we are, for better or worse, the only ones who can.
