Emma Day
Tech Legality
Oct 11, 2023
Heroes or cowards? Ethics and tech through a human rights lens

At Tech Legality our work focuses on technology and human rights, and I have been wondering lately whether the term ‘ethics’, as used in the context of technology, is something of a Trojan horse.

Ethics is increasingly the banner under which the impacts of technology on society are discussed, and I am concerned that this framing bypasses key principles of international human rights law, as well as frameworks such as the UN Guiding Principles on Business and Human Rights.

I confess I am especially skeptical because the framing of technology and society issues in terms of ‘ethics’ seems to emanate from Silicon Valley, the powerhouse of Big Tech, in the United States, a jurisdiction that does not recognise international human rights law in its judicial system.

That is why, when I saw the opportunity to set up a reading group to follow the Stanford course on Ethics, Technology, and Public Policy for Practitioners, I jumped at the chance.

(All ideas expressed in this blog are my own, and others in the reading group may well have different views. While Tech Legality is facilitating the group, its members are not affiliated with us.)

Our reading group (left to right, from top): Valentina Vivallo, Alexander Laufer, Ni Putu Candra Dewi, Kruakae Pothong, Andrea Olivares Jones, Lan Shiow Tsai, Allan Maleche, Veronique Lerch, Gemma Brown, Louise Hooper, Ananya Ramani, Stephanie Haven, Stacey Cram, Rachel Chambers, Paul Roberts, Elena Abrusci, Daisy Johnson, Sabine K Witting, Fiona Iliff, Clare Daly, Laura Berton, Ayca Atabey, Lama Almoayed, Esteban Ponce de Leon, Mark Leiser, Emma Day, Trisha Ray, Jean Le Roux.

Our reading group is made up of 30+ individuals with diverse backgrounds who are all keen to explore the topic of ethics and technology through a human rights lens. We bring expertise in different areas of human rights law, privacy law, ethics from non-tech disciplines, UX design, and tech company practice, and we are joining from many corners of the world.

Week one of the Stanford course focused on the story The Ones Who Walk Away from Omelas, by Ursula K. Le Guin. To summarise:

The people of the city of Omelas are not simple folk, but they are happy. There is no king, they do not use swords, and they do not keep slaves. There is no guilt in Omelas.

In a dark and dirty cellar a child is kept alone. Everyone in Omelas knows the child is there, and they all understand that all of the beauty and good things in their world depend wholly on this child’s misery. If the child were freed, Omelas would wither and everything would be destroyed. These are the strict and absolute terms of living in Omelas. Yet after visiting the child, some people leave Omelas, and they never come back.

The Stanford course instructions were for us to discuss:

Are the people who leave Omelas heroes or cowards?

Nobody seemed to feel categorically that the people who left Omelas were either heroes or cowards. Some in fact wondered whether, morally, the suffering of a single child might be worth it to allow the rest of society to be happy, healthy, and free. Many others felt this was deeply immoral… But who gets to decide?

In our discussions, several analogies were suggested for what Omelas represents. For some, Omelas represented the experience of working within a tech company and deciding whether to stay and try to fix harmful impacts the company is ignoring yet profiting from, or to leave in despair. For others, Omelas symbolised capitalism itself, the system most of us live in, which depends on the exploitation of others but is very difficult either to fix or to escape.

In terms of ethics, some of us felt that ethics are somewhat akin to moral values, and in many contexts we know we need more than these to prevent the worst atrocities from occurring. We discussed whether industry self-regulation is like a moral values-based system, in contrast to regulation, which rests on (ideally) enforceable and accountable systems of democratic government.

A supplementary reading from the Stanford course materials went into detail about contexts where ethics have proven not to be enough to prevent atrocities, quoting Albert Bandura’s work on moral disengagement:

“The self-regulatory mechanisms governing moral conduct do not come into play unless they are activated and there are many psychosocial mechanisms by which moral self-sanctions are selectively disengaged from inhumane conduct.”

Examples include morally justifying violence in the name of ideology, religion, or nationalism; using sanitised language to distance people from the harms they cause, such as referring to civilians killed by bombs as “collateral damage”; minimising one’s own agency in the harm caused; and diffusing responsibility among many.

So far our group has more questions than answers. Some of the questions we are taking with us into next week include:

  • Are ethics equal to morals?
  • When are ethics not enough?
  • What is the difference between ethics and human rights?
  • Is the international human rights law framework enough? Or do we sometimes also need ethics to supplement this?

Next week we are reading about algorithmic decision-making and bias. More soon.

#EthicsTechPolicy #ResponsibleAI #EthicsandTech #HumanRightsandTech @Tech Legality


Emma Day is a human rights lawyer, a specialist in law and technology, and co-founder of Tech Legality. UK raised, she has worked for 20 years in Asia, Africa, and North America.