Ethics, ethics, and ethics!

Reaction Paper #4 for Artificial Intelligence Law and Policy

trialnerr0r

Mar 28, 2019


The plurality of ethics, or why I found ethics troublesome

Weeks ago, we had a light discussion about our “obsession with ethics”. “Obsession” means “a persistent disturbing preoccupation with an often unreasonable idea or feeling”[1]. I chose the word knowing it could come off as slightly negative, because at first I really could not understand why most mainstream approaches emphasize the importance of ethics. Why are we choosing ethics, something known for its fluidity and plurality, to be the core of our remedies?

To me, the plurality of ethics is inevitable. Ethics is about right and wrong, moral and immoral. We can agree on facts, since those can be assessed with objective metrics, but it is hard for us to do the same with ethics. Ethics are fluid and grounded in different cultures and beliefs. We might agree that one thing is ethical yet hold different views on other subjects, simply because our cultures value different aspects. If we agree that cultures are equal and respect diversity, there will always be more than one set of ethics. The diversity of ethics can be inspiring, but it can also be quite troublesome. Even within communities that share common ethics “there will be scope for disagreement”[2], let alone the conflict a community faces “when it includes constituencies that categorically condemn various acts and practices as compromising human dignity.”[3] In short, the uncertain scope of ethics and its fluid nature are the main reasons I was originally skeptical toward ethics-based approaches.

Why ethics?

To explain why we are seeing all these ethical guidelines, I traced back to where I was first introduced to the idea that machines should be ethical, and I surprisingly found the answer (or at least something that helps make sense of the whole scene). The first advocate of ethical machines I encountered was Ronald Arkin, a professor who specializes in robotics and has worked with the Defense Advanced Research Projects Agency (DARPA). His past work tackled autonomous weapons, lethal ones. Since these are lethal autonomous units that could be used in wartime, his team tried to code the principles of International Humanitarian Law into the units. I do not know how those units perform in reality, but in the book Governing Lethal Behavior in Autonomous Robots, they did manage to propose a set of logic designed to incorporate principles such as discrimination. I was thrilled to find out humanitarian law could be written into machines during my first read, so even though it felt slightly odd that Arkin, who works so closely with the law, is an active advocate for ethical robots, I did not think much of it.
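
The idea that a legal principle like discrimination can be written into a machine is easier to see with a toy example. Below is a minimal, hypothetical Python sketch of a veto-style constraint check, loosely in the spirit of Arkin’s “ethical governor” concept rather than his actual design; the names, fields, and thresholds (Target, engagement_permitted, max_collateral) are invented here purely for illustration.

```python
# Hypothetical sketch: encoding IHL-style principles as hard pre-conditions
# on action. Loosely inspired by Arkin's "ethical governor" idea; not his
# actual implementation. All names and rules here are illustrative only.
from dataclasses import dataclass


@dataclass
class Target:
    is_combatant: bool          # principle of discrimination: combatants only
    near_protected_site: bool   # e.g. hospitals, cultural property
    expected_collateral: float  # crude proxy for proportionality


def engagement_permitted(target: Target, max_collateral: float = 0.0) -> bool:
    """Return True only if every encoded constraint is satisfied.

    The governor acts as a veto: the default is to refrain, and any single
    violated constraint blocks the action outright.
    """
    if not target.is_combatant:                      # discrimination
        return False
    if target.near_protected_site:                   # protected objects
        return False
    if target.expected_collateral > max_collateral:  # proportionality
        return False
    return True


# A lawful target near a protected site is still vetoed.
print(engagement_permitted(Target(True, True, 0.0)))   # False
print(engagement_permitted(Target(True, False, 0.0)))  # True
```

The point of the sketch is the structure, not the particular rules: each principle becomes an explicit, auditable condition, and the permissive path exists only when all of them hold.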

Some time after, it struck me that people might choose ethics because what they want to achieve is beyond legal approaches. As long as states comply with the laws of war, such weapons are not inherently illegal, so traditional approaches such as courts would not be helpful. Advocates therefore resort to something that naturally sets a higher bar than law, which turns out to be ethics (at least to me, complying with moral standards is harder than simply not violating regulations).

There are two approaches to ending legal conduct one does not desire: we either stop the conduct with force, or we reason with stakeholders and appeal to them with incentives. We do see the private sector calling for regulation[4], but that is quite an uncommon case. Most of the time, scientists respond to situations they do not like by trying to develop with more awareness in mind. To advocate that awareness, initiators first need a way to refer to these “better than the status quo” development requirements. The requirements are certainly legal, but more than just legal. They ask developers to bear more obligations than the existing norms demand, for the greater good, and to keep social impacts in mind. It seems reasonable to use “ethical” to describe such ambitions, since “ethics tends to suggest aspects of universal fairness and the question of whether or not an action is responsible”[5]. Hence, ethics seems a good way to frame sets of rules that ask stakeholders to be more responsible for their work. (Put more cynically, “being ethical” sounds better than “bearing some extra obligations for no reason”.)

Should regulation be ethics-based?

The Institute of Electrical and Electronics Engineers (IEEE) released Ethically Aligned Design in 2017, and the European Union is to release the final version of its Draft Ethics Guidelines for Trustworthy AI later this year[6]; it is quite clear regulators are adopting similar stances. With the right regulatory techniques, as Yeung suggests, we would be able to create behavioral changes and reshape society (ideally speaking). But if Brownsword is right that for a moral community to flourish, individuals must have the capacity for genuine choice, is promoting ethics frameworks arbitrarily determined by someone other than the people themselves, mostly technologist elites[7], itself ethical?

A reverse take on that question is also tricky: would it be ethical for experts who know there are better choices to decide to do nothing? An autonomous vehicle study reveals that “users do not actually want the car to drive like they drive. Instead, they want the car to drive like (the style) they think they drive.”[8] The study shows there is a gap between users’ understanding of their preferences and reality. Should we let users choose what they think they like most, or should experts interfere and make better choices for them? Which would be more ethical?

The questions above highlight some downsides of ethical approaches: the plurality and uncertainty of ethics. Brownsword also pointed out that such approaches can lack democratic legitimacy, since the sets of ethics chosen are not validated by citizens. With all that in mind, even though I have learned more about where ethics-based approaches come from and what they wish to achieve, I remain a skeptic toward ethical guidelines. I still strongly believe stakeholders should be held accountable, be it through more transparency or specialized tort liability, but I think we need to translate abstract ethics into concrete requirements (which I know takes a tremendous amount of work) or at least less shifty language, so we can really talk about the contexts and plans we desire. With concrete plans, it would be easier for legislative institutions to discuss feasibility and how to bring stakeholders on board; it would be easier for parliaments to review principle-translated tasks, which would solve the democratic-legitimacy problem at the same time.

[1] Merriam Webster, Obsession, https://www.merriam-webster.com/dictionary/obsession

[2] Roger Brownsword, So What Does the World Need Now? Reflections on Regulating Technologies, p. 33

[3] Id., p. 32

[4] Andrew Buncombe, Microsoft calls for regulation of face recognition technology after admitting it could discriminate against women and people of colour, https://www.independent.co.uk/life-style/gadgets-and-tech/news/microsoft-google-ai-facial-recognition-technology-brad-smith-discrimination-civil-liberties-us-a8673181.html

[5] Merriam Webster, Ethics vs Morals: Is there a difference?, https://www.merriam-webster.com/dictionary/ethics#note-1

[6] European Commission, Draft Ethics guidelines for trustworthy AI, https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai

[7] I am aware that we are all working for inclusion and fighting for diversity; this note is not meant to undermine anyone’s work but to prompt a question.

[8] Chandrayee Basu, Qian Yang & David Hungerman, Do You Want Your Autonomous Car To Drive Like You?, HRI ’17

Epilogue

Despite my skepticism, here is a powerful call for ethical codes by the amazing Prof. Crawford.

You and AI — The Politics of AI by Kate Crawford
