How to master the craft of Applied Ethics in AI

Catalina Butnaru
Applied Artificial Intelligence
6 min read · Jan 13, 2019


Photo by rawpixel on Unsplash.

This is for anyone who pursues fairness and the advancement of wellbeing as an AI Ethicist and wants to develop the skill of applying ethical frameworks in products.

Applied Ethics in AI is similar to mastering a craft: it requires focus, skills, experience, and a mindset similar to that of a designer.

The illusion of authority in AI Ethics

In 2016 the first academics to talk about AI ethics were criticized for trying to reinvent Asimov’s Laws of Robotics and for believing that non-engineers could have anything important to say about AI. In fact, most fellows in IEEE were engineers by training and came from very different backgrounds and countries. The working groups focused on redefining concepts such as transparency, accountability, responsibility, and wellbeing. I was probably the least trained and experienced of all, but I thought I was onto something and pursued it relentlessly.

In 2017 AI Ethicists raced each other to evangelize their ethical frameworks. Companies such as Microsoft, PwC, Google, and Accenture also joined the movement of “defining the standards in AI Ethics”. A plethora of ethical principles and frameworks started to be publicly disseminated. By the end of 2018, everyone seemed to have an ethical framework.

Thankfully, I dropped out of that race early because I knew my limitations: I was not an ethicist. I also lean towards the moral relativist camp: there is no single ethical framework to rule them all; AI Ethics depends on the industry, the social norms, and the regulatory landscape governing that industry.

Unlike moral relativists, however, I do recognize the need to reach a consensus on how we define concepts such as explainability, embedded morality, agency, autonomy, and so on. This means I believe there are fundamental principles that are usually shared across cultures and industries, and those are reflected in 80% or more of the ethical frameworks out there.

Even if we do not share the same ethical framework, we need to speak the same language in AI Ethics. For example — what exactly does it mean to “trust” AI?

In reality, those in pursuit of an Ethical Framework will need to balance normative relativism with corporate responsibility and broadly accepted standards.

Normative relativism refers to differences in what civil laws consider ethically permissible. For example, the European Commission published its Draft Ethics Guidelines for Trustworthy AI, but institutions governing other regions might disagree.

Standards are still being defined, but you do have several official guidelines, developed over the past few years, to work with.

Once you look at Corporate Responsibility, you dive into muddy waters. At times, corporate responsibility means employees are expected to take responsibility for delivering products and services that protect human wellbeing. In other cases, it is more of a public-perception exercise.

The Beginner’s Mind in AI Ethics

In the old days, self-proclaimed experts and pioneers loved illuminating the layman with books on “singularities” (technological, economic) and on our relationship with future “AI overlords”. These great men were usually… men: sitting on panels, giving keynotes, shaking hands, and chatting away at lush, exclusive events with the world’s richest and most famous.

A manel (panel full of men) discussing Superintelligence and its impact on humanity.

In the meantime, the most valuable and practical advancements in AI Ethics were made by brilliant but not-so-famous people whose names we hear today: Prof. Kate Crawford, Prof. Virginia Dignum, Prof. Alan Winfield, John Havens, Joanna Bryson, Catherine Muller. And here’s a list of 100 women who have contributed to the practice of AI Ethics.

Although I was mentioned on that list, I am not that special. In fact, what I’d like to encourage you to think about is that we share the same mindset and the same goals. That’s huge.

In Buddhism, the beginner’s mind (shoshin) refers to being eager to learn the new by unlearning the old. AI Ethics is so young, relative to AI as a branch of computer science, that we are all more or less feeling our way in the dark here.

Armed with the right mindset, you can master the craft of applied Ethics in AI. No matter how small your progress today, you get to hold the torch for future generations.

What do you need to do to apply Ethics in AI?

What do you need to learn before applying Ethics in AI?

  1. You need a good understanding of how different institutions and organizations take action to define, enforce, and audit what is ethically permissible today.
  2. Understand which ethical framework is most relevant to your field. Not all guidelines you will find out there are relevant. If AI is applied in the field of media and communications, there is more reason to be concerned about its impact on free expression, unwanted impersonation (deepfakes), and fake news than about displacement by automation.
  3. Understand how each Principle is relevant to your team’s technical design of AI or AS (autonomous systems). What does it mean to satisfy the Principle of Transparency when you are using a combination of an RNN and decision trees to diagnose primary diseases? (A minimal sketch of one possible starting point follows this list.)
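
To make that question concrete, here is a minimal sketch, not taken from any framework mentioned above, of one way engineers might start satisfying a transparency requirement for the decision-tree part of such a pipeline. It assumes scikit-learn, and the feature names and data are hypothetical placeholders.

```python
# Minimal transparency sketch for the decision-tree component of a
# hypothetical diagnostic pipeline (the RNN part would need separate tooling).
# Assumes scikit-learn; feature names and training data are made up.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["age", "bmi", "blood_pressure", "glucose"]  # hypothetical inputs

# Synthetic stand-in data; in practice this would be the team's real dataset.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Human-readable decision rules: one concrete artefact a team could attach
# to a "show which inputs drove this diagnosis" transparency requirement.
print(export_text(tree, feature_names=feature_names))

# Relative weight of each input feature across the whole tree.
for name, importance in zip(feature_names, tree.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

The point is not this particular output, but that “Transparency” only becomes actionable once it is translated into artefacts like these, per model and per audience, rather than left as an abstract principle.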

What do you need to practice to master applied Ethics?

  1. Practice wearing multiple hats and changing perspectives. What does the engineering team need from me? Probably very clear requirements. What do stakeholders need from me? They need me to measure the impact of an ethically designed product on shareholder value, and they need to be able to trust that the team will not damage the company’s reputation. What does the design and research team need from me? They probably need to understand how to translate an Ethical Framework into design sprints. Read Future Ethics by Cennydd Bowles.
  2. Minimum Ethical Product. Prepare your team for execution by helping them break down Applied Ethics into manageable, deliverable chunks. Deliver a Minimum Ethical Product, gather feedback and evidence, and build on that. Move forward. Ethics will ALWAYS change. The Market will react. But you CAN start with a minimum ethical product, instead of waiting for regulations to force a point of view on your team’s thinking.

Who can help you in your journey?

  1. You can greatly benefit from working with professionals trained to understand the impact of technology on society. They might help you exercise your team’s moral muscle until you get better at distinguishing possible scenarios from preferable scenarios you can control.
  2. Make room for team exercises designed to map out your core values and how each team member can take responsibility for acting on them. For example, if transparency is important to the team, engineers can exercise it by aiming for full explainability when designing and training AI, and your marketing team can exercise it in the way they communicate with users.
  3. Your team. Don’t let Ethics in AI be yet another Diversity Program with false metrics and surface-level activities that further stigmatize and victimize minorities. Your team needs to understand why Ethics is not only necessary but empowering as well. I’ve noticed time and time again that people tend to be intellectually married to their job, even when their job is clearly hurting people. Advertising professionals will do anything to get that click, even if it involves deceiving people.

If your team embraces applied Ethics as a tool to be better at their job, then you’re more likely to succeed than if you only enforce “AI Ethics” as a team responsibility.

If you liked this article but need more help, please get in touch. I can help you with aligning your product and design strategy with an ethical framework that works for your team and industry.

You can also reach out to professionals who have experience helping teams apply ethics, or follow our Ethics Track and join our community events in 50+ cities around the world.
