Not another set of ethical principles…

Aoife Spengeman · Published in Wellcome Data · Sep 20, 2019
Team word association on ethics

TL;DR

  • At Wellcome Data Labs, we have recently developed a set of ethical principles to guide our decision-making and development.
  • Principles alone cannot solve problems. We need to be proactive about making them useful in practice.
  • When the principles are put into practice, we expect tensions and trade-offs to arise between them. We pay attention to these tensions and challenge ourselves to resolve them.
  • If we cannot resolve the tensions, we take note of the trade-offs. By trade-offs, we mean that we have prioritised one principle over another in certain situations.

Wellcome Data Labs is a small, fast-growing team within the Wellcome Trust that strives to build responsible, human-centred data products. As part of this process, we have recently set out to develop a set of ethical values and principles.

Here they are:

Autonomy — the power to decide

Our products should promote the autonomy of people and limit the autonomy of the algorithms. They should not impair the freedom of individuals to set their own standards and norms.

Beneficence — do good

All that we work on should be underpinned by a clear benefit to the people it will affect. Human well-being should be prioritised as an outcome of all our products and services.

Non-maleficence — do no harm

We will do whatever is feasible to prevent harmful use of our products or harmful consequences arising from them.

Justice — be fair

We strive towards fair distribution of the benefits and burdens of our product among affected groups.

Explicability

Specific to machine-learning systems: our algorithms should be auditable, comprehensible, transparent, and intelligible to humans at varying levels of comprehension and expertise.

Team ideas of words associated with ‘ethics’: topical, bias, don’t, moral, problematic, important/critical, fairness, safety, right/wrong, etc.

Why did we decide to have ethical principles?

We’re lucky enough to be a truly interdisciplinary team with diverse perspectives, but we learned early on that we need to unite on what constitutes ‘being ethical’ in practice. When faced with a range of options for ‘the right thing to do’, we first have to agree on what ‘right’ is. So we decided that we needed a set of ethical values and principles to guide our decision-making.

Okay, you have principles, so what will you do with them?

Since we now have norms and standards to follow, we don’t have to worry about irresponsible use of machine learning and other technologies, right? Afraid not. We are all too aware of the limitations of norm-setting standards like principles: they are often too general to guide action. So it is equally important to implement them in a meaningful way.

Using tensions between principles to drive our focus

We decided to take a different approach to principles and values. I came across an excellent paper by Whittlestone et al. (2019), which has now become the basis for our team’s approach to ethical principles. Their approach recognises that ethical principles have limitations, which inevitably lead to tensions between them when they are put into practice.

I summarise the approach as follows:

  • Principles are highly general and therefore too broad to be action-guiding. Only principles that are narrower and more specific are likely to be useful in practice.
  • Principles come into conflict in practice. For example, some black-box machine-learning systems are not explainable but may be highly beneficial, such as a medical diagnosis system that is more accurate than doctors.
  • Different people may interpret principles differently. They can be ambiguous. Instead of bringing people together, they may mask important disagreements. For example, everyone may agree that fairness is important, but people may be deeply divided on what constitutes fairness. Or there may be disagreement on how to prioritise values over each other.

Adopting this approach, when we put our principles into practice we should expect, check for, and explore the tensions that arise. We will try to understand why implementing a principle is difficult.

It may be the case that tensions reflect a strict moral trade-off that we cannot reconcile, such as prioritising human autonomy over the potential benefit of a machine-learning system.

Or it may be the result of technological or societal constraints. For example, deep neural networks may not be inherently unexplainable; rather, current explanation methods may simply not be good enough. This is a constraint that may be resolved by future research and innovation. For example, Nvidia’s self-driving car programme has developed a visualisation system that allows creators to ‘watch how the AI thinks’, thereby making algorithmic decision-making more transparent.
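As a rough illustration of what ‘watching how the AI thinks’ can look like in code (a minimal sketch of a generic gradient-based saliency map, not Nvidia’s actual system; PyTorch and the untrained ResNet here are our own illustrative choices), the snippet below highlights which input pixels most influenced a model’s prediction:

```python
# Minimal sketch of a gradient-based saliency map (illustrative only).
import torch
from torchvision import models

model = models.resnet18()  # in practice you would load trained weights
model.eval()

# Placeholder image tensor; in practice, a preprocessed photo or video frame
image = torch.rand(1, 3, 224, 224, requires_grad=True)

output = model(image)
top_class = output.argmax(dim=1).item()

# Back-propagate the top class score to the input pixels
output[0, top_class].backward()

# Saliency: largest absolute gradient across colour channels, per pixel
saliency = image.grad.abs().max(dim=1).values.squeeze()  # shape (224, 224)
print(saliency.shape)  # visualise with e.g. matplotlib's imshow
```

Simple per-pixel attributions like this are only one rung on the ladder of ‘varying levels of comprehension and expertise’, but they show that transparency can be an engineering problem as well as a research one.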

Using ‘wicked questions’ to highlight tensions between principles

Why did we choose these particular principles?

One of the first things my colleagues said to me when we first discussed having shared ethical values was, ‘Don’t reinvent the wheel!’ A simple Google search for ‘AI ethics principles’ will tell you this is a fast-growing trend amongst corporations and research institutions.

Taking my colleagues’ advice, I reviewed the ethical principles that already exist and came across an excellent paper by Floridi et al. (2018), An Ethical Framework for a Good AI Society. Their review found that, of the 47 principles published, many overlapped with the basic values used in bioethics:

Beneficence

Non-maleficence

Autonomy

Justice

In addition to the four bioethics values, Floridi et al. (2018, p. 696) also recommend the inclusion of explicability as a fifth value for AI ethics:

“But they are not exhaustive. On the basis of the following comparative analysis, we argue that one more, new principle is needed in addition: explicability, understood as incorporating both intelligibility and accountability.”

Together these made up our set of five.

It’s not just about AI ethics…

The focus in the media and in the literature has been very much on ensuring there are principles to guide the responsible use of AI. Yet what is relevant to AI should also be relevant to tech product development in general. If you want to make a difference you should be thinking ethically about all aspects of your product, not only the machine learning element.

“There is a lot to be defined and interpreted in your principles,” you say… Yes, we agree!

There is a lot of vague language in the principles; that is the nature of aspirational principles. We know this, and we recently held a team workshop to challenge ourselves to interpret them and put them into practice. From this workshop, and with some excellent insights from the social scientist who works with us, Dr Alex Mankoo, we are beginning to make judgement calls on questions such as:

Who defines benefit?

Who defines harm?

Who are the affected groups?

And many more…

Notes from the workshop on putting the principles into practice

This is something we are thinking about on an ongoing basis. It requires careful thought and collaboration between different disciplines to responsibly answer the above questions. We will continue to blog on our Medium channel to share our thoughts and progress.

Aoife Spengeman
Wellcome Data

UX researcher at Wellcome Trust Data Labs. Thinking about ethics in data science, human-centred design, and best UX research and design practices.