How we embed ethical thinking in our product team: A framework

Aoife Spengeman
Wellcome Data
Aug 9, 2019

When Wellcome Data Labs started thinking about ethical data science product development, I, like others in my team, had lots of questions about the algorithm we are building. Where did the training data come from? Could the data be biased? Are the outputs biased? Are the trade-offs in the algorithm fair? But we soon realised that there was a lot more to question than the workings of the algorithm.

As a team, we have come to understand that being aware of the fairness embedded in an algorithm is just one step in thinking about tech products ethically. Thinking about responsible product and design decisions is also critically important.

However, I found it difficult to know where to begin and how different efforts link up towards the same goal. Trying to figure out the ‘right way’ to build a data science product has felt at times overwhelming and confusing.

So, to try to make sense of all the interesting activities, debates, and discussions happening in the team, I decided to draw a model:

Handwritten poster with an early outline of the framework
Early version of our framework of thinking for ethical product development

With the help of a friendly designer and input from the team, the above squiggles and poorly drawn shapes developed into the below diagram, and this became the basis for how our team tries to think about ethical and responsible efforts in product development.

A designed diagram with each part of the framework outlined, showing the relationships between the parts

This framework of thinking has helped the team think about questions like:

  • What are the trade-offs in how the algorithm is designed and what impact will that have on our users and wider society?
  • How are we going to detect ethical concerns as they unexpectedly emerge through use of the product?
  • Is the training data unfairly biased and how does this affect the rest of the product?

Why irresponsible tech products exist

In a minority of cases, tech product creators know that what they are building is harmful to people. There are some regulatory mechanisms in place to protect people from these harms, such as GDPR and the UK Government’s Online Harms White Paper. In other cases, such as with dark patterns (e.g. addiction feedback loops, (overly) persuasive design, and manipulating defaults), there is little control or protection.

It is more frequently the case that harmful products are created without any intention of harm by their creators. We believe this happens mainly for the following reasons:

1. Lack of thinking space: As the focus for many tech companies remains the early release of competitive products, there is typically little space to consider unintended consequences in the product development cycle. This is very much related to the norms of the industry: not only is it unusual to dedicate time to ethical thinking, it is not socially promoted either, and many of us are aware of the sheer power of social norms.

2. Cognitive and social biases: Generally, humans are not very good at predicting what will go wrong if it hasn’t happened before. A range of cognitive and social biases can account for this, including implicit bias (e.g. prejudice), where we might overlook the importance of an issue, or availability bias, which means that what we predict as likely to happen is based on our past experiences.

The ‘Source of the Problem’ part of the diagram

We are also likely to have a preference for maintaining the current ways of doing things, known as status quo bias; this makes us unlikely to start thinking about negative unintended consequences when it is not the norm in product development.

In other cases, we may be over-confident about the positive effect of our product (optimism bias), and so find it difficult to see how it could go wrong.

Take the example of the 2010 Deepwater Horizon oil rig disaster, where around 200 million gallons of crude oil gushed into the Gulf of Mexico as a result of management’s refusal to accept the evidence of risks. In short, humans are not always rational, and that is something each of us has to remain aware of.

Think about data science and product development both separately and together

You will see that this diagram separates algorithmic development from product development. We found this an important distinction: data science sometimes gets shrouded in the idea that the negative impact of a product is caused by the algorithm itself. While this is sometimes the case, there are many examples of irresponsible product design, regardless of the technology behind it, that result in harm.

The ‘Team activities’ part of the diagram

“Executives who urge their programmers to move fast and break things clearly expect someone else to pick up the pieces.” — Lizzie O’Shea

At the same time, there is a lot of exploration and inspection of algorithms that only data scientists can do. While outsiders can raise questions, the only people who can assess the algorithm are the data scientists. That is why our data scientists are currently reviewing the algorithm for inaccuracies across different groups to check for unfair bias, as well as analysing the trade-offs between false positives and false negatives.
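
To give a flavour of what such a review can involve, below is a minimal sketch of a disaggregated error check in Python. The column names, groups, and data are hypothetical illustrations, not details of our product; large gaps in false positive or false negative rates between groups would be a prompt to look more closely at the training data and decision thresholds.

```python
# A minimal sketch of a per-group error review, assuming a labelled
# evaluation set with hypothetical columns "group", "actual", "predicted".
import pandas as pd

def error_rates_by_group(df: pd.DataFrame) -> pd.DataFrame:
    """Compute false positive and false negative rates for each group."""
    rows = []
    for group, sub in df.groupby("group"):
        negatives = sub[sub["actual"] == 0]
        positives = sub[sub["actual"] == 1]
        fpr = (negatives["predicted"] == 1).mean() if len(negatives) else float("nan")
        fnr = (positives["predicted"] == 0).mean() if len(positives) else float("nan")
        rows.append({"group": group, "n": len(sub), "fpr": fpr, "fnr": fnr})
    return pd.DataFrame(rows)

# Illustrative, made-up data: a noticeable gap in error rates between
# groups A and B is the kind of signal a review would flag for discussion.
example = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "actual":    [1, 0, 1, 1, 0, 0],
    "predicted": [1, 0, 0, 1, 1, 0],
})
print(error_rates_by_group(example))
```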

None of this will work if we don’t have…

Embedded awareness in the team: We need whole team involvement and active thinking to make proactive ethical thinking a reality.

Agreed values: To avoid endless debates about ‘what the right thing to do is’, we need some established ethical values and principles to guide us in the same direction.

Engagement with internal and external communities: We cannot undergo this process in isolation from others who are interested in tech/data/machine learning ethics.

The ‘underpinning efforts’ part of the diagram

So where does this leave us?

This diagram is a guide for thinking about next steps, such as algorithmic reviews and agreeing team ethical values. We cannot feasibly achieve every aspect of it, but with each step we become clearer about how it links to the wider picture. Undoubtedly it will evolve along with the team’s learnings and mindsets as we continue to experiment with embedding ethics into product team processes.

If you have feedback on this approach, or if it got you thinking about ethical product development, we would love to hear from you. Post your comments here or drop me a line at a.spengeman@wellcome.ac.uk.


Aoife Spengeman
Wellcome Data

UX researcher at Wellcome Trust Data Labs. Thinking about ethics in data science, human-centred design, and best UX research and design practices.