How to Think About Use-Case Ethical Risks

Reid Blackman, Ph.D.
Published in Product AI · Nov 9, 2021

When we talk about AI ethics in product development, we often talk about biased or discriminatory outputs, unexplainable outputs, and privacy violations. That’s for good reason: they really are big risks, and they result from the way AI, or more specifically ML (machine learning), works. They are not, however, the only ethical risks we need to think about, because the use case for the product plays a large role in what the ethical impacts might be.

Think about, for instance, the ethical risks of self-driving cars. The primary risks that come to mind there are not bias, explainability, and privacy violations, but rather killing and maiming. Facial recognition software poses problems relating to discriminatory outputs (e.g. failing to recognize black women), but let’s not forget about the government, corporate, and even private citizen surveillance it enables. Consumer drones + facial recognition software? That raises a host of ethical issues. And to mention a few more: products that help companies engage in dynamic pricing threaten the perceived and actual trustworthiness of a brand; products that “nudge” consumers can cross a line into outright manipulation; and products that create newsfeeds can unduly influence people’s access to news and information.

The moral of the story is that product teams need to vet for a wide array of ethical risks that can arise on a use-case basis. There are a number of ways to do this, e.g. checklists/questionnaires, consulting ethicists, engaging in pre-mortems, red-teaming, and so on. That said, at least one such method should include two features.

First, it should be comprehensive, in the sense that it asks teams to consider all possible ethical risks, not just the big three of bias, explainability, and privacy. In my own work, I use the following categories:

1. Physical harm (e.g. death, injury)

2. Mental harm (e.g. addiction, anxiety, depression)

3. Autonomy (e.g. violations of privacy)

4. Trustworthiness and respect (e.g. failing to provide necessary explanations, failing to take the well-being of users/consumers/citizens seriously)

5. Relationships and social cohesion (e.g. sowing social distrust, polarizing populations)

6. Social justice and fairness (e.g. discriminatory outputs, violating human rights, wealth inequality)

7. Unintended consequences (e.g. those arising from true positives/negatives and false positives/negatives, or from malicious actors)
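
If your team tracks its assessments in a spreadsheet or in code, it can help to pin these seven categories down as a fixed vocabulary so every review uses the same labels. Here is a minimal sketch in Python; the class and member names are just illustrative, not any kind of standard.

```python
from enum import Enum

class EthicalRiskCategory(Enum):
    """The seven categories above, as a shared vocabulary for reviews."""
    PHYSICAL_HARM = 1
    MENTAL_HARM = 2
    AUTONOMY = 3
    TRUSTWORTHINESS_AND_RESPECT = 4
    RELATIONSHIPS_AND_SOCIAL_COHESION = 5
    SOCIAL_JUSTICE_AND_FAIRNESS = 6
    UNINTENDED_CONSEQUENCES = 7
```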

Second, it should ask your team to identify the relevant stakeholders, which means not only the users of your product but also those who are impacted when others use it.

Finally, putting these two together, your team should ask how each of those stakeholders may be at risk in each of the 7 categories listed above. Then, of course, you need to think about which features of your product create those risks and how you can alter the product to mitigate them.
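
To make that cross-check concrete, here is a minimal sketch, continuing the Python example above, of a stakeholder-by-category grid. The stakeholders are made up for a consumer-drone-plus-facial-recognition product; the structure is the point, not the particular fields.

```python
from itertools import product

# Hypothetical stakeholders for a consumer drone + facial recognition product.
stakeholders = ["drone operators", "people recorded in public", "neighbors and bystanders"]

# One empty cell per (stakeholder, category) pair, reusing the EthicalRiskCategory
# enum from the earlier sketch, so no combination gets quietly skipped.
risk_register = {
    (who, category): {"risks": [], "contributing_features": [], "mitigations": []}
    for who, category in product(stakeholders, EthicalRiskCategory)
}

# The team then fills in cells as risks are identified, e.g.:
cell = risk_register[("people recorded in public", EthicalRiskCategory.AUTONOMY)]
cell["risks"].append("identified and tracked in public without consent")
cell["contributing_features"].append("always-on face matching")
cell["mitigations"].append("blur or discard faces that are not on an opt-in list")
```

However you record it, the discipline is the same: every stakeholder gets checked against every category before the team moves on to mitigations.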

Assessing a product — be it at the conceptual phase or testing stage — for its potential negative ethical impacts is difficult. But with a little systematicity and a bunch of practice, it gets easier and, dare I say, enjoyable.

Reid Blackman, Ph.D.
Philosophy professor turned (business+tech) ethics consultant