#ethicalCS: Thinking about Bias in Computing classes

Saber Khan
Mar 23, 2018

This is part of a series of documents meant to support a discussion and investigation of ethics and morals in relation to the impact of computer science on the world at large. You can find the rest of the series here:

  - Introduction to #ethicalCS
  - Abstraction, Algorithm, Programming
  - Data, Networks
  - Equity and Access
  - Design, UX/UI
  - Teaching, Learning Goals (coming)
  - Algorithmic Discrimination (coming)

Introduction: In this document we will engage with the ethics of bias in computer science education. Here is a helpful way to think about bias in computing:

…three categories of bias in computer systems have been developed: preexisting, technical, and emergent. Preexisting bias has its roots in social institutions, practices, and attitudes. Technical bias arises from technical constraints or considerations. Emergent bias arises in a context of use. Although others have pointed to bias in particular computer systems and have noted the general problem, we know of no comparable work that examines this phenomenon comprehensively and which offers a framework for understanding and remedying it. We conclude by suggesting that freedom from bias should be counted among the select set of criteria — including reliability, accuracy, and efficiency — according to which the quality of systems in use in society should be judged.

(Bias in Computer Systems, Friedman & Nissenbaum, 1996).

This document is generated from the #ethicalCS Twitter chat. You can find highlights from the chat on bias, featuring Camille Eddy, Eric Meyer, Sara Wachter-Boettcher, and others, here.

Questions:

  1. How is bias in real life related to bias in computing? How does bias interact with technology to impact people?
  2. How should we help students understand how bias functions in computing and the impact it has on marginalized people?
  3. How can we recognize and mitigate the harm that bias generates in data-driven technologies?

Ideas:

  1. Data and data collection are never neutral and can easily be manipulated to confirm a bias.
  2. While algorithms and their biases affect our lives, it can be hard to figure out how these hidden systems make decisions.
  3. An agreed-upon, documented definition of fairness, along with a deep understanding of bias, can help us build better algorithms (see the sketch after this list).
  4. Bias in computing is a function of bias in society, but it is amplified by algorithms that are scalable and portable.
  5. The technology industry has under-invested in understanding and mitigating bias.
  6. Since the algorithms of technology companies are shielded from scrutiny by intellectual property law, it can be hard for the public to see how bias functions.
  7. Developing standards, codes of conduct, and shared practices can help practitioners mitigate bias.
  8. Personal bias can develop through ignorance, so we should seek out challenging narratives outside our own experience.
  9. Data-driven technologies such as artificial intelligence and predictive policing are especially susceptible to bias, and they are also especially likely to be inscrutable.
  10. As more industries and sectors automate, the effects of machine bias will spread to many areas of life.
  11. Good models have feedback loops that challenge algorithms and seek to debias data.
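
To make idea 3 concrete for a classroom, here is a minimal Python sketch of one possible documented fairness definition, demographic parity: the rate of positive decisions should be similar across groups. The function names, the toy loan decisions, and the group labels are invented purely for illustration and are not from the chat.

```python
# A minimal sketch of one documented definition of fairness: demographic parity,
# i.e. the share of positive decisions should be similar across groups.
# All data below is hypothetical and exists only to illustrate the idea.

def positive_rate(decisions, groups, group):
    """Share of positive (1) decisions received by members of `group`."""
    in_group = [d for d, g in zip(decisions, groups) if g == group]
    return sum(in_group) / len(in_group)

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates between any two groups."""
    rates = {g: positive_rate(decisions, groups, g) for g in sorted(set(groups))}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan decisions (1 = approved) for two groups, A and B.
decisions = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(decisions, groups)
print(rates)  # {'A': 0.8, 'B': 0.2}: group A is approved four times as often
print(gap)    # 0.6: a large gap that a documented fairness standard would flag
```

A check like this is only a starting point; as idea 11 suggests, the measurement would need to sit inside a feedback loop that regularly challenges the algorithm and the data it learns from.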
