This is part of a series of documents meant to support a discussion and investigation of ethics and morals in relation to the impact of computer science on the world at large. You can find the rest of the series here:
Introduction: In this document we will engage with the ethics of bias in Computer Science education. A helpful way to think about bias in computing comes from Batya Friedman and Helen Nissenbaum's paper "Bias in Computer Systems":
…three categories of bias in computer systems have been developed: preexisting, technical, and emergent. Preexisting bias has its roots in social institutions, practices, and attitudes. Technical bias arises from technical constraints or considerations. Emergent bias arises in a context of use. Although others have pointed to bias in particular computer systems and have noted the general problem, we know of no comparable work that examines this phenomenon comprehensively and which offers a framework for understanding and remedying it. We conclude by suggesting that freedom from bias should be counted among the select set of criteria — including reliability, accuracy, and efficiency — according to which the quality of systems in use in society should be judged.
Discussion Questions:
- How is bias in real life related to bias in computing? How does bias interact with technology to impact people?
- How should we help students understand how bias functions in computing and the impact it has on marginalized people?
- How should we see and mitigate the harm that bias generates in data-driven technologies?
Key Ideas:
- Data and data collection are never neutral and can easily be manipulated to confirm a bias.
- While algorithms and bias affect our lives, it can be hard to figure out how these hidden systems make decisions.
- An agreed-upon, documented definition of fairness, along with a deep understanding of bias, can help us build better algorithms.
- Bias in computing is a function of bias in society, but amplified by algorithms that are scalable and portable.
- The technology industry has under-invested in understanding and mitigating bias.
- Because the algorithms of technology companies are shielded from scrutiny by intellectual property protections, it can be hard for the public to see how bias functions.
- Developing standards, codes, and shared practices can help practitioners adopt best practices for mitigating bias.
- Personal bias can develop through ignorance, so we should seek out challenging narratives outside our own experience.
- Data-driven technologies such as artificial intelligence and predictive policing are especially susceptible to bias, and also to being inscrutable.
- As more industries and sectors automate, the effects of machine bias will spread to many areas of life.
- Good models have feedback loops that challenge algorithms and seek to debias data.
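The idea above — that a documented definition of fairness makes bias auditable — can be made concrete with a small sketch. One common documented definition, demographic parity, compares the rate at which a model produces positive outcomes for each group. The group labels and predictions below are hypothetical audit data, not from any real system:

```python
from collections import defaultdict

def selection_rates(groups, predictions):
    """Compute the positive-outcome (selection) rate for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, pred in zip(groups, predictions):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(groups, predictions):
    """Largest difference in selection rates between any two groups.

    A gap near 0 means the model selects all groups at similar rates;
    a large gap is one documented, auditable signal of bias.
    """
    rates = selection_rates(groups, predictions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: which applicants a model approved (1) or denied (0).
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
preds  = [1,   1,   1,   0,   1,   0,   0,   0]
print(demographic_parity_gap(groups, preds))  # 0.75 - 0.25 = 0.5
```

A check like this is only one possible definition of fairness — others (such as equalized error rates) can conflict with it — which is why the key ideas above stress agreeing on and documenting the definition before building.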
Resources:
- Design for Real Life (book) by Eric Meyer & Sara Wachter-Boettcher
- Technically Wrong (book) by Sara Wachter-Boettcher
- How algorithms are pushing the tech giants into the danger zone (Guardian UK) by Sara Wachter-Boettcher
- The Tech Industry Is Clueless About People. Let’s Debug It (Mother Jones) by Kanyakrit Vongkiatkajorn
- Facebook treats its ethical failures like software bugs, and that’s why they keep happening (Quartz) by Sara Wachter-Boettcher
- Change Catalyst: Powering Inclusive Innovation
- Stock Photos of People of Color (POCIT) by Camille Eddy
- Recognizing Cultural Bias in AI (POCIT) by Camille Eddy
- Parable of the Polygons by Vi Hart and Nicky Case
- Biased Algorithms Are Everywhere and No One Seems to Care (MIT Technology Review) by Will Knight
- Weapons of Math Destruction (book) by Cathy O’Neil
- Algorithms of Oppression (book) by Safiya Noble
- From Tech Blogger to Fog Creek CEO by Code Newbie