What do we teach when we teach tech & AI ethics?

Casey Fiesler
Jan 17, 2020 · 6 min read
Image: justice & code (source: iStockPhoto, licensed)

A little over two years ago, I crowdsourced a collection of syllabi for tech ethics courses. Today, there are over 250 courses listed, representing a variety of universities and disciplines. Over time I’ve heard from a lot of people about how useful they’ve found it to poke around, see what other people are teaching, and get ideas. With this in mind, my students and I decided to conduct some research to uncover patterns. Ethics in the context of technology (particularly emerging areas like data science and AI) is such an important topic, and curriculum coverage is increasing at universities in computer science and beyond. So our question was: what are we teaching when we teach ethics — both generally, and specifically for AI? This post covers two forthcoming papers that answer this question — one for SIGCSE (in Portland in March) and one for AIES (in New York in February).

First, we used the crowdsourced syllabi collection to analyze topics (as represented by schedules or reading lists) and course objectives for primarily general, standalone tech ethics courses. We qualitatively analyzed a total of 115 syllabi that were publicly available. (As you may know, I care a lot about the ethics of research using public data; there are details in the paper about the steps we took to construct the dataset, including contacting instructors who had added their syllabi to the spreadsheet.) The outcome of this analysis is an accounting of what self-described “tech ethics” courses include for both content and learning outcomes, along with which departments these classes are taught in and by whom. The tables below show frequencies for content (e.g., law & policy, privacy & surveillance, AI & algorithms) and learning outcomes (e.g., critique, spot issues, make arguments).

Side-by-side tables that show (1) list of ethics topics with frequencies; and (2) list of course objectives with frequencies
Tables with data about topic and outcome frequencies, from Fiesler et al. SIGCSE 2020

There is a lot more detail and analysis in the paper (“What Do We Teach When We Teach Tech Ethics? A Syllabi Analysis” by myself, my PhD student advisee Natalie Garrett, and University of Maryland PhD student (and former CU student) Nathan Beard), but here are some general takeaways: (1) There is a ton of variability across courses, which suggests that there is a lot that instructors could learn from each other; (2) The popularity of certain topics suggests what some “hot” or important topics in tech ethics are right now; (3) The goals of these courses are more about teaching conceptual skills than specific knowledge (e.g., less “recite the categorical imperative” or “tell me all the parts of GDPR” and more “can you critique technology, spot ethical issues in the real world, and make a sound argument”); and (4) There are many topics taught in these standalone classes that could and should also be integrated into technical classes. We use this analysis to continue to argue for ethics integration across the computing curriculum.

Because our dataset for this study was so broad, our analysis of individual topics was necessarily shallow; additionally, we were only able to examine standalone ethics classes, without insights into what ethics topics might actually be covered in technical classes. In considering these gaps, my PhD advisee Natalie Garrett was interested in delving deeper into a specific area that is particularly important for ethics education — artificial intelligence. She led additional analysis that resulted in a second paper (“More Than ‘If Time Allows’: The Role of Ethics in AI Education” by Natalie, Nathan, and myself) that covers both standalone AI ethics classes and technical AI classes.

For this study, we needed information from technical classes as well. In 2018, I led an analysis of publicly available syllabi and course descriptions from 186 machine learning and AI courses at 20 U.S. universities; this analysis was part of a paper published in Transactions on Computing Education last year (“Integrating Ethics Within Machine Learning Courses” from Jeffrey Saltz, Robert Heckman, and Neil Dewar at Syracuse University; Michael Skirpan at CMU (formerly a CU student); myself, Nathan, and Tom Yeh at CU; and Micha Gorelick at Probable Models). The analysis was a simple binary coding: are any ethics-related topics (with a very generous definition of “ethics-related”) noted as part of this class? As shown in the table below, the appearance of ethics in technical AI/ML courses is quite rare (about 12%).
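The binary coding step described above boils down to flagging each course and computing a rate. Here is a minimal sketch of that computation; the course names and flags are hypothetical stand-ins, not data from the actual study:

```python
# Hypothetical sketch of a binary coding analysis: each syllabus is coded
# True/False for whether any ethics-related topic appears, then we report
# the fraction of courses with any ethics content.
courses = [
    {"course": "Intro to Machine Learning", "mentions_ethics": False},
    {"course": "Artificial Intelligence", "mentions_ethics": True},
    {"course": "Deep Learning", "mentions_ethics": False},
    {"course": "Data Mining", "mentions_ethics": False},
]

def ethics_rate(coded_courses):
    """Fraction of courses whose syllabus notes any ethics-related topic."""
    flagged = sum(1 for c in coded_courses if c["mentions_ethics"])
    return flagged / len(coded_courses)

# With the toy data above this prints "25%"; the study found about 12%.
print(f"{ethics_rate(courses):.0%} of courses mention ethics")
```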

Table with data about frequencies of AI/ML courses with ethics-related content, from Saltz et al. 2019

We combined subsets of the data from both of these previous studies to create a new dataset that included: (1) standalone AI ethics courses (31 from 22 universities); and (2) technical AI classes that include ethics (20 from 12 universities). We conducted an analysis similar to the one in the SIGCSE paper, creating categories and descriptions of the topics taught in both. We additionally analyzed (when available) the readings covered in these classes, with special attention to news articles, which point to the current events being used to illustrate concepts. For example, the most frequent topic in AI ethics classes is bias, and common news items included the COMPAS recidivism algorithm, the use of facial analysis to predict sexuality, and the case of Google Photos mislabeling an African American woman with “gorilla.”

For the technical AI/ML classes (which should be applauded for including ethics content at all, since this was rare!), the most common topics were bias, fairness, and privacy — often (but not always) treated primarily as technical constructs. In this great paper from Andrew Selbst et al., they note that researchers and practitioners in the field of ML fairness tend to abstract away the social context in which these systems are deployed, focusing instead on the model, the inputs, and the outputs. However, understanding social context is critical, particularly given the contexts in which AI is often deployed; the justice system, the military, and healthcare all appeared frequently in the standalone AI ethics courses we analyzed. Additionally, we made an observation about timing: with a couple of exceptions, ethics topics were covered at the very end of technical classes (including one where “ethics” as a topic was listed only as “if time allows”). This mirrors the common practice of placing standalone ethics classes at the end of a CS degree. However, we argue that ethics should be part of the curriculum early and often — which is why Natalie and I are also doing some work now as part of the Responsible Computer Science Challenge to integrate ethics into intro programming classes at CU.

Natalie will be presenting this work at AIES next month, and then we will both be at SIGCSE in March. We really hope that this work can serve as a call to action that encourages and assists instructors who are interested in including ethics as part of a class, as well as computing programs that aim to increase the reach of ethics across a curriculum. After all, the students that we teach to code today will be the ones working at all levels in the tech companies that might — or hopefully might not — be involved in the ethics scandals of tomorrow.


Fiesler, Casey, Natalie Garrett, and Nathan Beard. “What Do We Teach When We Teach Tech Ethics? A Syllabi Analysis.” In Proceedings of the ACM Symposium on Computer Science Education (SIGCSE). 2020.

Garrett, Natalie, Nathan Beard, and Casey Fiesler. “More Than ‘If Time Allows’: The Role of Ethics in AI Education.” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES). 2020.

Saltz, Jeffrey, Michael Skirpan, Casey Fiesler, Micha Gorelick, Tom Yeh, Robert Heckman, Neil Dewar, and Nathan Beard. “Integrating Ethics Within Machine Learning Courses.” ACM Transactions on Computing Education (TOCE) 19, no. 4 (2019): 1–26.


research and musings from the Information Science department at CU Boulder


The Information Science department at CU Boulder is powered by interdisciplinary research and teaching that allows us not only to imagine what today’s technology makes possible, but to invent what society will do with technology next.


