Why Don’t AI Coders Study AI Ethics?

Nick Doiron
Published in The Startup
5 min read · Sep 26, 2020

When AI systems are launched and when they break, especially when they fail in loud and embarrassing ways, experts in AI ethics appear in the press. Computer science professors share their updated curricula and favorite books critical of relying on algorithms. We hear about the societal effects of AI, brought about by the willful ignorance of ‘techies’ or ‘tech bros’. So I started asking: what keeps AI coders so distant from the ethics field?

‘State of the Art’ at all costs

There’s a Hacker News comment that I’ve kept bookmarked since January, which I consider the peak of AI pushback:

I am worried about the recent trend of “ethical AI”, “interpretable models” etc. IMO it attracts people that can’t come with SOTA [State of the Art] advances in real problems and its their “easier, vague target” to hit and finish their PhDs while getting published in top journals.
Those same people will likely at some point call for a strict regulation of AI using their underwhelming models to keep their advantage,
faking results of their interpretable models, then acting as arbiters and judges of the work of others, preventing future advancements of the field.
https://news.ycombinator.com/item?id=21959105

Let’s not stray too far into the Ethicists as True Villains theory. What I want to unpack is the thinking that the ethics field doesn’t follow the rules and currency of the AI field. Commercial AI projects carry so much hype that researchers’ conversations revolve around metrics. If someone promotes a new approach but can’t point to a metric proving a ‘State of the Art’ achievement, their results are not valuable to the author of this comment. The ethicist is cast in the familiar role of someone who is trading on hype, or who is not technical enough.

This attitude is already harmful from a pure research perspective, because papers which show negative results, survey papers, and new metrics/benchmarks should also be published.

The comment is also wrong because just as AI/ML developed quantitative measurements for intelligence and recognition tasks, competitive metrics have been developed in the explainability and fairness fields.
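
As one concrete example, here is a minimal sketch (in plain NumPy) of the demographic parity difference, one widely used fairness metric; the predictions and group labels below are invented purely for illustration:

```
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between two groups.

    y_pred: array of 0/1 model predictions
    group:  array of 0/1 group membership (e.g. a protected attribute)
    A value near 0 means both groups receive positive predictions at
    similar rates; larger values flag disparate treatment to investigate.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Invented predictions from a hypothetical loan-approval model
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.5
```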

The comment ignores the field of machine learning privacy / federated learning / differential privacy, which sacrifices some accuracy to keep private training data from leaking out of a system. These protections could be implemented voluntarily, to build trust, but they are sometimes legally mandated, as with private medical data. It isn’t so novel, then, to consider legal arguments influencing how we develop fair or transparent ML.
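
As a rough sketch of that accuracy-for-privacy trade, here is the textbook Laplace mechanism applied to a simple counting query; the lab values and epsilon are placeholders, not drawn from any real deployment:

```
import numpy as np

def private_count(values, threshold, epsilon=1.0):
    """Count how many values exceed a threshold, with Laplace noise added.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so noise drawn from Laplace(1/epsilon)
    gives epsilon-differential privacy. A smaller epsilon means more
    privacy and a less accurate answer.
    """
    true_count = sum(v > threshold for v in values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Placeholder data: how many patients have a lab value over 140?
lab_values = [120, 135, 150, 160, 142, 138]
print(private_count(lab_values, threshold=140, epsilon=0.5))
```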

Career Trajectories and Achievement

I recently asked a student if his computer science courses now include ethics, and he explained that there’s one required course on ‘social issues’ in computer science. He noted that the professor came from a humanities field, and he heard that the professor didn’t know enough about computing to make the course especially interesting and relevant.

Before taking this at face value, consider technical gatekeeping, and the university’s limited resources for a new ethics/STS requirement. Consider that specializing in AI ethics may be a more lucrative decision for a talented, empathetic humanities professor than it would be for a talented, empathetic CS professional. Careerism in academia is complex, but we should acknowledge that it influences who is teaching and defining the movement for CS students.

Outside of academia, what is the career trajectory of an AI Ethics professional? We see freelance journalists, professors on book tours, and a select few ethicists at big tech companies. Is Google always interviewing ethicists the way it is always interviewing coders? Is a leader in the field someone giving a conference keynote, or someone who has experienced AI discrimination? These uncertain norms make many ‘techies’, whether in their academic or early professional careers, uncomfortable with bringing ethics closer to the core of their career and message.

Rights vs. the political correctness narrative

If you are very online, then you have seen the Joe Rogans, engineering vloggers (such as former manager Patrick Shyu), and Twitter-ers of the world, diverting rights issues into a conversation about political correctness. Like the rest of us, many experts in machine learning are hearing this in their earbuds while working. They see diversity and inclusion being mentioned in meetings and press releases, without real follow-through in their office.
These narratives lead even politically left ‘techies’ into adopting conservative views. And if you think political correctness is out of control and #ShutDownSTEM is a Maoist mantra, you are going to have a knee-jerk reaction to AI ethics. Case in point, this tweet by Ryan Saavedra from The Daily Wire:

Not only can algorithmic bias have racist outcomes, it is a predictable consequence of any algorithm focused on the majority of its training samples.
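
To make that concrete, here is a small sketch with synthetic data and scikit-learn, where one group makes up 90% of the training rows; the numbers are invented, but they show how strong overall accuracy can hide a badly served minority group:

```
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic data: group A is 90% of the rows, group B is 10%,
# and the feature-label relationship is reversed between them.
n_a, n_b = 900, 100
x_a = rng.normal(0, 1, n_a)
y_a = (x_a > 0).astype(int)   # group A: positive label when x > 0
x_b = rng.normal(0, 1, n_b)
y_b = (x_b < 0).astype(int)   # group B: the opposite pattern

X = np.concatenate([x_a, x_b]).reshape(-1, 1)
y = np.concatenate([y_a, y_b])
group = np.array([0] * n_a + [1] * n_b)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

print("overall accuracy:", accuracy_score(y, pred))                 # high
print("group A accuracy:", accuracy_score(y_a, pred[group == 0]))   # high
print("group B accuracy:", accuracy_score(y_b, pred[group == 1]))   # low
```

Nothing exotic is happening here: the model simply learns the majority group’s pattern, and the minority group absorbs the error.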

Explainability is also more than appearances. One of the cornerstones of democracy and justice is that the system does not act on a whim; you cannot be arrested without charges. When I sat with lawyers in an AI and International Law mini-course in The Hague, I saw how this works as a legal crowbar to pry open any system. Client turned away by immigration? Let’s look at the code. Government chose one contractor’s bid over another? Subpoena. If your AI system can only say ‘because I said so’, or, worse, can be shown to change its output when one word or name or ZIP code changes, then it cannot withstand the consequences of making big decisions.
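
That last test is something anyone with access to the model can run: hold every field constant, swap a single attribute, and check whether the decision flips. A minimal sketch, assuming a hypothetical model object with a predict method that accepts a dict of applicant fields:

```
def counterfactual_check(model, applicant, field, alternatives):
    """Flag a model whose decision flips when a single field changes.

    model:        anything with a predict(dict) -> label method (hypothetical)
    applicant:    dict of input fields for one real case
    field:        the attribute to perturb, e.g. "zip_code" or "name"
    alternatives: other values to substitute for that one attribute
    """
    baseline = model.predict(applicant)
    flips = []
    for value in alternatives:
        variant = {**applicant, field: value}
        if model.predict(variant) != baseline:
            flips.append(value)
    return flips  # a non-empty list means the decision hinges on this field

# Hypothetical usage, e.g. for a visa-screening model:
# counterfactual_check(visa_model, applicant, field="zip_code",
#                      alternatives=["10027", "10469", "11368"])
```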

I do have some questions about explainability. For example, if an interpretable model is only understood by mathematicians and lawyers, how transparent is it really, and should it replace a black-box model known to have higher overall accuracy? When AI systems can be frozen and tested deterministically, are they being held to a higher standard than a human? But that’s for another article.

Some resources

Recent Articles on AI ethics

(the first title is misleading; I think these successive waves of thinking on ethical AI are all still active and contributing)

Conferences

The leading conference might be the ACM Conference on Fairness, Accountability, and Transparency (FAccT). (talks on YouTube)

NeurIPS has taken steps that other conferences might copy, such as including diversity groups and requiring a societal impact section on every paper:

On GitHub

LinkedIn has a new library to measure fairness, which does an especially good job of explaining fairness both in datasets and in ML models:

Google and Microsoft projects on explainability:
