Twist of FAT(E)*

Conversations around ethics and digital technologies are gaining steam — but can FATE avoid being overtaken by the flood of events?

Luke Stark
Berkman Klein Center Collection
Apr 7, 2018 · 5 min read


Listen to Dylan (credit: bmethe)

“We can train AI to identify good and evil, and then use it to teach us morality” was the title of the opinion piece in Quartz that brought me up short during this morning’s usual Twitter perusal. The text wasn’t much more encouraging: the author, a tech entrepreneur, advocates for using machine learning to derive moral principles from human data, and then using those principles to judge and resolve human ethical dilemmas. “Let us assume that because morality is a derivation of humanity, a perfect moral system exists somewhere in our consciousness,” the author writes. “Deriving that perfect moral system should simply therefore be a matter of collecting and analyzing massive amounts of data on human opinions and conditions and producing the correct result.” Morality, in this formulation, is just another technical problem to be solved.

A couple of months ago, a more nuanced and thoughtful set of conversations around ethics and digital systems took place at the first full FAT* Conference, held at the NYU School of Law. FAT* stands for Fairness, Accountability, and Transparency — sometimes an E for Ethics gets thrown into the acronym too. The * signals the wide range of digital systems FATE work touches — not just machine learning, but database structures, human-computer interfaces, and the broader sociotechnical contexts in which we use these machines.

Work on FATE in digital systems has blossomed over the past few years, but as a subfield of law, science and technology studies, and media studies, the area has a longer history — computer scientist Batya Friedman and philosopher Helen Nissenbaum (the latter, for full disclosure, was my dissertation advisor) co-authored an article titled “Bias in Computer Systems” way back in 1996. I’m honored to count myself among a growing group of scholars and activists doing work on FATE in sociotechnical systems, who want to highlight the lived asymmetries of power, privilege, and discrimination that digital systems structure, reinforce, and create. Brilliant work presented at FAT* by, among others, Chelsea Barabas, Joy Buolamwini, Elizabeth Bender, Kristian Lum, Terrence Wilkerson, and the great Alondra Nelson exemplified the richness and urgency of FATE as a field, and its centrality to our broader politics.

While I agree with my BKC colleague Ben Green in cautioning new FAT(E) enthusiasts to be aware that “mathematical specifications of fairness do not guarantee socially just outcomes,” I’m happy that at least a few computer scientists are engaged in such relatively subtle conversations. This morning’s Quartz opinion piece is a reminder that many in the tech sector still have so vague an understanding of the relationships between empirical data, social processes, and values like justice, fairness, and accountability that using machine learning to “derive the perfect moral system” doesn’t appear to be what it is: ridiculous on its face.

And as wrong-headed as the Quartz piece might be (and, as such, an easy target for critique), it points to a real problem: what we in the BKC Ethical Tech Working Group describe as “ethics-washing.”

Don’t do it (credit: Pavel Constantin)

Greenwashing, or marketing strategies aimed at convincing consumers that a company’s product or service is more environmentally friendly or sustainable than it really is, was apocryphally coined as a term in 1986 by New York environmentalist Jay Westerveld. Ethics-washing (ethoswashing?) doesn’t trip as lightly off the tongue — but like a greenwash, an ethics-wash (ethoswash?) is simply a particular specimen of a whitewash (“deliberately concealing unpleasant facts about a person or organization”). The latter two terms seem almost entirely synonymous: platforms like Facebook seek to propitiate angry users and regulators with carefully worded promises while doing all they can to maintain existing business models. Isn’t that something of a whitewash?

In the early 1980s, as I’ve described recently in Slate, computer scientists interested in how humans engaged with the new innovation of the personal computer proposed applying cognitive psychology to modeling the optimum parameters for those interactions. One point I touched on there only briefly — but which I want to amplify strongly here — was the explicit rejection by those early HCI practitioners of potential interdisciplinary collaborations between psychologists and systems engineers. In effect, HCI sought to make its disciplinary frame of reference a computational one, not a psychological one — Stuart Card, one of the authors of the seminal The Psychology of Human-Computer Interaction, later told his colleague Jonathan Grudin that he had “personally changed the (CHI conference) call in 1986, so as to emphasize computer science and reduce the emphasis on cognitive science.”

I don’t think this history suggests a “whitewash” (or a “brain-wash”?) per se, but it’s exemplary of computer science seeking to define terms and concepts new to the field at the time — like “human” and “interaction” — in ways ensuring that the technical expertise and assumptions of computing as a technical discipline — a “science” — remained both paramount and sufficient in and of themselves.

It’s my sense that something similar is happening to “ethics” now as happened to “interaction” in the 1980s. Computer scientists and Silicon Valley entrepreneurs (consciously or otherwise) are working to formalize the definition of “ethics” to conform to the discipline’s key tenets, including the treatment of technical specifications as the chief measure for evaluating social goods.

In the BKC Ethical Tech Working Group, we’ve had many conversations about the pros and cons of using the term “ethical” to describe our focus — both understanding that the word’s current vogue gives us an opening as critical scholars to engage in broader conversations across the tech sector, and recognizing the contradictions and pitfalls it brings, not least the term’s propensity for cooptation and use in “ethics-washing.” Throwing terms like “ethics” around loosely is worse than useless if “ethical tech” becomes not a broad focus on FATE in digital media’s many sociotechnical contexts, but a narrow set of minor technical tweaks in some systems coupled with a renewed belief in the objectivity of “ethical” algorithms.

We need much more FATE in our lives, work, and public discourse — and we need to push for true multidisciplinary collaboration as one of FATE scholarship’s core tenets. Otherwise we run the risk of FATE, as the BBC often observes, being “overtaken by events.”
