Thoughts on the FDG ’17 Keynote Controversy

Last month, I attended the Foundations of Digital Games (FDG) conference for the first time. It’s a conference that I’ve wanted to attend for some time, since many scholars that I deeply respect have been involved with it over the years, but this was the first year that the timing, location, and nature of the work I’m currently doing aligned to make it work.

FDG started in 2006 as a small event called “Microsoft Academic Days on Game Development in Computer Science Education.” In 2008 Microsoft transferred responsibility for the conference to a new non-profit that it helped establish, the Society for the Advancement of the Study of Digital Games, and beginning in 2009, the conference was renamed “Foundations of Digital Games.” Over the past eight years, FDG has grown beyond its original focus on games education, and has become one of the few academic conferences that welcomes contributions from a wide range of games-related scholarship — from deeply technical work on AI and graphics through social science and humanities-focused analysis of the medium.

Overall, I was delighted (but not surprised) by the quality of the content at this year’s conference. From the excellent keynotes by Raph Koster and Constance Steinkuehler to the high-quality papers and thought-provoking panels, I learned a great deal, and was able to talk with a wide range of both established and emerging scholars.

I left the conference early on Thursday morning, and as a result I missed the closing keynote by Adrian Cheok (“Love, Sex, and Robots”). It didn’t take long, however, before I began to see very troubling commentary about the talk from people I respect:

T. L. Taylor, who was at the talk, has written up some of her concerns in a Facebook post.

Taylor focused primarily on the fact that Cheok first appeared to ignore, and then explicitly dismissed, the complex ethical issue of consent — not just in the context of human-robot relations, but specifically in the context of working with students. Those are very troubling concerns.

As I’ve spoken to colleagues who were at the talk, however, I’ve discovered that other aspects of it were equally troubling. Cheok’s argument in favor of laws allowing human-robot marriage appeared to be based on his belief that there is no meaningful difference between interracial or same-sex marriage and marriage between a person and a robot. This equating of marginalized populations with non-humans is not simply controversial—it is deeply offensive. There might have been a path for that argument if the robots in question were sentient and independent, but Cheok’s slides made it clear that the type of robot spouse he imagined was one that was both compliant and obedient.

Slide from Adrian Cheok’s keynote presentation at FDG ‘17

The implicit sexism in Cheok’s characterization of the robot spouse was also an issue. Critiques of the sexist assumptions underlying the use of female voices for compliant digital assistants have been around for quite some time, and those criticisms have ramped up in the age of Siri, Alexa, and Cortana. This isn’t limited to obscure gender studies articles, either — even a cursory online search yields a significant number of recent articles on the topic from mainstream sources, including The Atlantic, Wired, and even the Financial Times.

Cheok’s topic of “love and sex with robots” was hardly a new one, and his talk did not appear to introduce any particularly challenging new ideas or concepts — instead, as Taylor’s critique pointed out, it was essentially a rehashing of ideas that have been around for over a decade. David Levy’s 2008 book Love and Sex with Robots, for instance, made arguments quite similar to Cheok’s, and received a significant amount of attention in the popular press. Even Cheok’s claim that we’ll be marrying robots by 2050 is essentially an echo of Levy’s assertions.

The issues surrounding personhood, sentience, and consent in the context of robots and AIs are certainly relevant to the work of FDG attendees. And there is absolutely a case to be made for challenging conference attendees with provocative viewpoints. At an academic conference, however, for a talk to be usefully provocative, it needs to offer new ideas or arguments, not uncritically recycle old ones. Restating insulting arguments that diminish the personhood of marginalized populations, while also ignoring or dismissing the most challenging ethical issues facing researchers in your field, is not intellectually provocative. Attendees were upset and angry not because their ideas about the field were being challenged, but because the arguments were both weak and offensive.

In the weeks since the conference, Cheok’s outrageously unprofessional and inappropriate responses on Twitter to legitimate criticisms of his talk have diverted attention away from the problematic nature of the talk itself. In particular, his attacks on Gillian Smith, an assistant professor and respected scholar in games AI, rose to a level of bullying and incivility that I have never before witnessed from a senior academic.

I was happy to see the FDG community, and scholars more widely, come to Smith’s defense. And I was also pleased that SASDG posted a public response on its website. But while the SASDG response flatly — and appropriately — condemns Cheok’s post-conference behavior, it also dismisses any criticism of the talk’s content as simply “conflicting interpretations.”

“In the days since the conference we have received further information and feedback from FDG attendees about Prof. Cheok’s keynote and its contents. Attendees have interpreted his research and its methods in many different ways, often at odds with one another.”

This response fails to acknowledge that a significant number of conference attendees found the talk to be both intellectually weak and deeply offensive. In fact, I’ve seen no public defenses of the content of Cheok’s talk from any conference attendees. The criticisms posted online were not met with intellectual debate from attendees who had different interpretations — only with personal attacks from Cheok and one of his students.

As someone who has organized more than her share of conferences, I know that the process of finding keynote speakers and ensuring that their talks are appropriate to the audience is not an easy one. This is one reason why some have argued that keynotes at academic conferences should be abolished. If organizers choose to invite a keynote speaker, however, I believe that they have an obligation to try to ensure that the talk is one that will be of value to the attendees. That means doing due diligence not just on the speaker’s past presentations, but also on their current research. (In fact, that’s why I was surprised to see that FDG had scheduled four keynotes over four days, representing a non-trivial expense for a small conference with limited funding, as well as an enormous amount of work on the part of the organizers, and a greatly increased chance of a disappointing talk.)

It’s not clear to me whether the conference organizers performed that due diligence when inviting Cheok. I’ve found little evidence in his recent body of work that he is adding anything of value to the many important (and yes, often provocative) questions surrounding robots, sentience, emotions, and intimacy; at the same time, I’ve seen much in his behavior on social media, even before his post-conference tantrum, that gives me pause. However, it’s absolutely the case that his early work on pervasive computing was well-received and influential, and that his current projects have the potential to spur interesting conversations about technology and intimacy. And realistically, poor-quality or poorly targeted talks can happen even when organizers do everything right. Had it simply been that Cheok’s talk was intellectually shallow, or that he had been personally offensive in his post-conference communications, I probably would not see that as a failure of the organizers.

However, in addition to being weak in its intellectual content, Cheok’s talk offended and alienated a significant number of FDG conference attendees. In the aftermath of the conference, I’ve heard from a number of junior scholars — particularly women and LGBTQ scholars — that they are reconsidering their participation in FDG as a result of that talk. They feel that a conference willing to offer an uncritical platform for this type of poorly-researched, ethically problematic, and socially offensive content is not one where they are welcome. And while many of these junior scholars are not comfortable speaking up publicly (even more so as a result of Cheok’s online behavior), I know for a fact that these concerns were communicated privately to the SASDG board. The failure of the SASDG statement to acknowledge this as anything more than an intellectual disagreement over content is likely to further alienate those scholars, and to undermine the community that FDG has built up over the years. There’s no indication that the FDG organizers or the SASDG board recognize that real harm was done here, or that any work needs to be done to repair the resulting damage to the community.

Our field struggles mightily to recruit and retain scholars from marginalized and underrepresented populations. It’s deeply disappointing to me that FDG and SASDG are not willing to acknowledge and take a share of responsibility for the way the closing keynote hurt and alienated many of those scholars.