Humans and AI: Everything in Moderation

Mitu Khandaker
Spirit AI
Jan 23, 2018 · 3 min read

Mashable posted an opinion piece which summarises some of the recent moves by large tech companies and online platforms to do something about the problem of harassment and toxic content; in short, at long last, we are acknowledging that there is a problem. And, crucially, that something needs to be done about it.

The article's central premise is to problematise the current trend in approaches: staffing up with ever-larger teams of human moderators. While the author rightly touches upon the very real cost in emotional labour that this approach entails, he goes on to say:

… if it were as simple as enforcing YouTube’s community guidelines, a bot could do it, and we already know that doesn’t work. With humans involved, it raises a different set of questions: Who are these humans? What qualifications or biases do they have? And what exactly raises a red flag in their minds?

While this raises worthy questions about transparency and openness, we also know, of course, that technology is not neutral either: it doesn't give us unbiased results.

There's a certain negative framing of this: bias that goes unacknowledged and unchecked is what is problematic. However, if we work towards a wider understanding that all design is intentional, and that a computational system behaves the way it does because a human designed it that way, then that awareness is the goal.

After all, we know that technology is not neutral because at Spirit, we design and build these systems: our tools are a product of our thinking, our experiences, and, crucially, our subject matter expertise. That is a good thing.

Community moderators are subject matter experts too, and this is why, as the article suggests, traditional "bots" do not work well. They don't bring the nuance, the deep understanding of language, or the sense of how communities work that human moderators do. It takes that kind of understanding and expertise, which might be called 'bias', built into the design of a product to capture those nuances. (Full disclosure: that is how we have approached building Ally.)

In any case, it is the collaboration between a human moderator and technology that yields the best results here; tools to which we can offload a part of our emotional labour provide the necessary support for humans to continue to do their jobs well. Machines can collate relevant information quickly and effectively for a human to review, saving them time and trouble and allowing them to respond quickly to even more situations.
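To make that division of labour concrete, here is a minimal, hypothetical sketch of a human-in-the-loop triage queue in Python. It is not how Ally actually works; the `score_toxicity` callable, the `threshold`, and the `TriageQueue` structure are all stand-ins invented for illustration. The point is simply that the machine gathers, scores, and prioritises context so that a human makes every final call.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Message:
    author: str
    channel: str
    text: str


@dataclass
class ReviewItem:
    message: Message
    score: float               # model's confidence that this needs attention
    context: List[Message]     # surrounding messages, collated for the moderator


@dataclass
class TriageQueue:
    """The machine collates and prioritises; a human moderator makes every final call."""
    score_toxicity: Callable[[str], float]  # stand-in for whatever detection model is in use
    threshold: float = 0.8
    pending: List[ReviewItem] = field(default_factory=list)

    def ingest(self, message: Message, recent: List[Message]) -> None:
        score = self.score_toxicity(message.text)
        if score >= self.threshold:
            # Bundle the flagged message with its surrounding conversation,
            # so the moderator sees context and nuance, not an isolated line.
            self.pending.append(ReviewItem(message, score, recent[-5:]))

    def next_for_review(self) -> Optional[ReviewItem]:
        # Surface the highest-confidence item first; the human decides the action.
        self.pending.sort(key=lambda item: item.score, reverse=True)
        return self.pending.pop(0) if self.pending else None
```

Even in a toy like this, the design choice is the same one argued for above: the model only queues and contextualises; it never takes the moderation action itself.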

This question of offloading some of our emotional labour to the machine is a fascinating one; it's also one about which Leigh Alexander wrote a thoughtful piece last year, in which we were featured.

Solving toxicity in online platforms is going to require a combination of approaches; we've created a messy problem, and the solution requires the same degree of nuance. One thing is clear, however: this is the time for something to be done. As Ellen Pao, former CEO of Reddit, noted:

Spirit AI builds tools to make the future of digital interactions better: both with virtual humans, and real humans. We make Character Engine, for authoring dynamic improvisational AI characters, and Ally, a tool for detecting and intervening in the social landscape of online communities — to curtail online harassment, or to promote positive behaviour.
