Week 13: Stepping Back

Kate Styer
Published in Trust and Process
Dec 5, 2018 · 5 min read
Photo by Andy Kelly on Unsplash

A few weeks ago, one of my classmates joked that I should make a restorative justice chatbot. After she said it, we both laughed, but then we paused and started thinking: wait, that's not such a crazy idea. At a very high level, the bot would facilitate restorative circles with members of online communities, taking the place of a human moderator and solving for the time-consuming nature of moderating.

My restorative justice brain nearly had an aneurysm, though, because from a restorative justice perspective, there is a lot wrong with this. Namely, restorative justice is inherently human. It requires humans interacting, in person, showing their humanness to each other. My tech brain countered: well, bots are inherently human too, because humans made them, and what if there's a future where we get better at training them to be more like us? My design brain is trying to broker a compromise between the two. Maybe they all need to circle up.

Despite my restorative justice brain's protests, I started to like the idea of a restorative justice bot more and more. It would give me an opportunity to explore machine learning while also taking a position on the current state of our digital conversational spaces, offering an option different from the ways they have mimicked our criminal justice system, including its flaws. I was also eager to define the shape of my project, to be able to describe it in a more tangible way. I decided to move forward with option 2 from last week: one prototype, in service of my vision for a restorative justice chatbot.

As a result of taking on graduate school and full-time work, my weekends have become a time of intense internal conflict. My body wants rest; my mind does too, but it also knows the weekends offer free time to catch up on my projects and maybe even get ahead! The latter rarely happens, but neither does the former. I usually end up somewhere in between: getting enough done and getting enough sleep to not completely implode, but still feeling like I could have done more. For the sake of my sanity, I've had to let myself off the hook a little bit, let enough be enough, and keep going forward.

In practice, this has occasionally looked like me panicking about my lack of progress or clarity on a Sunday afternoon and enlisting my husband's feedback. He pulls up our living room chair with a notepad and pen in hand, facing me at my desk (which used to be our dinner table). He humors me in all of my yes-buts (all of my reasons why something won't work, why I'm a failure and should just drop out) and eventually helps me get to a place where I can start to move forward. He deserves a degree in marital emotional labor when this is all said and done.

During our Sunday afternoon feedback session this week, as I was explaining the concept of a restorative justice chatbot, my husband pointed out that I still don't know whether a human can effectively moderate a restorative circle online; if we don't know yet whether a human can do it well, how can we train a bot to do it? I had been getting ahead of myself, thinking more about the kind of data the bot would need to learn from in order to respond in the “right” ways. While I had run an initial experiment on Slack a few weeks before to start thinking about what facilitating a restorative circle online could look like, I needed to keep refining that process.

In preparing to present to my thesis class, I defined my problem and concept as clearly as I could at this point. I also developed some diagrams explaining the context: who will use the process, and where it would fit within the typical enforcement-action flow in an online community. Then I created a blueprint showing how a restorative justice process would be initiated and carried out in an online community in response to a conflict between members.

The Challenge: How might we mitigate conflict and harassment in online communities with a restorative justice approach to community guideline enforcement actions?

The Concept: A process for using restorative justice in an online community in response to a conflict between members.

This is a high-level scale showing the kinds of users found in online communities, and which users would be the best candidates for participating in this process.
This is a high-level scale showing the kinds of interactions that take place in online communities. I wanted to point out that my concept wasn't intended to be a response to targeted harassment, but rather a response to the kind of misunderstandings described in pink.
This is an overview of the typical stages of community guideline enforcement. The first line of defense is the established community guidelines, and then depending on the site, content is flagged automatically or users report it. Moderators review and make a decision about how to respond. My process would take place before that decision is made. The result of the restorative justice meeting between members would help determine the enforcement action.
Part 1 of the process flow, from the moment a conflict starts to when the users involved agree to participate in an RJ chat with the moderator. One of the key interventions I'm adding: in addition to flagging or reporting, the reporting member would have the option to indicate their willingness to participate in an RJ chat with the member they're in conflict with or feel offended by (see the sketch after these captions). This reflects one of the key features of restorative justice, which is that it gives the “victim” more control of the situation and considers their feelings and needs first.
Part 2 of the process flow, showing a condensed version of the discussion flow.
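
To make the flow concrete for myself, here's a rough sketch of it in Python. It's purely illustrative: every name below (Report, route_report, run_rj_chat) is made up, and the facilitation step would of course be a real conversation led by a human moderator (or, someday, a bot), not a function call. The point is just where the RJ opt-in branches off the usual review path, before any enforcement decision.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Report:
    reporter: str       # the member who flagged or reported the content
    reported: str       # the member whose content was reported
    content_id: str
    rj_opt_in: bool     # the new intervention: willingness to join an RJ chat

def run_rj_chat(reporter: str, reported: str) -> str:
    """Placeholder for the facilitated restorative circle.

    In the real process this is a conversation facilitated by a human
    moderator (or, eventually, a trained bot), not a function call.
    """
    return "agreement_reached"

def moderator_decision(report: Report, rj_outcome: Optional[str]) -> str:
    """The moderator's enforcement decision, informed by the RJ outcome."""
    if rj_outcome == "agreement_reached":
        return "log resolution; no enforcement action"
    return "standard enforcement review"

def route_report(report: Report) -> str:
    """What happens after a report, before any enforcement action."""
    if report.rj_opt_in:
        # The RJ chat happens before the moderator decides; its outcome
        # informs (rather than replaces) the enforcement decision.
        outcome = run_rj_chat(report.reporter, report.reported)
        return moderator_decision(report, rj_outcome=outcome)
    # Usual path: the moderator reviews and decides on their own.
    return moderator_decision(report, rj_outcome=None)
```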

Concerns
One of the logistical concerns that popped up right away was scheduling the restorative justice chat with three or more people. It's hard enough to do this in person; when you add different locations and the fact that online community participation happens alongside in-person family and professional responsibilities, it only gets harder from there.

In general, time is a big concern, and it has been since I started down this road. On large platforms, and especially in groups with volunteer moderators, mechanisms that auto-flag content and let members report it are very much part of the answer to lightening the workload of managing a group of thousands, sometimes millions, of people. Beyond that, moderators still have to review reports and make decisions, and many communities require multiple moderators to handle the volume. I don't intend to ignore this, but I want to find a way to position my concept as an opportunity for a culture shift, maybe even an anti-product.

Next Steps
I see these assets as the beginning of the foundation for my prototype, the anchor for the process. My next steps are to review and reflect on the feedback I received from my class and our guest critic, and then move forward with developing a role play to test my process in response to a conflict.
