Keeping Everyone in the Picture: How Chatbots Can Help Mediate Content Sharing Conflict

Kavous Salehzadeh Niksirat
Published in ACM CSCW
5 min read · Sep 20, 2023
MediationBot Facilitating Conversations to Resolve Privacy Conflicts in Content Sharing

This blog post is related to the paper “On the Potential of Mediation Chatbots for Mitigating Multiparty Privacy Conflicts — A Wizard-of-Oz Study” by Kavous Salehzadeh Niksirat, Diana Korka, Hamza Harkous, Kévin Huguenin, and Mauro Cherubini. https://doi.org/10.1145/3579618

The multiparty privacy problem

Imagine the situation. You go to a party, have too much to drink, and the next day discover that someone has shared unflattering party photos of you on social media. Every day, tens of thousands of photos and videos are posted to social media platforms without the consent of the people included in them, despite such content being widely considered co-owned. And many of these non-consenting individuals say that they suffer harm as a result, ranging from embarrassment to cyberbullying, discrimination, and public shaming.

The disagreement arising from this type of non-consensual content sharing (technically known as Multiparty Privacy Conflicts, or MPCs) is a significant problem. The mainstream media is full of anecdotal evidence of the inconvenience and distress caused by the non-consensual sharing of multimedia on social platforms. It’s an issue that can affect anyone, as several senior politicians have discovered to their cost. One large-scale survey of internet users [Such et al. 2017] found that 99% of respondents had experienced an MPC on social networks, with just a third of these incidents being resolved, and in most cases only after the image had already been shared.

Unfortunately, it’s a problem that lacks an effective negotiated solution that both accounts for the wishes of the individuals involved and maintains the utility of the sharing platform. Various solutions have been suggested, usually involving either some modification of the content, such as blurring to prevent the identification of a non-consenting individual (the data subject), or limiting who can see the content. But these tend to be non-collaborative: the person sharing the content (the uploader) unilaterally decides what to do. Most collaborative approaches remain in the realm of theory, technically unproven and untested.

Our research, however, suggests that a familiar digital tool, a chatbot, acting in a mediation role that incorporates conflict-resolution techniques can help the people involved in MPCs agree together on an acceptable way forward.

Our solution: MediationBot

Our MediationBot solution emerged from preliminary user-centric participatory work [Niksirat et al. 2021] that invited content sharers into the laboratory to discuss their MPC experiences. One suggestion was to involve mediation in some way. Although human mediation isn’t feasible because of the volume of content sharing, we thought it might be possible to use conversational agent (CA) technology to create a task-specific chatbot that performed a similar role.

The MediationBot solution takes the uploader and data subject through several steps in order to minimize MPCs.

Take the simple example of a photo of two people at a party, where one of them wants to share the photo on social media. When the uploader attempts to share the photo, the content is flagged, and a chat space opens where the chatbot, uploader, and data subject can interact. Because over 90% of content is shared without asking for consent, the process begins by suggesting that the uploader use MediationBot to help navigate the consent process. Next, the data subject is invited into the chat, and the uploader and data subject engage in a structured dialogue, prompted by MediationBot, in which they are encouraged to discuss how they feel about sharing the photo, e.g., their reasons and reservations.

If consent is not reached initially, an important ‘middle-ground’ step follows, in which the chatbot facilitates discussion of measures that might help the two parties reach agreement, such as cropping, blurring, or untagging the photo, or restricting who can see it. The uploader and data subject may also propose their own solutions during this step.

If there’s still no agreement, then the chatbot session ends with a final message that attempts to dissuade the uploader from sharing the photo. The exact wording of this message, as well as any action taken if the uploader decides to end the mediation process and share without consent, will depend on the policies of the social media platform, but this is beyond the scope of our solution.
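To make this flow concrete, here is a minimal, hypothetical sketch of the mediation logic as a scripted decision tree in Python. The prompts, option list, and function names are our own illustrative choices rather than the wording or implementation used in the study, where the chatbot's turns followed a scripted dialogue.

```python
# Hypothetical sketch of MediationBot's mediation flow as a scripted decision tree.
# Prompts and middle-ground options are illustrative assumptions, not the study's wording.

MIDDLE_GROUND_OPTIONS = [
    "crop the photo",
    "blur your face",
    "untag you",
    "restrict who can see the post",
]


def ask(person: str, question: str) -> str:
    """One chat turn: the bot poses a question and the named party replies."""
    return input(f"[MediationBot -> {person}] {question}\n{person}: ").strip().lower()


def mediate() -> str:
    # Step 1: the sharing attempt is flagged and the uploader is invited to mediation.
    ask("uploader", "You are about to share a photo that includes someone else. "
                    "Why would you like to share it?")

    # Step 2: the data subject joins the chat and is asked whether they consent.
    if ask("data subject", "The uploader wants to share this photo of you. "
                           "Do you consent? (yes/no)") == "yes":
        return "shared with consent"

    # Step 3: no initial consent, so walk through middle-ground measures.
    for option in MIDDLE_GROUND_OPTIONS:
        if ask("data subject", f"Would you consent if the uploader agreed to {option}? (yes/no)") == "yes":
            if ask("uploader", f"Are you willing to {option}? (yes/no)") == "yes":
                return f"shared after agreeing to {option}"

    # Step 4: still no agreement; end with a message dissuading the uploader from sharing.
    return "no agreement: MediationBot advises the uploader not to share"


if __name__ == "__main__":
    print("Outcome:", mediate())
```

Running the sketch in a terminal means one person types the answers for both roles; in a real deployment, each party would reply from their own device inside the platform’s chat.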

Testing and results: Less conflict, more agreement

To test the MediationBot solution in principle, we devised a realistic simulation in which pairs of uploaders and data subjects who knew each other role-played a negotiation over whether to share an image, assisted by the ‘chatbot.’ Participants were aged 18 to 24, as research suggests MPCs are more common among young adults. The image was potentially distressing to the data subject, and each scenario started with the data subject not wishing to consent to sharing.

The uploader and data subject used smartphones in separate rooms, with an assistant in another room assuming the role of the chatbot by sticking closely to the decision-tree script that we had designed, piloted, and refined to guide MediationBot’s interactions. For comparison, the simulation was also carried out without the chatbot mediating.

The results were very encouraging. For example:

  • Using the MediationBot promoted greater agreement to share or not share between the participants compared with the sessions where uploaders and data subjects interacted without mediation.
  • Far more participants in the MediationBot scenario used middle-ground solutions to find agreement and said in their interviews afterward how much they appreciated these options, which they would not have considered otherwise.
  • People felt that the MediationBot improved the quality of the interaction, enabling them to have a more structured and meaningful conversation — to express their views, be heard, understood, and treated with respect.

The future: Implementing MediationBot

From our results, it is clear that the MediationBot concept shows considerable promise as a collaborative tool for helping to solve the MPC challenge. The concept will benefit from further development and refinement, such as testing the chatbot in real-world settings rather than in the laboratory, building it on Large Language Models (e.g., ChatGPT), and supporting multiple data subjects. This should all help to make the overall intervention more seamless and natural.

As concern around digital privacy grows (privacy is recognized as a basic human right — Art 12, UDHR), the push for implementing our MPC solution on social media and content-sharing platforms may well come from regulators rather than the platforms themselves. Not that there is anything to prevent the platforms from taking a lead on this. Users may choose to migrate to more privacy-conscious social media, too. But whatever the driver, when it comes to sharing content on social networks, it should only be a matter of time before MediationBot’s conflict resolution skills are helping to keep everyone in the picture.

For more details about our work’s methods, findings, and implications, please check out the full paper here, and for further questions or comments, please get in touch with the first author.
