danielequercia
Jun 14

Social media has recently been found to exacerbate political polarisation and violence, and current technological solutions have mostly focused on detecting and punishing those who misbehave (for example, by terminating the accounts of users engaging in bullying or hate speech) rather than celebrating those with healthy habits (the silent majority).

To tackle that challenge, we have recently developed algorithms that automatically promote the healthy circulation of valuable information.

These algorithms are based on the psychological concept of Integrative Complexity, which refers to a person’s ability to bridge opposing points of view. Recent work in social psychology has found Integrative Complexity to be reflected in a person’s use of language (for example, in the use of expressions like “on the other hand”). Building on that literature, we trained machine learning algorithms to spot markers of high Integrative Complexity in social media posts.

For example, “Death penalty is definitely appropriate to punish all murderers. Society is better off without this type of criminal” is a statement with low Integrative Complexity: it states one, non-negotiable point of view. By contrast, the statement “Death penalty is an understandable attempt to right a wrong, but it does so with a similar wrong action. We should think about aggravating punishments for brutal crimes, but without inflicting physical harm to anyone” has higher Integrative Complexity, because it acknowledges the reasons behind two opposing views and attempts to find a middle ground.
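As a rough illustration of how such a classifier might be built (a sketch, not the pipeline described in the paper), the snippet below trains a simple text model on a tiny, made-up labelled set and scores a new post. The example data, labels, and model choice are all assumptions for demonstration.

```python
# Illustrative sketch only: a simple classifier separating low- from
# high-Integrative-Complexity posts. The tiny labelled dataset is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = high Integrative Complexity, 0 = low.
posts = [
    "Death penalty is definitely appropriate to punish all murderers.",
    "Death penalty is an understandable attempt to right a wrong, "
    "but it does so with a similar wrong action.",
    "There is only one acceptable answer and everyone else is wrong.",
    "On the other hand, both sides raise points worth weighing carefully.",
]
labels = [0, 1, 0, 1]

# TF-IDF over short n-grams can capture markers such as "on the other hand".
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3), lowercase=True),
    LogisticRegression(),
)
model.fit(posts, labels)

# Score a new post: probability of belonging to the high-complexity class.
new_post = "We should punish brutal crimes, but without harming anyone."
print(model.predict_proba([new_post])[0][1])
```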

The general idea is that, by preferentially promoting content that contributes to a healthy dialogue, social media platforms will be able to filter out violent speech and encourage the circulation of valuable information.
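As a minimal sketch of what preferential promotion could look like in practice (again an illustration, not the platform mechanism proposed in the paper), a feed could be re-ranked with any scoring function; the rank_feed helper and the 0.5 threshold below are hypothetical.

```python
# Illustrative sketch only: re-rank a feed so higher-complexity posts
# surface first, given any function that scores a post for
# Integrative Complexity (e.g., the classifier sketched above).
def rank_feed(posts, score_fn, min_score=0.5):
    """Keep posts whose score clears the threshold, highest-scoring first."""
    scored = [(score_fn(post), post) for post in posts]
    kept = [pair for pair in scored if pair[0] >= min_score]
    return [post for _, post in sorted(kept, key=lambda pair: pair[0], reverse=True)]

# Usage with the hypothetical model from the previous sketch:
# ranked = rank_feed(feed, lambda p: model.predict_proba([p])[0][1])
```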

The algorithms were tested on Reddit and were found to accurately identify posts high in Integrative Complexity. Interestingly, these posts were characterised not only by what users said in them (e.g., the topics) but also by how they were syntactically constructed (e.g., abundant use of adjectives and complex subordinate clauses was associated with high Integrative Complexity).
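To make the syntactic side concrete, here is a rough sketch (not the exact features used in the paper) that counts adjectives and approximates subordinate clauses with an off-the-shelf parser; it assumes the spaCy en_core_web_sm model is installed.

```python
# Illustrative proxies for the syntactic signals mentioned above.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

# Rough stand-ins for "complex subordinates": common subordinate-clause labels.
SUBORDINATE_DEPS = {"advcl", "ccomp", "acl", "relcl"}

def syntactic_profile(text):
    """Count adjectives and (approximate) subordinate clauses per sentence."""
    doc = nlp(text)
    n_sents = max(sum(1 for _ in doc.sents), 1)
    n_adj = sum(1 for tok in doc if tok.pos_ == "ADJ")
    n_subord = sum(1 for tok in doc if tok.dep_ in SUBORDINATE_DEPS)
    return {
        "adjectives_per_sentence": n_adj / n_sents,
        "subordinate_clauses_per_sentence": n_subord / n_sents,
    }

print(syntactic_profile(
    "We should think about aggravating punishments for brutal crimes, "
    "but without inflicting physical harm to anyone."
))
```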

Limiting or even banning social media won’t stop violent speech. A content-filtering AI could instead be one way to promote healthy online environments in the near future.

For more information, please feel free to contact @lajello. The paper, titled “The Language of Dialogue Is Complex”, is published in the Proceedings of the 13th International Conference on Web and Social Media (ICWSM), and a pre-print is available at https://arxiv.org/abs/1906.02057
