A Paradigm Shift in Social Media Moderation with Tamed AI Agents
Next generation technology can largely solve current issues with social media platforms.
The world of technology has been rapidly evolving, and with it, the issues we face are becoming increasingly complex. One particularly thorny challenge is content moderation on social media platforms. Until now, solutions have relied on either limited automated tools or human moderators, leaving users with little control or transparency.
Myriad controversies, such as biases in moderation, opaque decision-making, political divides, and the role of algorithms, further exacerbate the situation. Interestingly, studies emerging from universities like Wharton are revealing that the ambiguities and controversies inherent in content moderation stem not only from the nebulous nature of defining what is ‘objectionable’, but also, to a considerable degree, from the financial incentives of the platforms themselves. The relationship between a platform’s revenue model and its content moderation process offers a new perspective on these controversies.
Platforms like Facebook and Twitter, mainly reliant on advertising revenue, can ill afford to lose user engagement and end up employing a fine…