Image courtesy of BelugaLinguistics.com

Facebook, AI censorship & content moderation

Beluga's conclusions on Zuckerberg's Congress testimony, AI algorithms, content moderation and the question of transparency.

Una Sometimes · Published in Beluga-team · 4 min read · May 11, 2018


The whole world watched as Mark Zuckerberg testified before the Senate and the House, facing Congress to discuss data privacy in the wake of the Cambridge Analytica scandal. (If you haven't read about the issue, you've at least seen the memes 😂.)


But beyond the data leaks and Cambridge Analytica, another subject caught our attention: content moderation, and Facebook's use of AI. A lot of people are concerned about the role of AI in private censorship, fearing that automated content filtering inevitably results in over-censorship.

Zuck and his team deploy AI tools to flag specific categories of content such as trolls, fake news, terrorism, hate speech, racist and sexist ads and so forth. The problem: Facebook is kind of a dud on this front and already has a lot of trouble reliably flagging inappropriate content, even with the helping hands of thousands of human content moderators. *sigh*

And the problems keep coming Facebook's way: even state-of-the-art AI language recognition models still rely on keyword tagging and hash-matching algorithms, meaning that machines are still far from performing the job the way a human does. They are also humor-blind, unable to recognize sarcasm or irony (the toy sketch after the quote below illustrates why). In a perfect world, that alone seems solvable, but on top of it come regional linguistic slang, cultural norms, local regulations and small languages (on which the FB AI simply isn't trained yet). Even Zuck had to admit that:

"Hate speech is very language-specific (…) and it's hard to do [moderation] without people who speak the local language."
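To see what that context-blindness looks like in practice, here's a minimal, purely illustrative sketch of keyword tagging and hash matching, with a made-up blocklist of our own (this is not Facebook's actual pipeline):

```python
import hashlib

# Hypothetical blocklists, for illustration only.
BANNED_KEYWORDS = {"badword"}
BANNED_HASHES = {hashlib.sha256(b"known bad post").hexdigest()}

def flag(post: str) -> bool:
    """Flag a post if it exactly matches the hash of previously
    removed content, or contains a blocklisted keyword."""
    if hashlib.sha256(post.encode()).hexdigest() in BANNED_HASHES:
        return True
    return any(kw in post.lower() for kw in BANNED_KEYWORDS)

print(flag("known bad post"))    # True:  exact re-upload is caught
print(flag("known bad post!"))   # False: one extra character dodges the hash
print(flag("I quoted 'badword' only to condemn it"))  # True: criticism flagged anyway
```

That last line is the over-censorship problem in a nutshell: the machine sees the keyword, not the intent.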

Yeah… that kind of figures, Marky boy. But let us look at an ideal model and think about how AI could operate on the platform. An adequate AI content moderation system would adapt to social norms and learn and evolve along with the "offensive content" itself. That implies the machines learning cultural context, the AI expanding to "small language groups" and so on. Finally, in a perfect world the machines would produce data sets and evaluation metrics for these issues, thus continuously improving themselves (a sketch of what such an evaluation loop could look like follows below).
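As a back-of-the-envelope idea of that self-evaluation, here's a toy sketch (our own made-up numbers, nothing from Facebook) that scores a moderation model per language group, so gaps in small languages show up as hard metrics instead of silent failures:

```python
# Each pair is (flagged_by_model, actually_violating) for one post —
# hypothetical evaluation data, grouped by language.
results = {
    "english": [(True, True), (False, False), (True, False), (True, True)],
    "tagalog": [(False, True), (False, True), (True, True), (False, False)],
}

for language, pairs in results.items():
    tp = sum(1 for flagged, bad in pairs if flagged and bad)
    fp = sum(1 for flagged, bad in pairs if flagged and not bad)
    fn = sum(1 for flagged, bad in pairs if not flagged and bad)
    precision = tp / (tp + fp) if tp + fp else 0.0  # low = over-censorship
    recall = tp / (tp + fn) if tp + fn else 0.0     # low = abuse slips through
    print(f"{language}: precision={precision:.2f}, recall={recall:.2f}")
```

In this made-up example the smaller language ends up with low recall, which is exactly the kind of gap Zuckerberg admitted to: abuse slipping through because the model was never trained on that language.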


Ethical issues with decision-making AI

We previously did a piece about how software always reflects the biases of its creators and thus becomes prejudiced itself. So the questions we are asking are: how can Facebook's algorithm be free of such prejudice, and if it isn't, how can we hold it accountable? Transparency in coding and algorithms seems to be becoming more important than ever. A solution, in this ideal world we spoke about, would be Facebook being more open and communicating its flaws publicly (for example in keynotes). During his hearing in Congress, Zuckerberg referred to a Facebook AI ethics task force, but didn't elaborate on the responsibilities of such a team.

He then promised Facebook would become more vigilant and active in rooting out problematic content, upholding its own Community Standards and hiring a gazillion new content moderators. What he failed to mention is how exactly the content mods would operate, and how his in-house policing might cause more censorship or a lack of diversity and insight on sensitive topics…

When looking at Facebook's content moderation activity over the past years, a pattern emerges. The platform has proven more than effective at censoring nudity, has blocked a number of journalists' accounts and deleted the content they were spreading, and has effectively shut down civil rights discourse from minorities by flagging it as "hate speech", yet it remained oblivious to the rise of fake and alt-right accounts and to internet trolls harassing its community members. Last but not least, there is private censorship, with big corporations or even governments playing a key role in suppressing content.

Outlook and personal conclusions

As noted above, solutions are sorely needed, and the improvements promised in the Congress hearing need to happen NOW! The pressure is on Facebook to scale up its standards and actually contribute to a democratic flow of information on its platform. At the same time, we need to make sure that transparency is upheld and that decisions aren't being made behind closed doors. The balance between censorship and free speech needs to be struck carefully, without drastically silencing relevant voices or letting hate speech spread. For this, we need a solid conversation to take place, with transparency and internet data security as its biggest driving forces.


What are your opinions on data security and censorship on social media? Let us know and join the conversation! :)

If you like this post we would really appreciate a 👏 or 👏 👏 or 👏 👏👏

For further information on the Congress testimony, make sure to check out this video:

And if you really like us, make sure to ❤️❤️❤️ on our Instagram. We'd really appreciate it!

About Beluga

Beluga helps fast-moving companies translate their digital content. With more than a decade of experience, professional linguists in all major markets and the latest translation technology in use, Beluga is a stable partner of many of the most thriving enterprises in the technology sector. The business goal: to help fast-growing companies offer their international audiences an excellent and engaging user experience.
