Multilingual message content moderation at scale

Part 1: introduction, model design and production infrastructure

When a member receives a message in our app that could be harmful or hurtful, the model lets us check in with them in real time through a pop-up message

Transformer-based architectures and foundation models

Since their introduction in 2017, Transformer-based architectures have been the de-facto standard for state-of-the-art Natural Language Processing (NLP) applications. Like Recurrent Neural Networks (RNNs), they are designed to handle sequential data (inputs whose order affects their meaning, e.g. text), but with four main differences: the complete disposal of recurrence and convolutions, the removal of the need to process data in order, far greater parallelisation, and consequently shorter training times.

A Transformer composed of two stacked encoders and decoders. Image credit: Alammar, J. (2018). The Illustrated Transformer [Blog post]. Retrieved from https://jalammar.github.io/illustrated-transformer/
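To make the contrast with recurrence concrete, here is a minimal sketch (not our production code) of scaled dot-product self-attention, the core Transformer operation: every token attends to every other token through a single matrix product, so the whole sequence is processed in parallel rather than step by step.

```python
import tensorflow as tf

def scaled_dot_product_self_attention(x):
    """Minimal single-head self-attention over a batch of sequences.

    x: float tensor of shape (batch, seq_len, d_model).
    Returns a tensor of the same shape; there is no recurrence,
    so all positions are computed in parallel.
    """
    d_model = tf.shape(x)[-1]
    # For brevity we reuse x as queries, keys and values; a real layer
    # would apply learned linear projections first.
    scores = tf.matmul(x, x, transpose_b=True)        # (batch, seq, seq)
    scores /= tf.sqrt(tf.cast(d_model, tf.float32))   # scale by sqrt(d_model)
    weights = tf.nn.softmax(scores, axis=-1)          # attention weights
    return tf.matmul(weights, x)                      # weighted sum of values

# Toy usage: 2 sequences of 5 tokens with 8-dimensional embeddings.
tokens = tf.random.normal((2, 5, 8))
print(scaled_dot_product_self_attention(tokens).shape)  # (2, 5, 8)
```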

Design and multilingual validation

XLM-RoBERTa (XLM-R) is a perfect example of a foundation model, with its 270+ million parameters and a training set of 2TB of data spanning 100 languages. Originally trained on 500 GPUs in a self-supervised fashion with a Masked Language Model objective, it learns to predict masked tokens in the input sentence. It has been shown to outperform mBERT and to perform reliably in both high- and low-resource languages. The absence of language embeddings in the input made it a perfect choice for the problem we wished to solve.
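To see the Masked Language Model objective in action, the publicly released checkpoint can be queried with Hugging Face's fill-mask pipeline (xlm-roberta-base is the public checkpoint, not our internal fine-tuned model):

```python
from transformers import pipeline

# Public XLM-R checkpoint; works across languages without a language tag.
fill_mask = pipeline("fill-mask", model="xlm-roberta-base")

# The model ranks candidate tokens for the <mask> position.
for prediction in fill_mask("Bonjour, je suis un modèle <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```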

Production infrastructure and business impact

Thanks to a successful partnership with our Engineering team, the Data Science team has been developing machine learning solutions at scale for a while. We now have an internal suite of services for deploying and monitoring TensorFlow-based deep learning models, guaranteeing both high performance and full observability. Building our NLP solution on the back of this track record and real-world experience allowed us to make educated design decisions and reduced the gap between experimentation and production deployment, a common pain point for machine learning projects.
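A minimal sketch of what such a deployment path can look like, assuming a standard TensorFlow Serving setup (the model name, path and toy classifier head are illustrative, not our internal APIs):

```python
import tensorflow as tf

# Toy stand-in for the real network: a small classifier over 768-d
# text embeddings (our production model is XLM-R based).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(768,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Export a versioned SavedModel; TensorFlow Serving watches the base
# directory and hot-swaps new versions without downtime.
tf.saved_model.save(model, "/tmp/models/rude_message_detector/1")
```

Once loaded by TensorFlow Serving, a model exported this way is reachable through its standard REST and gRPC prediction endpoints, which is where an internal inference service like ours would plug in.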

Simplified architecture of the internal deep learning inference service.
A native TensorFlow implementation of the tokeniser is required by XLM-RoBERTa: it is very similar to SentencePiece but requires additional work around some special tokens
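To give a flavour of that additional work: XLM-R reuses a SentencePiece model but shifts every piece ID by one and pins its own special tokens, so an in-graph tokeniser has to replicate that mapping. A rough sketch using the tensorflow_text SentencePiece op (the offset and special-token IDs follow the public XLM-R vocabulary; the file path is illustrative, and edge cases such as <unk> remapping are glossed over):

```python
import tensorflow as tf
import tensorflow_text as tf_text

# Load the public XLM-R SentencePiece model (path is illustrative).
sp_model = tf.io.gfile.GFile("sentencepiece.bpe.model", "rb").read()
sp_tokenizer = tf_text.SentencepieceTokenizer(model=sp_model)

BOS_ID, EOS_ID = 0, 2   # XLM-R's <s> and </s> token IDs
FAIRSEQ_OFFSET = 1      # XLM-R shifts every SentencePiece piece ID by one

def xlmr_tokenize(texts):
    """Tokenise a batch of strings into XLM-R input IDs, fully in-graph."""
    ids = sp_tokenizer.tokenize(texts) + FAIRSEQ_OFFSET  # remap piece IDs
    input_ids, _ = tf_text.combine_segments(
        [ids], start_of_sequence_id=BOS_ID, end_of_segment_id=EOS_ID)
    return input_ids

print(xlmr_tokenize(tf.constant(["Hello world", "Bonjour le monde"])))
```

Because the whole mapping lives inside the TensorFlow graph, the exported SavedModel can accept raw strings directly, with no Python-side preprocessing in the serving path.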

Conclusion and next steps

Our mission is to create a world where all relationships are healthy and equitable. This has its challenges, especially when it comes to machine learning and NLP. We embarked on a journey to ship a natively multilingual engine, able to keep our users safe whatever language they use to send messages on our platforms. Luckily, in recent years academic researchers have developed impressive architectures and methods that have been extremely helpful in paving the way for us to develop the engine that powers the Rude Message Detector and numerous other internal solutions.
