Fighting Scams With Confidence — Part I

Ng Wing Yiu, Data Scientist

ScamShield is a mobile application developed by Open Government Products to block scam calls and SMSes. This article, Part I of a series, covers how we built the model that powers ScamShield’s message filtering; Part II covers the mobile application.


Example of a scam message filtered by ScamShield

Have you ever received an SMS asking you to take a loan or participate in online gambling? Most likely, it was an attempt to scam you. Based on crime statistics published by the Singapore Police Force, the total number of scam cases reported increased by 53.1% from 2018 to 2019, with victims cheated out of over S$165 million. With scams on the rise, we decided to tackle the problem by building ScamShield during Open Government Products’ annual Hackathon in January 2020. Our hypothesis was that if we could block scammers from reaching you via SMS in the first place, the scam could not take place. This post will focus on how we built ScamShield’s message filtering model.

Early versions of ScamShield

Message filtering is a classic text classification problem: given a message, predict whether it is ham (legitimate) or spam. We started off with the SMS Spam Collection Dataset and built our first model using pre-trained BERT from spacy-transformers. As the dataset lacked messages in our local context, the model’s initial precision was below 20%.

As such, we ran an internal trial for 6 months to collect more data from our internal testers and used that to iteratively fine-tune our model. To beef up the dataset further, we used data augmentation techniques to teach the model different variations of a similar message. For example, we put in [MASK] tokens and used the BERT Masked Language Model to fill in the blanks — this allowed us to use different words to convey a similar message. We also replaced OTP codes or phone numbers with randomly generated numbers to form different permutations of a message.
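For illustration, here is a minimal sketch of the number-randomization step, using only the Python standard library. (The [MASK]-filling step additionally requires a trained masked language model, so it is omitted here; the function name is ours, not from the actual codebase.)

```python
import random
import re

def randomize_numbers(message: str) -> str:
    """Replace every digit run (OTPs, phone numbers, amounts) with a
    random number of the same length, producing a new permutation of
    the message with identical wording."""
    def _sub(match: re.Match) -> str:
        return "".join(random.choice("0123456789") for _ in match.group())
    return re.sub(r"\d+", _sub, message)

message = "Your OTP is 482913. Do not share it with anyone."
augmented = randomize_numbers(message)
# Only the digits change; the surrounding text stays intact, so the
# model sees many surface variants of one underlying message.
```

Each call yields a fresh variant, so one labelled message can be expanded into many training examples.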

Example of data augmentation

Over these 6 months, we repeated the process of data labelling, augmentation, and retraining every two weeks, and we saw performance gradually improve. At the end of the trial, we achieved >95% precision (i.e. fewer than 5% of the messages we flagged as spam were actually legitimate).

Preserving users’ privacy with on-device inference

During the trial, the model was hosted on our servers, which meant that every message had to be sent to our servers for classification. We knew this would not be practical in production, as users would not want “Big Brother” looking through their messages. Hence, after proving that we could filter spam messages fairly well, our top priority was to migrate the model to run on the user’s device. This removes the need to process personal data on our servers, preserving user privacy.

As spacy-transformers did not offer a way to build mobile-compatible models, we had to rebuild the model in TensorFlow and then convert it to a mobile-compatible format using TensorFlow Lite.

Extracted from TensorFlow Lite Guide

Just a few lines of code! Easy-peasy right?

Not Exactly Easy-Peasy

Problem 1 — Size constraints

BERT was way too large. Even though the TensorFlow Lite conversion process quantized the model and reduced its size by 4x, the model was still >100 MB, almost as large as your social media apps. So we decided to use ALBERT (A Lite BERT), which came to 34 MB after quantization.
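A back-of-envelope calculation shows why quantization only gets you so far with BERT. Standard post-training quantization converts float32 weights (4 bytes each) to int8 (1 byte each), hence the roughly 4x reduction; this rough estimate ignores non-weight overhead in the saved model:

```python
# Rough model-size estimate: parameter count × bytes per weight.
bert_params = 110_000_000            # BERT-base has ~110M parameters

float32_mb = bert_params * 4 / 1e6   # 4 bytes per float32 weight
int8_mb = bert_params * 1 / 1e6      # 1 byte per weight after int8 quantization

# float32_mb ≈ 440 MB, int8_mb ≈ 110 MB: quantization shrinks the
# model 4x, but ~110 MB is still far too large for a mobile app bundle.
```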

Problem 2 — Memory constraints

As ScamShield ran in the iOS SMS filtering extension, it was subject to a 6 MB memory limit imposed by iOS, significantly lower than the limit for a foreground app. ALBERT required more than 6 MB of memory to run inference on an incoming message, so this became our bottleneck. At this point we had no choice but to abandon ALBERT.

Finding an alternative to BERT / ALBERT

As only a limited number of models were compatible with TensorFlow Lite, we decided to give it a final shot using the average_word_vec model from TensorFlow’s guide. The average_word_vec model has only 4M parameters, compared to BERT’s 110M. After quantization, our final model was 3.9 MB in size and ran inference within the 6 MB memory limit. Model performance was satisfactory, with ~99% precision and 95% recall, similar to what we had achieved with BERT previously.
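To see why this architecture is so small, here is a toy version of it in plain Python: embed each token, average the embeddings, and apply a single dense layer. The vocabulary and weights below are made up for illustration; in the real model they are learned during training, and the parameter count is dominated by vocabulary size × embedding dimension rather than sequence length.

```python
import math
import random

random.seed(0)

VOCAB = ["<unk>", "click", "link", "win", "prize", "your", "otp", "is"]
EMBED_DIM = 4
# Toy weights standing in for the trained embedding table and dense layer.
embeddings = {w: [random.uniform(-1, 1) for _ in range(EMBED_DIM)] for w in VOCAB}
dense_w = [random.uniform(-1, 1) for _ in range(EMBED_DIM)]
dense_b = 0.0

def spam_score(message: str) -> float:
    """average_word_vec in miniature: embed tokens, average the
    vectors, then apply one dense layer with a sigmoid."""
    tokens = message.lower().split()
    vecs = [embeddings.get(t, embeddings["<unk>"]) for t in tokens]
    avg = [sum(col) / len(vecs) for col in zip(*vecs)]
    logit = sum(w * x for w, x in zip(dense_w, avg)) + dense_b
    return 1 / (1 + math.exp(-logit))

score = spam_score("click link win prize")
```

With a vocabulary of tens of thousands of words and a modest embedding dimension, this adds up to a few million parameters, versus BERT’s 110M.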

Final Touches

On top of the model, we added some final touches to minimise false positives in our classification. We also devised a method to deal with novel scams quickly, beyond relying on the model.

Reference against blacklisted templates

Scammers are innovative and come up with new scam messages all the time. While we do continue to retrain the model to adapt to new scam messages, the retraining process takes time. To make sure we could combat scammers in a timely manner, we maintained a blacklist of messages that were known false negatives, based on user reports. For each incoming message, we computed its Levenshtein distance to each blacklisted message and filtered the message to junk if there was any close match.
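A minimal sketch of this check, with the edit distance implemented directly; the helper names and the 20% relative-distance threshold are our illustrative choices, not the production values:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def matches_blacklist(message: str, blacklist: list[str],
                      threshold: float = 0.2) -> bool:
    """Filter to junk if the message is within a small relative edit
    distance of any known-bad template."""
    return any(
        levenshtein(message, bad) / max(len(message), len(bad)) <= threshold
        for bad in blacklist
    )

blacklist = ["Congrats! You won $5000. Claim at bit.ly/xxxx"]
# A scammer tweaks the amount and the link, but the template still matches.
hit = matches_blacklist("Congrats! You won $9000. Claim at bit.ly/yyyy", blacklist)
```

Using a relative (length-normalized) distance means short and long templates are matched with comparable tolerance.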

Setting an 80% confidence threshold

Advertising messages tend to be borderline spammy, as many of them contain clickbait along the lines of “click this link to …”, which is syntactically similar to many scam messages. But ultimately advertising messages are not scams, and they may contain useful information for our users. We therefore set a higher confidence threshold before flagging a message as a scam.

Pushing the confidence threshold from 0.5 to 0.8 improved precision from 97.9% to 99.2%, at the expense of recall on scam messages dropping from 97.4% to 94.6%. We were willing to make this tradeoff because we wanted to be extra cautious about accidentally filtering away important messages.
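The tradeoff is easy to see on toy data. Below, raising the threshold stops a borderline advertisement (score 0.7) from being flagged, lifting precision, while one genuine scam (score 0.6) slips through, lowering recall. The scores and labels are invented for illustration, not drawn from our dataset:

```python
def precision_recall(scores, labels, threshold):
    """Precision and recall of the 'scam' class (label 1) at a threshold."""
    flagged = [l for s, l in zip(scores, labels) if s >= threshold]
    tp = sum(flagged)                 # scams correctly flagged
    fp = len(flagged) - tp            # legitimate messages flagged
    fn = sum(labels) - tp             # scams that slipped through
    precision = tp / (tp + fp) if flagged else 1.0
    recall = tp / (tp + fn)
    return precision, recall

# Toy model scores and true labels (1 = scam, 0 = legitimate).
scores = [0.95, 0.9, 0.85, 0.7, 0.6, 0.3, 0.1]
labels = [1,    1,   1,    0,   1,   0,   0]

p_low, r_low = precision_recall(scores, labels, 0.5)
p_high, r_high = precision_recall(scores, labels, 0.8)
# Raising the threshold trades recall for precision.
```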

Precision-Recall Curve to determine the optimal probability threshold

This is what the decision flow looks like when ScamShield receives an incoming message (the filtering is only applied to messages from unknown contacts). If a message is determined to be a scam, it is filtered into the user’s junk inbox, much like how junk emails work. The entire process runs solely on the user’s phone.
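The flow can be sketched as a few ordered checks. This is a simplification: the function name is ours, and in practice the blacklist hit and scam probability would come from the template matcher and the on-device model described above.

```python
SCAM_THRESHOLD = 0.8  # the confidence threshold discussed above

def route_message(sender_known: bool, blacklist_hit: bool,
                  scam_probability: float) -> str:
    """Sketch of ScamShield's on-device decision flow for an incoming SMS."""
    if sender_known:
        return "inbox"   # filtering only applies to unknown contacts
    if blacklist_hit:
        return "junk"    # close match to a known scam template
    if scam_probability >= SCAM_THRESHOLD:
        return "junk"    # model is confident the message is a scam
    return "inbox"       # below threshold: give the message the benefit of the doubt

route_message(sender_known=False, blacklist_hit=False, scam_probability=0.92)
```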

Process to determine whether a message is scam

Key Takeaways

  1. Do everything you can to preserve user privacy. Build trust between users and your product — it lowers the barriers to entry.
  2. Do not be disheartened if the model performs poorly at first. Collect feedback and improve your model iteratively. Consider using data augmentation to enrich your dataset if you have difficulty with data collection.
  3. Sometimes a machine learning model is simply not enough. Depending on the business context, you may want to have your own add-ons to work together with the model.

Read on to find out more about the mobile application in Part II!

Open Government Products is a team of engineers, designers, and product managers who build software systems that improve the public good. If you’re interested to join us, click here to apply.


Lennard Lim, Product Manager
Daniel Khoo, Software Engineer
Aaron Lee, Software Engineer
Seah Chin Ying, Software Engineer
Huang Kaiwen, Software Engineer
Hafizah binte Abu Husin, Designer
Christabel Png, Designer
National Crime Prevention Council (NCPC)
SPF Anti-Scam Centre


