An Exercise In Identifying Fake Instagram Profiles With TensorFlow.

Instagram has had, for a long time now, a problem of fake (& spam) profiles. Insights, an app I built for managing Instagram followers, already provides a suite of related metrics, so naturally I thought I'd build out a way to estimate what percentage of your followers are authentic. What better way to tackle this simple classification problem than with some machine learning?

This post goes over the process and learnings of trying to build out this feature — spoiler: it’s not as useful as I hoped it would be, but more on that later.

The first task was to build the classification model. I chose to use TensorFlow so that I would later be able to leverage the Firebase ML Kit to ship the custom model to users.

Data collection.

To begin building the model I first needed to collect some data. This came in the form of Instagram profiles. I quickly built a crawler to download follower and following lists for a number of users from my own friendship graph. An example of the data that was retrieved is as follows:

A single profile returned as part of a relationships API request.
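The screenshot isn't reproduced here, but an entry from a followers/following list looks roughly like the dict below. The field names mirror the attributes discussed later in this post and may not match the real API response exactly; the values are made up.

```python
# Illustrative only: roughly what a single entry from a followers/following
# list looks like. Field names mirror the attributes discussed in this post
# and may not match the real API response exactly; values are made up.
profile = {
    "pk": 1234567890,
    "username": "some.user_name19",
    "full_name": "Some User",
    "is_private": False,
    "is_verified": False,
    "has_anonymous_profile_picture": True,
    "profile_pic_url": "https://...",
}
```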

This isn’t quite enough information to determine whether a profile is authentic or not. Intuition suggests that fake profiles have a few obvious attributes: generated usernames, dummy profile names, sketchy bios, and, among other things, a heavily skewed follower-to-following ratio. To get the data that was missing, I needed to fetch complete profile details, which would provide more fields to use as potential factors:
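Without reproducing the exact payload, the additional fields of interest from a full profile lookup look something like this (values made up for illustration):

```python
# Illustrative only: the additional fields from a full profile lookup that
# become potential factors below. Values are made up.
profile_details = {
    "biography": "Check out the link below!!",
    "follower_count": 23,
    "following_count": 4879,
    "media_count": 2,
    "usertags_count": 0,
    "reel_auto_archive": "unset",
}
```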

This, however, is not ideal: it requires a new HTTP request for each user, and Instagram eventually throttles the API requests. For the large collection of accounts that I would eventually need to make predictions on, this did not scale at all, and as a result the original idea was no longer feasible. I decided to push on anyway, see what the results looked like, and perhaps solve this issue later.

I proceeded to download the profile data and then manually classify profiles as obviously spam or not using a Tinder-style left/right swipe method, leaving profiles that I was unsure about as authentic.

An example classification via a command line tool — in this case, the profile in question is classified as inauthentic.
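The original tool isn't shown in full here, but the labelling loop boiled down to something like the sketch below; the file names and key bindings are made up for illustration.

```python
import json

# A minimal sketch of the swipe-style labelling loop. Profiles that are
# skipped or uncertain default to authentic, as described above.
# File names and key bindings are illustrative, not the original tool's.
with open("profiles.json") as f:
    profiles = json.load(f)

labels = {}
for p in profiles:
    print(f"@{p['username']} | {p.get('full_name', '')} | "
          f"{p.get('follower_count', '?')} followers / "
          f"{p.get('following_count', '?')} following")
    print(p.get("biography", ""))
    answer = input("[f]ake / [a]uthentic (default authentic): ").strip().lower()
    labels[p["username"]] = 0 if answer == "f" else 1  # 1 = authentic

with open("labels.json", "w") as f:
    json.dump(labels, f)
```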

Factor identification and pre-processing.

The next step in the process was to identify which factors from the data would be used in training and, eventually, in the predictions. After some testing I settled on the following factors (a sketch of how they are vectorized follows the list):

  • The username type — encoded via regex into integers.
  • The number of distinct “words” delimited by dots or underscores in the username.
  • The length of the full_name.
  • Number of Unicode characters in the full_name.
  • The number of distinct “words” delimited by dots, underscores or spaces in the biography.
  • Number of Unicode characters in the biography.
  • The ratio of followers to following, and the individual counts.
  • The number of posts (media_count) on the profile.
  • The number of posts which have tagged the profile (usertags_count).
  • Whether the profile is_private or not.
  • Whether the profile is_verified or not.
  • Whether the profile has_anonymous_profile_picture or not.
  • Whether the story reel archive setting is configured ( reel_auto_archive).
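To make the list above concrete, here is a rough sketch of turning a single profile into a feature vector. The real encoding produced 16 columns and used different regexes and category definitions, so treat the details below as illustrative rather than the exact implementation.

```python
import re

def username_type(username):
    # Crude regex-based encoding of the username "shape" into an integer.
    # The real categories and regexes differed; this just shows the idea.
    if re.fullmatch(r"[a-z]+", username):
        return 0                      # plain single word
    if re.search(r"\d{4,}$", username):
        return 1                      # trailing run of digits, e.g. "jane19483"
    if re.search(r"[._]", username):
        return 2                      # dot/underscore separated
    return 3                          # anything else

def vectorize(profile):
    username = profile["username"]
    full_name = profile.get("full_name", "")
    bio = profile.get("biography", "")
    followers = profile.get("follower_count", 0)
    following = profile.get("following_count", 0)

    return [
        username_type(username),
        len(re.split(r"[._]", username)),           # "words" in the username
        len(full_name),
        sum(ord(c) > 127 for c in full_name),       # non-ASCII characters (one reading of "Unicode characters")
        len(re.split(r"[._ ]+", bio)) if bio else 0,
        sum(ord(c) > 127 for c in bio),
        followers / max(following, 1),              # follower-to-following ratio
        followers,
        following,
        profile.get("media_count", 0),
        profile.get("usertags_count", 0),
        int(profile.get("is_private", False)),
        int(profile.get("is_verified", False)),
        int(profile.get("has_anonymous_profile_picture", False)),
        int(profile.get("reel_auto_archive") == "on"),
    ]
```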

The next step was to feed in this data and train the model. I had approximately 1,000 manually classified profiles to use for training and evaluation. Once the profiles were vectorized I had the following data:

Training set shape: (775, 16)
Test set shape: (194, 16)

I then fed the vectorized profile data into the following Keras configuration.
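The gist with the exact model definition isn't reproduced here; the snippet below is a minimal stand-in that captures the general shape (a small fully connected network over the 16 features with a sigmoid output), assuming train_x/train_y and test_x/test_y hold the vectorized sets from above. Layer sizes and hyperparameters are illustrative, not the original values.

```python
import tensorflow as tf
from tensorflow import keras

# A minimal stand-in for the model configuration: a small fully connected
# network over the 16-dimensional feature vectors with a binary output.
model = keras.Sequential([
    keras.Input(shape=(16,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

model.fit(train_x, train_y, epochs=50, batch_size=32,
          validation_data=(test_x, test_y))
```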

The results of the evaluation on the hold-out set are as follows:

Confusion matrix for profile authenticity (Y = authentic).

             Pred. Y    Pred. N |   Total
Act. Y         60.00       4.00 |   64.00
Act. N          6.00     124.00 |  130.00
-----------------------------------------
Total          66.00     128.00 |  194.00

Accuracy: 0.948453608247
True positive: 0.9375
False positive: 0.0625
Precision (Y): 0.909090909091
Precision (N): 0.96875

~95% accuracy (correct classification rate) is great! That being said, I noticed that the accuracy and precision fluctuated between roughly 85% and 95% across repeated training runs; this just happened to be one of the more accurate ones.
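For anyone replicating this, the metrics above are easy to recompute from the hold-out predictions. A minimal sketch using scikit-learn (not the original evaluation code), assuming the model and test set from the previous step:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# Threshold the sigmoid outputs at 0.5 to get hard labels (1 = authentic).
pred_y = (model.predict(test_x).ravel() > 0.5).astype(int)

print(confusion_matrix(test_y, pred_y))
print("Accuracy:", accuracy_score(test_y, pred_y))
```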

Putting it all together.

With the model ready, the next step was to package it with the app so that it could actually be useful. To test that this worked correctly, I quickly threw up a screen which queries a username and spits out the authenticity prediction. The results are pretty awesome.

A quick demo of the model in the actual app. A prediction of 1 indicates an authentic profile; the adjacent number is the probability of the profile being authentic.
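To ship the model through Firebase ML Kit it first has to be converted to TensorFlow Lite. The app-side code is out of scope for this post, but the conversion step looks roughly like the following, assuming a recent TensorFlow release (the converter API has changed between versions) and a made-up output file name.

```python
import tensorflow as tf

# Convert the trained Keras model to a TensorFlow Lite flatbuffer that can
# be bundled with the app or hosted via Firebase ML Kit as a custom model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("authenticity.tflite", "wb") as f:
    f.write(tflite_model)
```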

Conclusions.

After playing around with it more, the model seems pretty okay at drawing the correct distinction between accounts that are clearly fake and those that are not. That being said, it's not the most useful: it can only handle one profile at a time, and since it's fairly easy for a human to tell whether a single account is fake or spam, it doesn't really make sense for a user to check profiles manually one by one.

The next steps are to determine whether I can query the profile data via Instagram's GraphQL API, which may allow bulk account lookups. If that approach turns out to work, it may end up being feasible to build out the original feature.

While this experiment wasn't entirely fruitful, it was a great introduction to building a real feature with machine learning. Moreover, it really speaks to how easy it is to use ML in your own apps, so long as you have reliable access to the data you need to make use of the model.

For those interested in the source code and data, the below GitHub repository houses the sample code.

Thanks for reading!

If you have any feedback or comments, or if there is a glaring mistake, let me know below!

Building cool stuff — https://karn.io