LinkedIn Releases a Report on Its Automated Fake Account Detection Efforts

With Twitter recently deleting millions of fake accounts and Facebook disabling 583 million fake accounts in the first quarter of this year, social media platforms are taking responsibility for cracking down on fake accounts. Like Twitter and Facebook, LinkedIn is making strides in identifying and removing fake accounts.

According to a statement by LinkedIn, the professional networking company is implementing careful measures to safeguard the integrity of its platform and protect its users.

LinkedIn states, “to maintain a safe and trusted professional community on LinkedIn, we require that every LinkedIn profile must uniquely represent a real person. One of the ways we ensure that accounts are real is by building automated detection systems at scale for detecting and taking action against fake accounts.”

The motives of the groups and users who abuse LinkedIn range widely, from scraping, phishing, and spamming to fraud and other malicious intent. Recently, the U.S. government said it believed China was using fake LinkedIn profiles to recruit American spies, and called on the social media company to intervene.

LinkedIn’s approach to counteracting attacks on its platform involves a “funnel of defenses to detect and take down fake accounts at multiple stages,” as shown below:

[Figure: LinkedIn’s multi-stage funnel of defenses. Source: LinkedIn]
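The funnel structure itself is easy to picture in code: detectors run in stages, and an account that slips past an early stage can still be caught by a later one. The sketch below is a minimal illustration of that idea only; the stage names, checks, and account fields are assumptions made for the example, not LinkedIn's actual design.

```python
# Minimal sketch of a "funnel of defenses": detectors run in stages, and an
# account that survives early stages can still be caught by later ones.
# Stage names, checks, and account fields are illustrative assumptions.

from typing import Callable, Optional

Account = dict
Stage = Callable[[Account], bool]  # True means "flag as fake"

def registration_scoring(account: Account) -> bool:
    return account.get("registration_risk", 0.0) > 0.8

def cluster_detection(account: Account) -> bool:
    return account.get("in_abnormal_cluster", False)

def activity_models(account: Account) -> bool:
    return account.get("activity_anomaly", False)

FUNNEL: list[tuple[str, Stage]] = [
    ("registration scoring", registration_scoring),  # top of the funnel
    ("cluster detection", cluster_detection),
    ("activity models", activity_models),            # bottom of the funnel
]

def first_stage_to_catch(account: Account) -> Optional[str]:
    """Return the earliest funnel stage that flags the account, if any."""
    for name, stage in FUNNEL:
        if stage(account):
            return name
    return None  # nothing flagged: escalate to member reports / human review

# Missed at the top of the funnel, caught at the bottom (redundancy).
print(first_stage_to_catch({"registration_risk": 0.2, "activity_anomaly": True}))
```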

LinkedIn recognizes that most attackers need a large number of fraudulent accounts. This is where the top of the funnel, registration scoring, comes into play. As explained by LinkedIn, “For many types of abuse, attackers require a large number of fake accounts for the attack to be financially feasible. Thus, in order to proactively stop fake accounts at scale, we have machine-learned models to detect groups of accounts that look or act similarly, which implies they were created or controlled by the same bad actor.”

A machine-learned model assesses every new registration attempt. Registration attempts that receive a low abuse risk score are allowed to proceed right away, while those with a high abuse risk score are blocked from registering. Signup attempts with medium risk scores must pass an additional verification step. According to the figure below, the model blocked five million fake accounts from being created in less than one day:

[Figure: fake account registrations blocked by the model. Source: LinkedIn]
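In code, this three-tier routing amounts to bucketing a model's score into one of three actions. The following is a minimal sketch under assumed features and thresholds; LinkedIn has not published its model, its features, or its cutoffs, so everything here is illustrative.

```python
# Sketch of three-tier registration scoring, as described above.
# The scoring function, feature names, and thresholds are illustrative
# assumptions, not LinkedIn's actual implementation.

from dataclasses import dataclass

LOW_RISK_THRESHOLD = 0.3   # hypothetical cutoffs on the abuse risk score
HIGH_RISK_THRESHOLD = 0.8

@dataclass
class RegistrationAttempt:
    ip_reputation: float    # assumed feature: 0.0 (clean) .. 1.0 (abusive)
    signup_velocity: float  # assumed feature: normalized signups from this source

def abuse_risk_score(attempt: RegistrationAttempt) -> float:
    """Stand-in for a trained model's predicted probability of abuse."""
    # A real system would call something like model.predict_proba(features);
    # this toy linear blend just keeps the example self-contained.
    return min(1.0, 0.6 * attempt.ip_reputation + 0.4 * attempt.signup_velocity)

def route_registration(attempt: RegistrationAttempt) -> str:
    """Map a risk score to the funnel action described in the article."""
    score = abuse_risk_score(attempt)
    if score < LOW_RISK_THRESHOLD:
        return "allow"      # low risk: register right away
    if score >= HIGH_RISK_THRESHOLD:
        return "block"      # high risk: prevented from registering
    return "challenge"      # medium risk: extra verification step

print(route_registration(RegistrationAttempt(ip_reputation=0.9, signup_velocity=0.9)))  # block
```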

If the registration-time model doesn’t have enough information to decide, LinkedIn employs additional models to determine whether accounts are fake. By grouping clusters of accounts based on shared attributes, LinkedIn is able to identify suspicious behavior. As explained, “first, we create clusters of accounts by grouping them based on shared attributes. We then find account clusters that show a statistically abnormal distribution of data, which is indicative of being created or controlled by a single bad actor.”
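One simple way to picture this two-step process: group accounts on a shared attribute, then test each cluster's statistics against the population. In the sketch below, the shared attribute (an IP prefix) and the z-score test on cluster size are assumptions made for illustration; the report does not specify which attributes or statistical tests LinkedIn uses.

```python
# Illustrative sketch of cluster-level detection: group accounts by a shared
# attribute, then flag clusters whose size is statistically abnormal.
# The attribute choice, data, and z-score test are assumptions.

from collections import defaultdict
from statistics import mean, stdev

# Hypothetical accounts keyed by a shared registration attribute
# (e.g., the IP prefix they signed up from).
accounts = [
    {"id": 1, "ip_prefix": "203.0.113"},
    {"id": 2, "ip_prefix": "198.51.100"},
    # ... imagine many more rows here ...
]

def flag_abnormal_clusters(accounts, attribute, z_cutoff=3.0):
    """Return attribute values whose cluster size is a z_cutoff outlier."""
    clusters = defaultdict(list)
    for account in accounts:
        clusters[account[attribute]].append(account["id"])

    sizes = [len(ids) for ids in clusters.values()]
    if len(sizes) < 2:
        return []
    mu, sigma = mean(sizes), stdev(sizes)

    # Clusters far above the typical size suggest bulk creation by one actor.
    return [
        value for value, ids in clusters.items()
        if sigma > 0 and (len(ids) - mu) / sigma > z_cutoff
    ]

suspicious = flag_abnormal_clusters(accounts, "ip_prefix")
```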

Activity-based models catch the fake accounts that aren’t created in bulk. “We have many models that either look for specific types of bad behavior typical to abusive accounts or behavior that is anomalous. Additionally, our systems have redundancy, which ensures that fake accounts not caught by the early stages of our defenses (top of the funnel) are eventually caught by the latter ones (bottom of the funnel).”
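An off-the-shelf anomaly detector gives a feel for what an activity-based model might look like. Here IsolationForest is one possible stand-in, and the per-account features (invites sent, profile views, connection acceptance rate) are assumptions for the example, not signals LinkedIn has disclosed.

```python
# Sketch of an activity-based anomaly model using an off-the-shelf detector.
# IsolationForest is one possible choice; the features below are illustrative
# assumptions, not the signals LinkedIn says it uses.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic per-account activity: [invites/day, profile views/day, accept rate].
normal_accounts = rng.normal(loc=[5, 20, 0.6], scale=[2, 8, 0.1], size=(500, 3))
abusive_accounts = rng.normal(loc=[200, 5, 0.05], scale=[30, 3, 0.02], size=(5, 3))

# Fit on typical activity, then score everything; -1 marks anomalies.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_accounts)
labels = detector.predict(np.vstack([normal_accounts, abusive_accounts]))

print((labels[-5:] == -1).sum(), "of 5 abusive accounts flagged")
```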

LinkedIn says that “a human element will always be necessary to catch fake accounts that have evaded our models,” and it takes into account reports from members who notice suspicious activity. As an extra layer of scrutiny, LinkedIn also maintains a team of investigators that evaluates reported fraudulent behavior.

Fake accounts and attacks are an increasingly serious problem across the social media space. Still, it’s encouraging that major social media platforms are taking the initiative to protect their platforms and maintain the trust of their users.

The full report on the automated measures LinkedIn uses to detect fake accounts can be read here.