Suddenly your users are complaining about purchases they didn’t make. They can’t access their accounts. They’re screaming “I’m hacked” but have no idea why, or how. Your customer service team is overwhelmed with questions.
Account takeover has arrived. Now you need to answer this question on every single user login:
The user sent the right password, but is it really the user?
Let’s talk about building a defensive program around account takeover, one that makes sure users are who they claim to be even after they’ve successfully authenticated.
Why does this happen?
Bad guys make lots of money from spam. Spam is more effective with real accounts.
Governments break into protestor accounts. At Facebook, attacks like this preceded the Tunisian revolution.
You can steal money stored in accounts. PayPal has dealt with this for over a decade. Bitcoin companies are starting from scratch to deal with it.
This list is not exhaustive; there’s a long tail of reasons the bad guys will grief your users.
How are accounts taken over?
Once you realize how many methods exist, enumerating every threat becomes less important. Still, here’s a topical list.
Credential dumps are a massive source of these attacks. For instance, Adobe lost a huge stash in 2013, which still surprises startups that aren’t inspecting logins.
Keylogging malware can steal a user’s password and replay it.
Classic phishing attacks are still prevalent and trick the user into sending credentials to the wrong domain.
“Man in the middle” attacks can steal passwords off the network between the user’s device and the application.
Mature companies have realized that simply having the correct username and password is not enough to trust that user with access. Security teams employ the following programs to weed out abuse from malicious (but successful) authentications.
Keep in mind that there are many half-solutions involved here, because the underlying platform (the password) is flawed. When many half-protections are all employed, it can be extremely difficult to take over accounts in bulk, even with the victim’s passwords at hand.
This is a set of logs your application will use and is the cornerstone of all other protections — a known history of the IP addresses, cookies, and browsers that your user has provided since you’ve known them. They guide your decisions to further challenge the user. Here is an example set of data about a user:
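For instance, a single user’s history might look like this (the field names and cookie value here are hypothetical):

```json
{
  "username": "magoo",
  "known_ips": ["184.108.40.206"],
  "known_cookies": ["a1b2c3d4e5"],
  "known_locations": ["Chicago"],
  "known_browsers": ["Chrome"]
}
```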
Now, consider a new event when “magoo” logs in from 184.108.40.206, with a familiar cookie, from Chicago, and using the Chrome browser. With the above data, you can limit your suspicion of this user and eliminate entire categories of potential attackers.
Only in a complete and total compromise will this user turn rogue: a rooted, malware-infected device, for instance, or a gun to their head.
With a strong login history, you can design your infrastructure with the appropriate challenges to the user and their identity. You can even decorate this data with “very trusted” cookies and IP addresses, based on whether they registered with that ID or performed an action that earns trust for that location, like multifactor or a history of purchases without chargebacks.
Unrecognized IP + Cookie? Ask for email confirmation!
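Rules like that can be sketched as a tiny decision function. This is only an illustration; the field names and challenge names are hypothetical, and a real system would weigh many more signals:

```python
def challenge_for(login, history):
    """Decide which step-up challenge (if any) to require, based on
    whether the incoming login matches the user's known history."""
    known_ip = login["ip"] in history["ips"]
    known_cookie = login["cookie"] in history["cookies"]
    if known_ip and known_cookie:
        return None                  # familiar device: no extra friction
    if known_ip or known_cookie:
        return "email_confirmation"  # partially familiar: light challenge
    return "mfa"                     # totally unfamiliar: strongest challenge
```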
Some companies put a ton of effort into identifying the browser in this decision-making process. These techniques are also used by ad companies and are contentious around privacy and anonymity. However, they are among the strongest protections everyday people have against government snooping, fraudsters, and spammers, so their value depends on whose hands they’re in. Most technology in security has this dual edge.
You can automate judgement calls based on the reputation of the user IP address.
Depending on your product, you may not expect to see authentications from known open proxies, known infected hosts, Tor, co-located data centers, EC2 instances, etc. You may only expect mobile carriers and consumer IPs; those sorts of ISPs.
Almost every security company will sell varying types of this data. You can use it to make judgements (along with login history) on whether you want to throw new hurdles in front of the user.
You can also build this database yourself, by profiling different ASNs that have real users, previously known bad netblocks, and straight up blacklists of known malicious CIDR ranges.
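A minimal sketch of such a reputation check, with made-up netblocks and ASNs standing in for purchased or self-built data:

```python
import ipaddress

# Hypothetical reputation data you might buy or build yourself
BAD_NETBLOCKS = [ipaddress.ip_network("203.0.113.0/24")]  # known-bad CIDR ranges
DATACENTER_ASNS = {16509, 14618}                          # e.g. cloud/colo ASNs

def ip_risk(ip, asn):
    """Return a coarse verdict for a login's source IP and ASN."""
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in BAD_NETBLOCKS):
        return "block"      # straight-up blacklisted range
    if asn in DATACENTER_ASNS:
        return "challenge"  # data-center logins are unusual for consumers
    return "allow"
```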
If a user is authenticating and deviates from their known history — we can ask them to complete multifactor authentication (MFA), if your application supports it. One popular method is requesting a one-time password (OTP) with Google Authenticator.
There is a massive misunderstanding of how Google Authenticator helps against active phishing attacks. It doesn’t!
Phishing pages that spoof login forms easily steal the OTP from users.
However — OTP based MFA is a great defense against hackers with credential dumps trolling your platform. In that case, the bad guy does not interact with your users and attempt to steal their tokens, so OTP based MFA still has value for users who have re-used their passwords on your site.
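For reference, the OTP itself is just the TOTP algorithm from RFC 6238, which you can implement with the standard library. A minimal sketch, not production code — a real deployment needs secret storage, rate limiting, and a clock-drift window:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current timestep counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

The RFC 6238 test vectors (20-byte ASCII secret "12345678901234567890") can be used to sanity-check an implementation like this.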
The best case here is something out of band, like a Duo Mobile push, where the potential victim is also informed of the IP address trying to login (which likely isn’t in their region). If they’re not at keyboard, it’s extra suspicious.
Again — tradeoffs. Most people don’t have Duo Mobile installed; the Google Authenticator app is more prevalent. In most cases, the user probably has neither, which leaves something like SMS, which can also be attacked. Also, SMS deliverability is very poor, especially with international users.
Every decision here has a usability, reliability, or security tradeoff.
When a user authenticates with a new “device”, you can decide to challenge the user over email, asking them to confirm that device from an account a potential attacker wouldn’t have access to. Here are two examples:
Notice the nuances. On the left, clicking the link verifies the user and gives you an opportunity to observe the user’s browser and whether the user is actually on the device that initiated the login. In an attack, one of the two individuals (good guy / bad guy) will not have access to both the victim’s IP address and email account. If they differ — it could be a bad guy interfering.
However, this breaks if the user confirms the email from their cell phone or another computer, which shares the same characteristics as a phisher and produces a false positive. This potential for breakage trades a good user experience away for a stricter approach with better security.
On the right, we have a token the user will have to put into their game or website. This can still be phished by the attacker. But — it still gives opportunity to the potential victim that something is suspicious and greatly reduces the success rate of a phisher. This is more relaxed and a better customer experience while providing some security, and Steam has many other restrictions to support the tradeoff, like delayed trade restrictions.
The Steam marketplace has a complex fraud process based on many other signals.
In both cases, had I used a familiar device in my login history, I likely wouldn’t have needed to go through this process.
A huge downfall is when the user has shared the same password with their email account, which would then also be compromised. The user is almost certainly out of luck. They will need serious support from their webmail provider to get back on their feet. It’s extremely hard to trust a user after that.
Most phishers don’t know much about their victim. For instance, I don’t know who you’ve been emailing, where you’ve been shopping, who your friends are, etc. Facebook’s “Social Authentication” here describes this well:
This “who is this?” check occurs after the password, but before a complete authentication. The user must prove that they know something about the internals of their account (in this case, friends!). You could lean on information from your own user data to challenge the user with a question that is only easy for them to answer.
The downside: targeted attacks. It’s very hard to prevent a dedicated attacker from learning this sort of info through real-world channels. But in a credential dump, targeted attacks would not scale to potentially hundreds of thousands of victims, so this defense has its place against specific types of threat.
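A sketch of how such a knowledge-based challenge might be assembled from your own user data. The function and parameter names are hypothetical; the idea is simply one real friend hidden among decoys an attacker shouldn’t recognize:

```python
import random

def social_challenge(friends, all_users, n_decoys=4, seed=None):
    """Build a 'which of these is your friend?' challenge:
    one real friend mixed in with decoys from outside the friend list."""
    rng = random.Random(seed)
    answer = rng.choice(sorted(friends))
    decoys = rng.sample(sorted(set(all_users) - set(friends)), n_decoys)
    options = decoys + [answer]
    rng.shuffle(options)
    return options, answer
```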
SPF / DMARC / DKIM
These are DNS level policies you can use to prevent your domains from being spoofed in email. Setting this up will drastically reduce classic inbox phishing attacks against your users. There are still many other methods a phisher can use, but spoofed email has been a staple method of phishing for decades, and surprises companies that don’t have this setup.
There are issues here, though. To have the most aggressive stance on spoofed email, you’d want a DMARC “reject” policy, asking the largest email companies to drop all email that you haven’t signed yourself.
This will break anything you’ve set up internally that sends email on your behalf from marketing, recruiting, or sales vendors. Every vendor that uses your domain in email should be ready to handle DKIM by configuration, or it will break and another part of the company will come yelling at you.
So in most cases, companies will set a “quarantine” policy, a decent middle ground that sends unsigned mail to the spam folder. As shown above, PayPal asks for a more aggressive approach, rejecting entirely.
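For illustration, a DMARC policy is just a DNS TXT record. A hypothetical “quarantine” record might look like this (the domain and report address are placeholders):

```
_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```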
If you have user content that leads to websites off of your platform, you may see abuse leading people to websites spoofing your brand. You may have seen websites using a “Link Shim” which is useful here.
By warning users that they’re about to view untrusted content, it’s a very easy way to stop casual phishing from your own content. This is only relevant if the phishing attacks are happening on your platform through messaging products. MySpace and Facebook launched this early on when they started experiencing account takeover based spam. It was a critical early defense during my time at Facebook when we started seeing phishing attacks launched from wall posts.
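A link shim can be as simple as rewriting untrusted outbound URLs through a warning page. A rough sketch, where the shim endpoint, trusted hosts, and signing key are all hypothetical; the HMAC keeps the warning page from becoming an open redirect:

```python
import hashlib
import hmac
from urllib.parse import quote, urlparse

TRUSTED_HOSTS = {"example.com", "www.example.com"}  # hypothetical own domains
SECRET = b"link-shim-signing-key"                   # hypothetical signing key

def shim_url(url):
    """Pass through trusted links; route everything else via a warning page,
    signing the destination so only our own rewrites are honored."""
    host = urlparse(url).hostname or ""
    if host in TRUSTED_HOSTS:
        return url
    sig = hmac.new(SECRET, url.encode(), hashlib.sha256).hexdigest()[:16]
    return "https://example.com/l?u={}&h={}".format(quote(url, safe=""), sig)
```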
When a phishing site is discovered, you want the hosting provider to nuke it from the internet. The URL can also be blacklisted in browsers. Blacklisting is becoming the more important piece as the world is now mostly on browsers with blacklists. There was a time when browser blacklists were not so advantageous, so taking down the site via the hosting provider was more successful. We are crossing over to a point where takedowns may not even be worthwhile if your abuse issue is entirely browser centric.
There are several companies that will “take down” phishing hosts that are clearly malicious. Here is a list. You use their APIs or email to report URLs and coordinate “takedowns” with the ISP hosting the content.
These companies’ costs vary greatly, but my experience is that you should demand subscription pricing and avoid any per-takedown cost. Costs should scale only modestly with your phishing volume, as the vendor should automate most of the verification and submission to blacklists and security companies.
If you have a very small phishing issue, this should be very cheap, as their job won’t take much effort. The company should be incentivized toward your security, and “per takedown” pricing is not a good model. Familiarity with the facets of your product being abused is also important, instead of a focus on email-based attacks alone.
Your customer service reps will have the most exposure into Account Takeover, and should be your greatest ally. They will bring you the latest attacks and how much damage they cause. This is your source of prioritization as you build these functions into a program.
Give support direct access to takedowns, build tools used to review and investigate suspicious account history, the ability to suggest and blacklist IPs or netblocks (be careful!), and to discover other compromised users. Make them soldiers against these attacks, and loop that feedback into defense automation.
This sort of manual investigative loop can feed into “State Sponsored Attack” alerts that you may have heard about. Manual review of an attack could discover a motive, which may warrant an alert to the potential victim. The information potential victims report to you is useful in discovering motive.
For instance, if 100 potential victims all report to you that they belong to a protest group, you may have some useful information to warn the other 900 with.
Tor offers great protections for good people against traffic analysis, but is still considered a very special type of “bad IP” because of the hellish amount of abuse that comes through it if left untouched. Companies like Facebook still allow Tor traffic and go so far as having a certificate for its .onion address.
The tradeoff here is that companies allowing Tor turn on pretty much every available hurdle for this experience. Most companies concerned with account takeover run similarly. Even if they don’t explicitly treat Tor as “bad”, any learning algorithm will eventually do so.
When a credential dump occurs, large companies race to reset any passwords on their property. They have infrastructure to ingest millions of usernames and passwords, crawl through their own user data, and mail + reset passwords of those users with special instructions. Facebook in particular does this in a large way.
This has a great network effect of warning users and ultimately protecting the larger internet from credential re-use. The challenge is getting hold of the credential dumps, which usually find their way through hacker forums eventually.
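Once you have a dump in hand, the matching step is straightforward: verify each leaked plaintext password against your stored hashes and reset the hits. A sketch assuming a hypothetical PBKDF2-based password store:

```python
import hashlib
import hmac
import os

def hash_password(password, salt):
    # Hypothetical storage scheme: PBKDF2-HMAC-SHA256, 100k iterations
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Hypothetical user table: email -> (salt, password_hash)
salt = os.urandom(16)
users = {"victim@example.com": (salt, hash_password("hunter2", salt))}

def users_to_reset(dump):
    """Given (email, plaintext) pairs from a credential dump,
    return the accounts whose current password matches the leak."""
    hits = []
    for email, leaked in dump:
        record = users.get(email)
        if record and hmac.compare_digest(hash_password(leaked, record[0]), record[1]):
            hits.append(email)
    return hits
```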
Check out Scumblr from Netflix to bootstrap engineering around this:
THIRD PARTY AUTH
If your company is willing to sacrifice control of its own auth, you could lean on Facebook, Google, or Twitter auth and inherit the protections they give you. They all employ many of the above protections, since they’ve been fighting these sorts of attacks for around a decade. Attacks will still make their way through if you’re an attractive target for account takeover, but it’s a cheap and quick way to make a leap in security.
Writing an article like this is hard because it only reiterates how bad the password is. When you work on account takeover long enough, you will wish every individual carried a secure element holding a private key used for authentication.
Right now I am cheering on U2F, password managers, any hardware crypto for identity, and products similar to the Duo Mobile push. Even these solutions are incomplete. We face massive usability issues if we aspire to make passwords, and this article, irrelevant.
I really wish I knew how to solve that, but I don’t.
I’m a security guy, former Facebook, Coinbase, and currently an advisor and consultant for a handful of startups. Incident Response and security team building is generally my thing, but I’m mostly all over the place.