Combining ML Models to Detect Email Attacks

Jeshua Bratman
Abnormal Security Engineering Blog
9 min read · Nov 17, 2020

This article is a follow-up to one I wrote a year ago — Lessons from building AI to Stop Cyberattacks — in which I discussed the overall problem of detecting social engineering attacks using ML techniques and our general solution at Abnormal. This post aims to walk through the process we use at Abnormal to model various aspects of a given email and ultimately detect and block attacks.

As discussed in the previous post, sophisticated social engineering email attacks are on the rise and getting more advanced every day. They prey on the trust we put in our business tools and social networks, especially when a message appears to be from someone on our contact list (but is not) or, even more insidiously, when the attack actually comes from a contact whose account has been compromised. The FBI estimates that, over the past few years, more than 75% of cyberattacks have started with social engineering, usually through email.

Why is this a hard ML problem?

A needle in a haystack — The first challenge is that the base rate is very low. Advanced attacks are rare in comparison to the overall volume of legitimate email:

  • 1 in 100,000 emails is advanced spear-phishing
  • Less than 1 in 10,000,000 emails is advanced BEC (like invoice fraud) or lateral spear phishing (a compromised account phishing another employee)
  • When compared to spam, which accounts for 65 in every 100 emails, this is an extremely imbalanced classification problem, which raises all sorts of difficulties

Enormous amounts of data — At the same time, the data we have is large (many terabytes), messy, multi-modal, and difficult to collect and serve at low latency for a real-time system. For example, features that an ML system would want to evaluate include:

  • Text of the email
  • Metadata and headers
  • History of communication for the parties involved, geo-locations, IPs, etc.
  • Account sign-ins, mail filters, browsers used
  • Content of all attachments
  • Content of all links and the landing pages those links lead to
  • …and so much more

Turning all this data into useful features for a detection system is a huge challenge from both a data-engineering and an ML point of view.

Adversarial attackers — To make matters worse, attackers actively manipulate the data to make it hard on ML models, constantly improving their techniques and developing entirely new strategies.

The precision must be very high — to build a product that prevents email attacks, we must avoid false positives that disrupt legitimate business communication while still catching every attack. The false-positive rate needs to be as low as one in a million!
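To make these numbers concrete, here is a back-of-the-envelope calculation (the recall figure is an assumption for illustration):

```python
# Precision at a given false-positive rate, using the base rates above.
base_rate = 1e-5   # ~1 in 100,000 emails is advanced spear phishing
recall = 0.95      # assumed: fraction of attacks the system catches
fpr = 1e-6         # target: ~1 in a million legitimate emails flagged

true_positives = recall * base_rate
false_positives = fpr * (1 - base_rate)
precision = true_positives / (true_positives + false_positives)
print(f"precision = {precision:.1%}")  # -> 90.5%
```

Even at a one-in-a-million false-positive rate, roughly one flagged email in ten is a false alarm; relax the rate to one in 100,000 and about half of all flags are false.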

For more examples of the challenges that go into building ML to stop email attacks, see the discussion in Lessons from building AI to Stop Cyberattacks.

To solve this problem effectively, we must be deliberate about how we break the overall detection problem down into components, each of which can be solved carefully.

Example:

Let’s start with this hypothetical email attack and imagine how we could model various dimensions and how those models come together.

Subject: Reset your password
From: Microsoft Support <admin@fakemicrosoft.com>
Content: “Please click _here_ to reset the password to your account.”

This is a simple and prototypical phishing attack.

As with any well-crafted social engineering attack, it appears nearly identical to a legitimate message; in this case, a legitimate password-reset message from Microsoft. Because of this, modeling any single dimension of the message will be fruitless for classification purposes. Instead, we need to break the problem up into component sub-problems.

Thinking like the attacker

Our first step is always to put ourselves in the mind of the attacker. To do so, we break an attack down into what we call “attack facets”.

Attack Facets:

  1. Attack Goal — What is the attacker trying to accomplish? Steal money? Steal credentials? Etc.
  2. Impersonation Strategy — How is the attacker building credibility with the recipient? Are they impersonating someone? Are they sending from a compromised account?
  3. Impersonated Party — Who is being impersonated? A trusted brand? A known vendor? The CEO of a company?
  4. Payload Vector — How is the actual attack delivered? A link? An attachment?

If we break down the Microsoft password-reset example, we have the following (a small data-structure sketch follows the list):

  1. Attack goal: Steal a user's credentials
  2. Impersonation strategy: Impersonate a brand through a lookalike display name (Microsoft)
  3. Impersonated party: The official Microsoft brand
  4. Payload vector: A link to a fake login page.
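As a purely illustrative sketch (the field names are ours, not a production schema), this breakdown maps naturally onto a small data structure:

```python
from dataclasses import dataclass

# Illustrative only: a minimal schema for the four attack facets.
@dataclass(frozen=True)
class AttackFacets:
    goal: str                    # what the attacker wants
    impersonation_strategy: str  # how credibility is established
    impersonated_party: str      # who is being impersonated
    payload_vector: str          # how the attack is delivered

microsoft_phish = AttackFacets(
    goal="credential theft",
    impersonation_strategy="lookalike display name",
    impersonated_party="Microsoft (brand)",
    payload_vector="link to fake login page",
)
```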

Modeling the problem

Building ML models for a problem with such a low base rate and such strict precision requirements forces a high degree of diligence in modeling sub-problems and engineering features. We cannot rely on the magic of ML alone.

In the last section, we described a way to break an attack into components. We can use that same breakdown to help inspire the type of information we would like to model about an email in order to determine if it is an attack.

All these models rely on similar underlying techniques — specifically:

  • Behavior modeling: identifying abnormal behavior by modeling normal communication patterns and finding outliers from that
  • Content modeling: understanding the content of an email
  • Identity resolution: matching the identity of individuals and organizations referenced in an email (perhaps in an obfuscated way) against a database of these entities

Attack Goal

Identifying an attack goal requires modeling the content of a message. We must understand what is being said: Is the email asking the recipient to do anything? Does it have an urgent tone? And so forth. This model must capture not only malicious content but safe content as well, in order to differentiate the two.
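As a toy illustration of content modeling (a minimal sketch with an invented training set, not our production models), a bag-of-words classifier can learn to flag credential-request language:

```python
# Toy content model: does an email body request credentials?
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Please click here to reset the password to your account.",
    "Your mailbox is full, verify your credentials to continue.",
    "Attached is the agenda for Thursday's planning meeting.",
    "Thanks for the update, let's sync on the roadmap next week.",
]
train_labels = [1, 1, 0, 0]  # 1 = credential-request content

content_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression())
content_model.fit(train_texts, train_labels)

# The score is a feature for downstream models, not a verdict on its own.
score = content_model.predict_proba(["Reset your password here"])[0, 1]
```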

Impersonated Party

What does an impersonation look like? First of all, the email must appear to the recipient to come from someone they trust. We build identity models to match various parts of an email against known entities inside and outside an organization. For example, we may identify an employee impersonation by matching against the company's Active Directory. We may identify a brand impersonation by matching against the known patterns of brand-originating emails. We might identify a vendor impersonation by matching against our vendor database.
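A minimal identity-resolution sketch (illustrative entity lists, not our production matcher) might normalize a display name and fuzzy-match it against known entities:

```python
from difflib import get_close_matches

KNOWN_BRANDS = {"microsoft", "docusign", "dropbox"}
EMPLOYEES = {"jane doe", "john smith"}  # e.g. from Active Directory

def resolve_identity(display_name: str) -> str | None:
    """Return the known entity a display name most resembles, if any."""
    name = display_name.lower()
    # Token-level check catches "Microsoft Support", "Microsoft 365", etc.
    for token in name.split():
        if token in KNOWN_BRANDS:
            return token
    # Fuzzy matching catches typosquats like "Micros0ft" or "Jane D0e".
    match = get_close_matches(name, KNOWN_BRANDS | EMPLOYEES, n=1, cutoff=0.8)
    return match[0] if match else None

resolve_identity("Microsoft Support")  # -> "microsoft"
```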

Impersonation Strategy

An impersonation happens when an email is not from the entity it claims to be from. To detect this, we model normal behavior patterns and flag deviations from them: abnormal behavior between the recipient and the sender, or unusual sending patterns from the sender. In the simplest case, like the example above, we can simply note that Microsoft never sends from “fakemicrosoft.com”. In more difficult cases, like account takeover and vendor compromise, we must look at subtler clues, such as an unusual geo-location or IP address for the sender, or failed email authentication (for spoofs).
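In the same spirit, here is a heavily simplified behavior-modeling sketch (the history table is invented): score how unusual a sender is for a given recipient based on past communication:

```python
from collections import Counter

# Hypothetical history: (recipient, sender_domain) -> message count
history = Counter({
    ("alice@acme.com", "microsoft.com"): 42,
    ("alice@acme.com", "vendor.com"): 7,
})

def sender_rarity(recipient: str, sender_domain: str) -> float:
    """1.0 = never seen before; values near 0.0 = very familiar."""
    seen = history[(recipient, sender_domain)]
    return 1.0 / (1.0 + seen)

sender_rarity("alice@acme.com", "fakemicrosoft.com")  # -> 1.0 (brand new)
sender_rarity("alice@acme.com", "microsoft.com")      # -> ~0.02
```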

Attack Payload

For the payload, we must understand the content of attachments and links. Modeling these requires a combination of NLP models, computer-vision models to identify logos, URL models to identify suspicious links, and so forth.
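As a hint of what the URL side looks like, a few simple heuristics (purely illustrative; real URL models go much deeper) already separate “fakemicrosoft.com” from the real thing:

```python
from urllib.parse import urlparse
import re

def _owned_by(host: str, domain: str) -> bool:
    return host == domain or host.endswith("." + domain)

def url_signals(url: str) -> dict:
    host = urlparse(url).hostname or ""
    return {
        # IP-literal hosts are rarely used by legitimate senders.
        "ip_host": bool(re.fullmatch(r"[\d.]+", host)),
        # Lookalike domains often embed a brand name they don't own.
        "embeds_brand": any(b in host and not _owned_by(host, b + ".com")
                            for b in ("microsoft", "paypal")),
        "many_subdomains": host.count(".") >= 3,
    }

url_signals("http://login.fakemicrosoft.com/reset")
# -> {'ip_host': False, 'embeds_brand': True, 'many_subdomains': False}
```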

Modeling each of these dimensions gives our system an understanding of emails, particularly along the dimensions attackers might exploit to conduct social engineering attacks. The next step is actually detecting attacks.

Combining Models to Detect Attacks

Ultimately, we need to combine these sub-models to produce a classification result (for example, P(attack)). As in any ML problem, the features given to a classifier are crucial for good performance. The careful modeling described above gives us very high-bandwidth features, and we can combine these models in a few possible ways.

(1) One humongous classification model: Train a single classifier with all the inputs available to each sub-model. All the input features could be chosen based on the features that worked well within each sub-problem, but this final model combines everything and learns unique combinations and relationships.

(2) Extract features from sub-models and combine them to predict the target — there are three ways to go about this (a sketch of the combination step follows the list):

(2.a) Ensemble of Models-as-Features: Each sub-model is a feature, and its output depends on the type of model. For example, a content model might predict a vector of binary topic features.

(2.b) Ensemble of Classifiers: Build sub-classifiers that each predict some target and combine them using some kind of ensemble model or set of rules. For example, a content classifier would predict the probability of attack given the content alone.

(2.c) Embeddings: Each sub-model is trained to predict P(attack) as above, or some other supervised or unsupervised target, but rather than combining their predictions, we extract embeddings, for example by taking the penultimate layer of a neural net.
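As a rough sketch of the feature-combination step (feature names and training rows are invented for illustration), sub-model outputs become the input vector for a final classifier:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Each row: [content_score, sender_rarity, url_embeds_brand, identity_match]
X = np.array([
    [0.97, 1.00, 1.0, 1.0],  # phishy content, new sender, lookalike URL
    [0.90, 0.02, 0.0, 1.0],  # real password reset from Microsoft
    [0.05, 0.01, 0.0, 0.0],  # ordinary internal mail
    [0.60, 0.90, 0.0, 0.0],  # unusual sender, benign content
])
y = np.array([1, 0, 0, 0])   # 1 = attack

final_model = GradientBoostingClassifier().fit(X, y)
p_attack = final_model.predict_proba([[0.95, 1.0, 1.0, 1.0]])[0, 1]
```

Note how the second row is what makes this work: a genuine Microsoft password reset has phishy-looking content, and only the behavioral and identity features separate it from the attack.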

Each of the above approaches has advantages and disadvantages. Training one humongous model has the advantage of learning all the complex cross-dependencies, but it is harder to understand, harder to debug, and more prone to overfitting. It also requires all the data to be available in one place, unlike sub-models that can operate on disparate datasets.

The various methods of extracting features from sub-models also have tradeoffs. Training sub-classifiers is useful because they are very interpretable (for example, we could have a signal that represents the suspiciousness of the text content alone), but in some cases it is difficult to predict the attack target directly from a sub-domain of the data. For example, a rare communication pattern alone does not slice the space meaningfully enough to predict an attack; likewise, as discussed above, a pure content model cannot predict an attack without context about the communication pattern. The embeddings approach is powerful but finicky: it is important to vet your embeddings rather than trusting that they will work, and the approach is more prone to overfitting and accidental label leakage.

Most importantly with all these approaches, it is crucial to think deeply about all the data going into models and also the actual distribution of outputs. Blindly trusting in the black box of ML is rarely a good idea. Careful modeling and feature engineering are necessary, especially when it comes to the inputs to each of the sub-models.

Our solution at Abnormal

As a fast-growing startup, we started with a very small ML team that has grown quickly over the past year. With the growth of the team, we have also adapted our approach to modeling, feature engineering, and training our classifiers. At first, it was easiest to focus on one large model that combined features carefully engineered to solve sub-problems. However, as we've added more team members, it has become important to split the problem up into components that can be developed simultaneously.

Our current solution is a combination of all the above approaches depending on the particular sub-model. We still use a large monolithic model as one signal, but our best models use a combination of inputs including embeddings representing an aspect of an email and prediction values from sub-classifiers (for example a suspicious URL score).

Combining models also brings engineering challenges of its own: managing feature dependencies and versioning across sub-models is difficult.

Takeaways for solving other ML problems

  1. Deeply understand your domain
  2. Carefully engineer features and sub-models, don’t trust black box ML
  3. Solving many sub-problems and combining them for a classifier works well, but don’t be dogmatic. Sure, embeddings may be the purest solution, but if it’s simpler to just create a sub-classifier or good set of features, start with that.
  4. Breaking up a problem also allows scaling a team. If multiple ML engineers are working on a single problem, they must necessarily focus on separate components.
  5. Modeling a problem as a combination of subproblems also helps with explainability. It’s easier to debug a text model than a giant multi-modal neural net.

But, there’s a ton more to do!

We need to figure out a more general pattern for developing good embeddings, better ways of modeling sub-parts of the problem, better data platforms and feature-engineering tools, and so much more. Attacks are constantly evolving and our client base is ever-growing, leading to tons of new challenges every day. If these problems interest you, yes, we're hiring!
