Part 1: Why simulated phishing campaigns fail, and how to make sure yours doesn’t.

Developing people-centric phishing campaigns

Joe Giddens
People. Security.
7 min read · May 9, 2020


“Officer, can you help me, I think I’m in trouble.”

Before working in security, I was a detective. I worked in the Metropolitan Police Cybercrime Unit. Part of my role (the best part) was visiting businesses to speak with people about cybercrime.

I recall a conversation I had with a lady from a large, well-known bank. Let’s call the lady Gill (not her real name). Gill was a legal professional in her 50s.

I’d just finished a talk about online safety when Gill approached me. She looked agitated. She was hunched over. And she was clutching a Blackberry in her hands.

She thanked me for the talk and asked if it’d be okay to show me an email. This type of thing was common. I was always happy to oblige.

She handed me the phone and explained she’d been sent an email telling her there was a virus on her computer. The email said if Gill didn’t download an ‘antivirus update’, a non-conformance note would be sent to the HR department.

She’d received the email the day before. She hadn’t replied. She was too scared to tell anyone because she thought the virus was her fault. She’d tried to click the ‘update’ link several times but was sent to an error page.

Gill told me she hadn’t slept that night because she was worried what might happen.

A quick WHOIS lookup suggested the email was sent from a simulated phishing provider, presumably a vendor to the bank. I told Gill I thought the email was a training exercise, and advised her to report or forward it to the security department.

The relief on her face was something I’ll never forget.

The conversation with Gill got me thinking. I wondered: what type of person would send an email like that as a training exercise?

I then thought about the influence security professionals have. They have the ability to reach out to people. They can cause panic, elation, wonder or agony.

It’s serious power. If misused, it causes well-meaning, hard-working people like Gill to lie awake at night worrying. And that ain’t cool.

In a previous article on why security behaviour change campaigns fail, I cited the example of phishing. I touched on how it might be better used to understand the emotional drivers that cause people to click.

I wanted to take a more detailed look at the subject, explore the reasons why most simulated phishing campaigns fail, and suggest a more human-centred approach.

I hope you find it useful.

Why campaigns fail

Simulated phishing campaigns fail because they’re run the wrong way. This is usually because the person in charge doesn’t stop to think about the effects (both positive and negative) their actions may have.

I would consider a campaign to have failed if it produces any of the following outcomes:

  • Increased resentment or mistrust of the security team
  • Reduced engagement in cyber security across the organisation
  • Wasted work time (as people deliberate over each and every email they receive)

Does this mean we should stop running simulated phishing campaigns? Many think so, favouring a 100% focus on technical countermeasures. Whilst I agree technical controls should form part of any layered defence, I disagree about leaving people out entirely.

When conducted thoughtfully, with clear goals in mind, simulated phishing campaigns provide overwhelming benefits to your organisation.

So, before we look at what we should be doing, let’s take some time to understand why most campaigns fail.

1) They rely too much (or only) on click rates

Click rates. The go-to measure of “success” for many security awareness professionals.

Click rates are used to gauge (though it’s closer to guessing) how likely people are to click on phishing emails.

They’re seen as useful for two reasons:

  • First, they provide some indication of organisational vulnerability. The thought being, “the lower my click rate percentage, the less vulnerable we are.” This is kind-of-right. But it’s lazy. Phishing emails can (and should) be used to measure much more than indicative vulnerability.
  • Second, they show the “effectiveness” of user awareness training and/or other security activities. You know, run a phishing campaign, provide training, run another campaign, hopefully see improvement.

So click rates have some use. Issues arise when they’re relied on too much, or when they are the only thing measured. This happens all too often.

The reason this is a bad thing to do is because anyone can be phished. Anyone.

It just takes the right email, at the right time, in the right situation. Click rates do not account for these outliers.

Also, they don’t tell us why people click. To really understand phishing vulnerability, we should be asking “why?”
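To make the limitation concrete, here’s a minimal sketch of how a campaign’s click rate might be computed alongside a report rate. The data and field names are entirely hypothetical (not from any particular phishing platform), but the point stands: a report rate is just as easy to measure, and it tells you far more about behaviour.

```python
# Hypothetical campaign results: one record per recipient.
# Field names are illustrative, not from any real platform's export.
results = [
    {"user": "a", "clicked": True,  "reported": False},
    {"user": "b", "clicked": False, "reported": True},
    {"user": "c", "clicked": False, "reported": False},
    {"user": "d", "clicked": True,  "reported": True},  # clicked, then reported it
]

total = len(results)
click_rate = sum(r["clicked"] for r in results) / total
report_rate = sum(r["reported"] for r in results) / total

print(f"Click rate:  {click_rate:.0%}")   # 50%
print(f"Report rate: {report_rate:.0%}")  # 50%
```

Notice that user “d” clicked *and* reported — a click rate alone would count them as a pure failure, when reporting after a mistake is exactly the behaviour you want to encourage.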

2) They’re run in secret

How would you feel if you learned your organisation had put you under surveillance for two months, just to see “how you were doing?”

Angry? Demotivated? Let down? That’s how people feel when you run phishing campaigns without telling them.

“I get that Joe. But we want a true reflection of our risk. If we tell people, they’ll just be on the lookout!”

Wrong.

Is it not better for people to think they’re always being phished? Because they are, by real criminals. This is the mindset that best reflects reality.

By not telling people about simulated phishing, you lull them into a false sense of security. They’ll be less likely to be on the lookout, increasing your real risk.

Secrecy creates mistrust. You and your team will become the enemy. People will engage with you less. They won’t report security incidents when they happen, or come to you for advice. You might even find yourself invited to the pub less often. That’s just sad for everyone.

3) They are too realistic

This one seems a little counterintuitive, but bear with me.

Ultra-realistic phishing templates only prove one thing. People click on emails.

We already know this.

If someone is expecting an Amazon delivery (a lot of people are), and you send them a phishing email looking like it’s from Amazon, they’ll click on it. All you’ve proved is you’re good at designing phishing emails.

Phishing is not an exercise to try and catch people out. If that’s your approach, you shouldn’t be in this profession.

Ultra-realistic emails can serve a purpose. They should be reserved for your organisation’s security elites. For everyone else, obvious-to-spot links and “virus.exe” attachments are better at teaching people to recognise signs of phishing.

Over time these actions will develop into habit. Only then should you think about increasing the difficulty.

4) They’re too short

Remember early antivirus programs? They were primitive. Each time you wanted to run a scan, you had to press a button. Nowadays things are different. Modern endpoint protection programs are always on, working away in the background, collecting data, learning, adapting.

Why should simulated phishing campaigns be any different?

If a campaign is too short or doesn’t reach everyone in the organisation, you’ll never collect enough meaningful data. If you don’t run campaigns frequently enough, you won’t be able to adapt to shifting criminal tactics.

In this respect, one-and-done campaigns should be considered a failure. The planning and preparation time outweighs the overall usefulness. This is time you and your team could be spending doing more useful things.

We need to get comfortable always phishing our people. Criminals are.

5) They are used as a tool to assign training

The gun-to-the-head approach to behaviour change.

Train, test, analyse. Train, test, analyse.

Most people in our industry now realise this is a bad idea. It causes people to associate “failing” a phishing test with training. It makes training feel like punishment.

Punishment (or the sensation of punishment) does nothing to increase people’s ability to recognise or report future attacks.

Training should not feel like punishment. Training should spark excitement as people learn new information and skills. If training is mandated as a result of accidentally clicking on a phishing email (sent by some muppet in security), its effectiveness will be reduced.

The episode with Gill is an example of a short-sighted security control.

It had been drawn up with one thing in mind — “How will this benefit our organisation?”

No one had thought about the impact it might have on the workforce. No one had considered it might do more harm than good.

No one had thought about Gill.

Part 2 will explore a new approach to simulated phishing. The approach is more methodical, more people-centric, and it benefits both the organisation and its users.

At CybSafe, we obsess about this stuff. We are determined to make a difference. You can read more about our work here.

If you would like to submit an article for publication, please get in touch. The best way to reach me is LinkedIn.
