Photo by Iuliyan Metodiev from Pexels

All too human

Peter Thomas
TNK2
13 min read · Jul 21, 2021


Peter Thomas, co-founder of Upling, writing with TNK2 co-founders Elston DSouza, Alix Kwak and Principal Researcher Jay Jeong

The difficult challenges of cybersecurity are human, not technological.

Scrolling Facebook one boring evening, Sarah finally succumbs to one of those “We’ll tell you your future!” type quizzes.

She’s seen loads of them and usually ignores them as being, quite frankly, a bit stupid. But this time, lured by the prospect of finding “Your ideal life partner!” she fills it in.

She happily reveals her pet's name, the name of the first street where she lived, the type of car she drives, her age, the name of her best friend—all great fun and, she thinks, totally harmless.

But of course, like tens of thousands of others, Sarah becomes a victim of a cyber scam.

The scammers harvest this data, merge it with information about her exposed in a previous data breach, and use sophisticated algorithms to deduce the username and password for her PayPal account, successfully spending a thousand dollars before she figures out what’s happened.

Sarah’s loss was relatively small — and maybe she gets the money refunded anyhow, but she will feel a sense of shame and embarrassment for a while.

It could have been worse, she tells herself.

Sarah, and the quiz she takes, are fictional — but the scenario is all too common.

Sarah’s story is just one small example of how cybercriminals exploit our vulnerabilities — to the tune of $851 million in 2020, of which investment scams accounted for the biggest losses at $328 million.

And that’s just individuals.

Australia’s small businesses lost a staggering $300 million to various forms of cybercrime in 2020, with 62% of small businesses reporting a cybersecurity incident.

It’s tempting to think that cybercrime is all high-tech — the equivalent of a Mission Impossible-style bank vault heist.

Of course, many cyber incidents are coordinated, professional and sophisticated high-tech attacks that target businesses, or even governments, often demanding a ransom to prevent the release of commercially sensitive or customer data, or to get systems working again. Recent examples include the attack that shut down the distribution of gas along much of the East Coast of the United States and the slew of cyberattacks in the Australian education sector.

But the reality is much more mundane.

All of us, at home or at work, are like Sarah: occasionally bored, sometimes distracted, often forgetful, and apt to overlook that there are people out there who will do us harm if we are not careful. We don’t change our passwords when we are prompted, don’t follow the updated guidance on online risks, don’t back up our machines securely — and a dozen other things we should do but don’t get around to doing.

It’s not the cybercriminal equivalent of Tom Cruise dangling from the ceiling that’s out to get us: it’s ourselves and our vulnerabilities.

It’s estimated that almost 40% of cyber incidents are caused by what are called ‘human factors’ — what we humans do every day — like Sarah, seeing the quiz as an innocent distraction and not an attempt to harvest personal data. What makes us vulnerable is that we are essentially trusting and mostly go along with what we are told. And that applies when we are at work just as much as it does when we are sitting on the sofa.

In the world of cybersecurity, it turns out that’s not the best way to be.

Years of research have documented the details of cyber incidents and even some of the reasons why we might fall for scams, misinformation or attempts to harvest our data and misuse it.

So for those almost 40% of incidents that are due to human factors, what can we do about it?

Despite our best efforts, the number of cybersecurity incidents continues to rise. And that is in spite of considerable investment in awareness campaigns by governments, financial institutions and employers, all of whom encourage us to pay more attention to the emails we click on, the passwords we use or where we store our data.

We believe that one of the problems may be that the advice we get — which on the whole, is sensible, well-intentioned and based on the latest knowledge about what the current risks are — is too general. It doesn’t match our individual situation — what we do, how we do it, and our needs.

If that’s the case, perhaps a different approach is needed. What if there were a way to figure out how vulnerable people are? And not in general, but for each individual?

And based on that knowledge, could we equip each individual with new skills and knowledge? Can we make people — with all their inattention, fallible memories and tendency to trust — less vulnerable?

That’s just what we are doing.

Humans are unusual creatures.

We are each unique: our experiences, our upbringing, our culture, what we pay attention to — and our motivations, desires and wishes — all add up to what we are—almost eight billion individuals.

But of course, we do have some commonalities.

While astrology and its 12 star signs can never capture the complexity of any one individual (although it’s lots of fun looking up our horoscope), and personality tests can only ever be very broad characterisations of who we are (perhaps the best known being the Myers-Briggs Type Indicator, or MBTI, with its 16 types), we do fall into some broad categories.

If we look at consumer behaviour, for example, marketers have realised that we fall into categories like ‘risk averse’ or ‘price conscious’ when we are thinking about buying things. (Of course, we may be both risk-averse and price-conscious at different times, just as we might move through stages of the customer journey — awareness, consideration and purchase — differently at different times).

What really matters is whether the categories are actually useful. Knowing that you are a Virgo is fun but isn’t remotely useful; being an ENTJ doesn’t really tell us anything helpful when we negotiate our next salary raise; and we seem not to be so price-conscious when it’s that Hermès bag that we just have to have. But perhaps there is a way to combine some of the broad categories we fall into with a more detailed understanding of who we are as individuals. Now that would be really useful.

We are interested in helping people to be less vulnerable online. So, can we figure out how to create a detailed understanding of each individual — who they are and what they do combined with some broad categories that are actually valuable in understanding what makes them more vulnerable? Can we go beyond the ‘one-size-fits-all’ general advice about how to stay safe online?

It’s not a simple problem.

For example, if I took a look at your iPhone and you at mine, we would have a few similar apps (maybe Snap, Messenger, WhatsApp), but we would have things that were different for each of us. And if we spent an hour or so watching each other use our laptops, we might see really different things happening. Maybe you have 30 windows open at once in three browsers, whereas I’m a one-window-in-Firefox kind of person.

Maybe you use the same password you have used for two years for everything (not, generally, a great idea); whereas I change passwords for everything each week or use a password manager. Maybe you have a job that involves you travelling, so you use your iPhone to do everything; I work from home and just use my phone for messages. Maybe you need to use 30 apps every day, but I only use one or two. Maybe my job involves lots of coordinating and facilitating, but you are more of an individual contributor.

And maybe we differ on lots of other attributes, too. Perhaps I’m more conscientious — more responsible, organized, hard-working, goal-directed and always observe rules — whereas you are more free-flowing, have lots of things on the go, maybe observe some rules but not others depending on what you think needs to be achieved.

So, to really address the human factors in cybersecurity — and get to the real ‘human’ in human factors — we need to define much more precisely what my vulnerabilities are as opposed to yours; and so provide personalised training, education and advice to both of us on how to stay safe.

We’ve developed something called the Behavioural Assessment Engine. It’s a pretty complex — and we think quite clever — set of algorithms that get at the human in human factors.

By figuring out what you use, what you do and how you do it, and understanding how you approach things, we can find out the specific ways you might be vulnerable. And while we do use some broad categories — similar to (but not the same as) those you might see in what are called The Big Five personality traits (openness, conscientiousness, extraversion, agreeableness and neuroticism) — it’s really the details of what you do as an individual that is important.

By applying these algorithms, we figure out what we call a dType — a set of cybersecurity factors that are unique to each person. The dTypes are Collaborator, Explorer, Communicator, Orchestrator and Creator. We find out your dType by asking you a set of specially developed questions.

But because people are not one-dimensional — and because we change as what we do changes — we say that you can be mostly Collaborator and a little bit Explorer, or Communicator and Orchestrator in equal measure.

A dType: a set of cybersecurity factors that are unique to each person.
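To make the idea of a mixed dType concrete, here is a minimal sketch in Python. It is purely illustrative: TNK2’s actual algorithms are not public, and the helper names (`d_type_profile`, `describe`) are our own invention. It shows one plausible way to represent a person as normalised weights over the five types, so that “mostly Collaborator, a little Explorer” becomes a simple data structure.

```python
# Illustrative sketch only: the real Behavioural Assessment Engine is
# proprietary. This shows one plausible representation of a mixed dType.

D_TYPES = ["Collaborator", "Explorer", "Communicator", "Orchestrator", "Creator"]

def d_type_profile(raw_scores):
    """Turn raw questionnaire scores into one weight per dType (summing to 1)."""
    total = sum(raw_scores.values())
    if total == 0:
        # No signal at all: treat every dType as equally likely.
        return {t: 1 / len(D_TYPES) for t in D_TYPES}
    return {t: raw_scores.get(t, 0) / total for t in D_TYPES}

def describe(profile):
    """Name the dominant dType, noting a secondary one if it is close behind."""
    ranked = sorted(profile.items(), key=lambda kv: kv[1], reverse=True)
    (first, w1), (second, w2) = ranked[0], ranked[1]
    if w1 - w2 < 0.1:
        return f"{first} and {second} in equal measure"
    return f"mostly {first}, with some {second}"
```

For example, questionnaire scores of 8 for Collaborator and 2 for Explorer would come out as “mostly Collaborator, with some Explorer”, while equal scores for Communicator and Orchestrator would be described as in equal measure.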

We then enhance your dType by doing some more sophisticated profiling using a set of simple online simulations that look at how you understand and approach a set of tasks. This then becomes your Digital Profile. The whole process takes minutes.

We put all of this together to identify not just your personal vulnerabilities but how vulnerable you are, and having done that, we can guide you to specific and personalised training and education resources designed to help you.

Of course, as you complete the training and as you understand more, you become less vulnerable, and so your Digital Profile changes because you become more knowledgeable, and you change your behaviour.

How do we know it works?

One reason is that we have analysed huge amounts of data on the human factors of cybersecurity. While the details are our secret sauce, the Behavioural Assessment Engine embodies a great deal of research — including some we have done ourselves on cybersecurity, human factors and behavioural science — and the engine itself learns about correlations in the data using machine learning (ML), which mines patterns from it. The more data we collect, the more sophisticated the engine becomes.
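At its simplest, “learning correlations from data” means measuring which behaviours tend to co-occur with incidents. The sketch below is hypothetical (the engine itself is proprietary, and the toy data and feature names are invented for illustration), computing a plain Pearson correlation between one behaviour and incident history across a handful of users:

```python
# Hypothetical sketch: measuring how strongly a behaviour (password reuse)
# correlates with having had a security incident, across a toy user sample.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One row per user: 1 = reuses passwords / 1 = had an incident (invented data).
reuses_password = [1, 1, 0, 0, 1, 0, 1, 0]
had_incident    = [1, 1, 0, 0, 1, 0, 0, 0]

risk_signal = pearson(reuses_password, had_incident)
```

A real engine would of course work over many features and far more data, and use models well beyond a single correlation, but the principle is the same: the patterns come from the data, and more data sharpens them.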

You may be asking, so what?

If I just change my password all the time like everyone says (and don’t use my cat’s name or my birthday as my password), it’s fine. Don’t click on anything, lock away my laptop, and don’t tell anyone anything and it’s all good.

But life is now way too complicated for this to be the answer.

Remember Sarah and the quiz scam? The sophistication of phishing attacks (those intended to get you to reveal personal data) is increasing all the time. The hackers behind them are experts in the science of persuasion — they know what makes us tick and are determined to exploit it. Some of these scams are incredibly convincing, and those behind them already seem to know a lot about you — because they do, and use it to gain your trust.

Or think about all the possible vulnerabilities that you might be exposed to at work.

In many organisations there are hundreds, if not thousands, of financial transactions, to take just one example: payments to suppliers, payroll, superannuation, foreign payments, refunds. Each of these could be a possible vulnerability. And while a lot can be achieved by implementing strict new processes and controls, the scammers are determined and resourceful in finding ways around them by gaining your trust and exploiting the human factor in cybersecurity. And it’s not just financial transactions, but the many things we do each and every day: responding to queries from possible customers, authenticating ourselves to new systems or downloading content.

Also, there are now lots more ways we interact online — email, messaging, SMS, voice, collaboration tools, smart devices — and we do it all day, every day. We are bombarded with information from every direction.

For example, look at this seemingly harmless message offering something you see every day — a voucher.

Fake shopping voucher claiming to be from Australian retailer Woolworths.

The message contains lots of errors, but it looks just enough like a Woolworths voucher to be plausible, and it seems to be something that Woolworths might legitimately do.

But of course, it aims to collect your personal data. A moment’s inattention while you are shopping, watching TV or just being distracted, and you’ve clicked. It’s not your fault: it was designed to get you to do that. And while ‘don’t click on anything that looks suspicious’ is good advice, suspicious things don’t come with a big hashtag saying #lookssuspicious.

We’re building the Behavioural Assessment Engine to understand the specific ways people are vulnerable, and we are doing it for lots of different industries and people.

For example, we think that our approach is especially useful for young people.

Children are increasingly living their lives online. Being able to participate fully in society now requires high levels of digital literacy — whether that’s for learning, consuming news, forming and maintaining social relationships or preparing for the world of work. Children are going online earlier and earlier and, despite the efforts of governments, schools and teachers, most students in primary and secondary education are ill-equipped to use digital technologies safely.

Many of the digital literacy education programmes for children focus only on fear — often recommending that apps are banned or avoided — or offer generalised ‘spot a scam’ advice.

It doesn't work.

We believe that a more intelligent and evidence-based approach is needed that helps students develop strong practical digital literacy skills that will help them make the best of — while managing the hazards of — life online.

That’s why we are developing Upling, a digital literacy platform that uses our Behavioural Assessment Engine to deliver personalised cybersecurity training for Australian K12 students. Our aim is that our children become confident young citizens who are equipped and ready for whatever online life throws at them.

Another group is those who run small to medium enterprises (SMEs).

There are over 5 million Australian small business owners who are struggling to keep themselves safe. They are increasingly victims of cybersecurity incidents that threaten their businesses. They want to change, but they don’t know how. And just like Australian students, small business owners need personalised training and awareness, not expensive security software.

So we’re using our Behavioural Assessment Engine in our CyEd platform to help keep small business owners safe. Although the data we use is unique to SMEs — very different from that used in Upling, for example — the approach is just the same.

Of course, we can also apply the Behavioural Assessment Engine in other ways: for example, to help employees access targeted training before they are allowed to access certain systems; to help CISOs understand their organisations’ vulnerabilities; or to help insurers determine risk levels before they decide to set a premium for cyber-incident insurance.

Upling, CyEd and our other applications are all in closed beta right now.

That means we are testing them with our first customers, learning from those customers and changing as we learn. But both CyEd and Upling will appear soon for you, your school or organisation to try out — and understand what your digital strengths and vulnerabilities really are.

These are just some of the applications. But all of them are really about one thing: can we understand better the human factors of cybersecurity?

We believe we can.

Come back to the TNK2 publication to read more stories.

Photo by Andrea Piacquadio from Pexels

Class is not dismissed

We take a look at the education sector and particularly K12 schools. In the light of several recent high-profile security breaches, we look at why schools are so vulnerable and what we might do to change that. By Peter Thomas for TNK2.

Photo by Soumil Kumar from Pexels

Your money or your data: inside ransomware

Expensive, disruptive, and possibly disastrous. We look inside the disturbing, and rapidly-growing, ransomware phenomenon. By Alix Kwak for TNK2.

Photo by Tima Miroshnichenko from Pexels

The state of cybersecurity

We know that cyberattacks are on the rise. Cybercrime is up 600%. We go behind the numbers to look at trends and patterns — and the rise of cybercrime-as-a-service. By Elston DSouza, Alix Kwak, Peter Thomas and Jay Jeong for TNK2.

Photo by Ketut Subiyanto from Pexels

To err is human

We look at the science of errors, how it relates to cybersecurity, and how unintentional actions make us less secure — from downloading a malware-infected attachment to failing to use a strong password. By Jay Jeong for TNK2.

And coming soon:

Photo by energepic.com from Pexels

Why small business is big business for the cybercriminals

A cybersecurity incident that impacts a small business can be devastating. Why are small businesses vulnerable? We look at some of the reasons — and what we might do about them. By Alix Kwak for TNK2.

You can learn more about our work, and read more about our approach to the human factors of cybersecurity, by visiting tnk2.com.au.


Inaugural director of FORWARD at RMIT University | Strategic advisor, QV Systems | Global Education Strategist, Conversation Design Institute | CEO, THEORICA.