Photo by Ketut Subiyanto from Pexels

To err is human

Alix Kwak · Published in TNK2 · 12 min read · Oct 5, 2021


We look at the science of errors, how it relates to cybersecurity and how our errors make us less secure — from downloading a malware-infected attachment to failing to use a strong password. By TNK2 co-founders Alix Kwak, Jay Jeong and Elston DSouza and Upling co-founder Peter Thomas.

"To err is human…" wrote Alexander Pope (1688–1744) in An Essay on Criticism.

Making mistakes is part of the human condition, observes Pope. As long as there have been ways to do something, there have been ways not to do it properly.

And like most things human, there's a whole science behind it.

Human error has been studied for over a century. Perhaps the best-known example is the 'time and motion' studies used to streamline industrial processes — an approach later used to study the work of surgeons.

In the mid 20th century, the term 'human factors' arrived. This is the study of how people and technology interact and focuses on finding ways to make technology easier to use and so reduce the possibility and impact of human error.

One well-known example of the impact of human error is the 1979 Three Mile Island accident, in which a partial meltdown of a nuclear reactor caused a radiation leak. As the President's Commission investigating the accident concluded, it was caused by "people-related problems and not equipment problems".

Another is the $4 million worth of damage caused to the Presidential Boeing 747–200 Air Force One when a mechanic mistakenly gave contaminated parts to a second mechanic, who noticed the problem, and a third then tried to clean them using an unauthorised procedure. Two of the factors in this incident were, as the US Air Force said, "a failure to observe explicit warnings concerning cleanliness" and a failure to "absorb or retain […] training and […] to apply cleanliness procedures."

Studies of human error have created models such as HEART (Human Error Assessment and Reduction Technique), THERP (Technique for Human Error Rate Prediction), and TESEO (Tecnica Empirica Stima Errori Operatori, or 'The Empirical Technique to Estimate the Operator's Error'). All of these try to get at the why, what and how of the errors we make.

So what is an error?

Although there are lots of different interpretations of what 'error' means, they all share a common thread: an action that has negative consequences or fails to achieve the desired outcome. For example:

"Error means that something has been done which was: not intended by the actor; not desired by a set of rules or an external observer; or that led the task or system outside its acceptable limits." (John Senders and Neville Moray)

"Either an action that is not intended or desired by the human or a failure on the part of the human to perform a prescribed action within specified limits of accuracy, sequence, or time that fails to produce the expected result and has led or has the potential to lead to an unwanted consequence." (NASA)

Research has suggested that there are several different types of errors.

These include memory lapses (where someone simply forgets to do an action), slips (where someone's attention fails while performing a routine, well-practised action), mistakes (where someone doesn't have enough knowledge to complete an action, or picks the wrong rule or fails to follow a rule), and violations (where a shortcut is taken based on overconfidence and leads to a poor outcome).

In 2018, a small leak was detected in a Soyuz transport capsule docked to the International Space Station. The 2mm hole, which resulted in temporary depressurisation, might have been caused, as Dmitry Rogozin, head of the Russian space agency, said, "by a faltering hand … it is a technological error by a specialist. It was done by a human hand — there are traces of a drill sliding along the surface." At the time, the Soyuz was the only spacecraft able to take astronauts to the space station — and bring them home again.

And, of course, sometimes an error might come from not doing something. Analysis of the Boeing 737 Max accidents eventually revealed that they resulted from a series of errors in design and production that led to pilots making errors in recovering from a dangerous dive caused by a new software system.

Whether it's in the context of spaceflight, medicine, driving, commercial aviation or industrial processes, understanding human error is both fascinating and — as the Three Mile Island, Boeing 737 Max and Soyuz examples show — incredibly important.

One of the places where errors are becoming more important is on the internet.

It might be opening up an unknown file or email attachment, re-using the same username and password across lots of sites or ignoring that software update reminder that has been popping up every time you restart your computer. All of these are about human error.

Accenture says in their Ninth Annual Cost of Cybercrime Study:

Cyber criminals are adapting their attack methods. They are targeting the human layer — the weakest link in cyber defense — through increased ransomware and phishing and social engineering attacks as a path to entry.


Research suggests that as many as 95% of successful cyberattacks involve human error (88% according to a study reported by Verdict, 85% according to Verizon, 95% according to Cybint), so it is worth understanding the kinds of errors we make online and finding ways to lessen their impact.

So what kinds of human errors — which we have defined as "an action that has negative consequences or fails to achieve the desired outcome" — are involved in making us much more vulnerable online? There are three types.

The first is skills-based errors. We saw two types earlier — slips and memory lapses. These often happen when people are doing routine things, especially when their attention is diverted. In these skills-based errors, people usually have the right knowledge to do a task — and may have done it many times before. Even experienced people make skills-based errors, and sometimes experts make these errors because they can do these activities almost unconsciously, making them more susceptible to mind wandering.

An example of a skills-based error, in this case a memory lapse, might be someone forgetting to back up the data on their computer. They know they should do it and know how to do it (because they have done it before), but for one of a thousand reasons (they need to get home early, forgot when they last did it, had lots of emails to respond to) they don't.

An example of a slip, another skills-based error, might be a support desk member resetting a user's password during a phone call but missing a step to send a 2FA notification as an identity check before doing the reset. Other slips happen when people fail to apply what might be called security common sense — doing things that they know better than to do, such as double-clicking on an odd-looking .exe file in an email or sharing their password with someone else.
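One way to reduce this kind of slip is to design the process so the skipped step is impossible rather than merely expected. The sketch below is purely illustrative — the function names and workflow are assumptions, not any real helpdesk system's API — but it shows the idea: the reset cannot complete unless the identity check has happened.

```python
# Hypothetical sketch: designing a password-reset workflow so the 2FA
# identity check cannot be skipped. All names here are illustrative.

import secrets

def send_2fa_challenge(username):
    """Generate a one-time code; a real system would deliver it to the
    user's registered device rather than return it directly."""
    return f"{secrets.randbelow(1_000_000):06d}"

def reset_password(username, new_password, supplied_code, expected_code):
    """Reset only succeeds if the caller supplies the matching 2FA code,
    so 'forgetting' the identity check is designed out of the process."""
    if supplied_code != expected_code:
        raise PermissionError("2FA verification failed: password not reset")
    return f"Password for {username} has been reset"

code = send_2fa_challenge("alice")
print(reset_password("alice", "correct horse battery staple", code, code))
```

The design choice matters more than the code: by making the verified code a required input to the reset step, a momentary lapse in attention can no longer bypass the check.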

The second type of human error is mistakes. Often people with less experience make mistakes, and they can be of two kinds: knowledge-based and rule-based.

Many knowledge-based mistakes are the result of someone doing an activity by trial and error. Lacking any actual knowledge, they might give it a try and see what happens. An example is clicking on a link without knowing what will happen, or clicking straight through a browser security warning without understanding what it is telling them.

The other kind of mistake is a rule-based mistake. Here, people disregard a rule and experience a negative consequence from doing so. For example, an office environment may have a rule to prevent unauthorised access that says always log out of your machine when leaving it unattended for an extended period. (Of course, rules apply differently in different contexts: you probably don't need to obey the log out of your machine rule at home — although those who are extremely security conscious might prefer to do just that).

One of the most common rule-based mistakes is failing to change passwords or re-using the same password across many sites. Many systems used to enforce regular password changes (Microsoft now calls enforced changes an "ancient and obsolete mitigation of very low value"), but the practice is falling out of favour in preference for getting people to create stronger, more complex passwords.

Ignoring prompts to change your password, or using the same password on different sites, can have significant negative consequences: the recent fuel shortage on the US east coast (and a $4.4 million ransom payment) in the Colonial Pipeline cyberattack may have been the result of a Colonial employee re-using a password from another account that had previously been hacked.

As we have said before, hackers are experts in the science of persuasion — they know what makes us tick and are determined to exploit it.

The term for this is social engineering — intentionally manipulating people to provide confidential information, allow access to accounts or services or send money. Hackers using social engineering rely on the fact that we make errors — and encourage us to make them.

Here's an example you may have seen. A 17-year-old from Florida was charged over the 2020 Twitter hack that gave him access to the Twitter accounts of Joe Biden, Bill Gates, Elon Musk, Barack Obama, Apple and other prominent accounts in a cryptocurrency scam.

Former US President Barack Obama's hacked Twitter account.

Twitter commented:

The attack on July 15, 2020, targeted a small number of employees through a phone spear phishing attack. This attack relied on a significant and concerted attempt to mislead certain employees and exploit human vulnerabilities to gain access to our internal systems.

Most organisations have cybersecurity education programmes that aim to lessen the chance of error through education and training. The market for cybersecurity education is expected to reach $10 billion by 2027.

But however much we train and educate, there's no absolute guarantee of eliminating error — and, of course, some training is intended just to comply with regulatory requirements.

As we said in our previous story, new types of cybercrime are emerging, and criminals are becoming more and more effective at finding new ways to manipulate us. In a brazen move, fraudsters impersonated the brand KnowBe4 — a cybersecurity company that offers security awareness training for organisations — to gain recipients' trust, along with their Microsoft Outlook credentials and other personally identifiable information.

A great deal of cybersecurity training and education is likely to be ineffective because it's not a lack of knowledge that makes us vulnerable but our habits and behaviours. Those habits and behaviours are rooted in the kinds of skills-based errors and mistakes we saw earlier.

Think about the issue of trying to get people to create stronger passwords.

Although 91% of people understand the risk of using the same (or a variation of the same) password, and understand the need for strong passwords, research by the UK's National Cyber Security Centre (NCSC) found that 23.2 million accounts globally used simple passwords such as 123456, 123456789, 1111111 or qwerty, or included the word 'password'. People also routinely use the same password for work and personal accounts, write passwords down on paper and don't change their passwords — often for years.

None of this is about a lack of understanding — after all, we are told over and over again to change our passwords frequently and not to use dictionary words (or the name of our cat, or our mother's maiden name, or our year of birth). And yet we make the same kind of mistake over and over again — until we are finally hacked.
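One practical way systems compensate for this habit is to screen new passwords against known weak choices at the point of creation. The sketch below is a minimal illustration using the weak passwords mentioned above; the list, the minimum-length rule and the function name are assumptions for the example — a real deployment would check candidates against a large corpus of breached passwords instead.

```python
# Illustrative sketch: rejecting the common weak passwords discussed
# above. The tiny list and length rule are assumptions; production
# systems screen against large breach corpora.

COMMON_WEAK = {"123456", "123456789", "1111111", "qwerty"}

def is_weak(password):
    lowered = password.lower()
    return (
        lowered in COMMON_WEAK
        or "password" in lowered   # includes the word 'password'
        or len(password) < 8       # assumed minimum-length rule
    )

print(is_weak("123456"))                        # True
print(is_weak("Password1!"))                    # True
print(is_weak("correct horse battery staple"))  # False
```

Rejecting weak choices up front is a form of nudge: rather than relying on people to remember the advice, the system makes the insecure option unavailable.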

So what can we do if the education and training we have access to doesn't work? Given the types of errors we make, is there a better and more effective way to help people be safer online?

Here are our observations based on the work we and others have done.

One thing to do is to minimise the cognitive load associated with cybersecurity education and training.

Often, cybersecurity training is long, repetitive and one-size-fits-all. We click through and see the same advice we've seen before, making it less likely we will pay attention — and less likely that it will change what we do. The alternative is education and training that is personalised, based on an assessment of our current cybersecurity strengths and weaknesses. Personalised training is more likely to be effective because it addresses our individual, specific vulnerabilities, which first need to be determined through diagnosis and profiling.

Another is to focus not just on specific risks but on instilling a positive cybersecurity attitude. For example, we tend to segregate our lives into work and home, or school and home, rather than seeing that it's the totality of our habits and behaviours that matters. The chances are that what we learn in work-oriented cybersecurity training and education — if we learn anything at all — doesn't carry over to the rest of our lives. Even if we think about cybersecurity at work (and most people are just too busy to think about it), even fewer people think about it when setting up a home router or wifi network, or using their mobiles every day. We have become a quick-to-click culture.

A final suggestion is to treat all of this as a long-term process. Changing attitudes and behaviours isn't a one-time, set-and-forget activity. Like many areas where we have achieved lasting behavioural change — for example, the UK's Clunk Click Every Trip seatbelt campaign, or Australia's Cancer Council Slip, Slop, Slap, Seek, Slide campaign, one of the most successful health campaigns in Australia's history — it requires sustained attention to the most critical behaviours.

But to help us we have lots of research findings from behavioural science — such as the concept of the nudge, popularised by Richard Thaler and Cass Sunstein in their book Nudge: Improving Decisions about Health, Wealth, and Happiness. The idea behind nudge theory is to make it easier for people to make better choices by shaping their environment. An example might be a personalised app that helps people make healthier lifestyle choices by indicating that if they have a medical condition such as diabetes, choosing reduced-sugar products may be better for them.

We also have some new approaches to education and training, such as microlearning, which makes it easier for people to absorb content quickly, at any time and in a way that reduces cognitive load, and gamification, which offers immediate and positive rewards for making positive behaviour choices.

Only by combining these are we likely to reduce the number of errors, making us all safer online.

At TNK2, we focus on the human factors of cybersecurity.

Through our CyEd and Upling products, we provide personalised cybersecurity awareness and training and digital literacy education based on behavioural science, nudge theory, gamification and microlearning.

Our proprietary software that sits behind CyEd and Upling, the Behavioural Assessment Engine, considers all of those factors and more to help identify vulnerabilities and build more effective knowledge, attitudes and behaviours. To learn more, visit us at https://tnk2.com.au.

Come back to the TNK2 publication to read more stories.

