Practical Human Security: Winning the Battle

By Anastasios Arampatzis and Justin Sherman

Part 3 of 3 in “Practical Human Security.”

“If you know the enemy and know yourself, you need not fear the result of a hundred battles.” ~ Sun Tzu, The Art of War

Introduction

The first article in our “Practical Human Security” series, “The Enemy,” focused on the external environment and the threats it introduces to the cyber landscape, while our second piece, “The Human,” discussed the heuristic biases and beliefs that shape human responses to these threats. In the final part of our series, we are going to discuss how “not [to] fear the result of a hundred battles,” or in other words, how to design security policies around — and for — the human.

Theoretical Background

Several decades ago, child development psychologist Jean Piaget stated that “the principal goal of education is to create men who are capable of doing new things, not simply of repeating what other generations have done — men who are creative, inventive and discoverers.” Building on this, we can view learning as a process of acquiring and building knowledge with strong social and experiential components.

Educational research has found that people learn more effectively and deeply through engagement, motivation, cooperation and collaboration, and participation in real experiences; conventional teaching methods therefore cannot meet today’s learning requirements. Building and sharing knowledge, and appealing to learners’ inherent motivation, proves quite advantageous for effective teaching and curriculum development, a far cry from bureaucratic styles of education that value quantity over quality. Through techniques and methodologies such as open discussion forums and hands-on exercises, people in small groups can develop critical thinking, learn to mobilize toward common goals, and draw on a collective intelligence that is greater than the sum of its individual members.

In accordance with Bloom’s taxonomy, though, teaching activities should not just focus on transmitting information; they should also focus on application. Leveraging old information to solve a new and challenging problem is essential to fostering retention and developing new knowledge. Perhaps predictably, this quickly becomes cyclical — with old knowledge reinforced through application, and application yielding new knowledge to be applied, and so on and so forth. The byproduct of this process is often referred to as deeper learning. Gamification and simulation are just two ways of putting this deeper learning process into practice.

Defaults and “Nudges”

In their 2009 book Nudge, economist Richard Thaler and legal scholar Cass Sunstein outlined the idea of libertarian paternalism: a decision-design framework in which nobody’s choices are removed or restricted (the libertarian element), but choice architects, by framing options in a certain way, can help people pick the best one (the paternalism element). In essence, the idea is to nudge individuals in the right direction without restricting their freedoms. There are many ways to achieve this “nudging,” including reordering options and increasing the amount of available background information, but we want to focus on one method in particular: changing the default.

Defaults are incredibly powerful when it comes to decision-making; the decision science and behavioral economics studies on this are plentiful. Because of status quo bias — essentially, our aversion to putting effort into change — most of us are likely to stick with the default option in any given decision scenario. Nudge shows this to be true with everything from college dining hall buffets to corporate 401(k) plans. For these reasons, security-by-default is one of the most effective ways to “win the battle” when it comes to practical human security. Making cyber safety the status quo will all but guarantee more secure behavior overall, because most tech users will simply stick with that default.

This idea of “defaults” has many implications for how organizations design, execute, and reinforce security training, but that will be addressed in the next section; for now, we’re going to focus on how organizations can institute security-by-default in technology itself.

Implement the strongest possible encryption on all devices you buy for your organization, be they smartphones, laptops, or IoT sensors. Install, and set as the default, encrypted communication software — from Signal with its perfect forward secrecy to PGP-secured email applications. Restrict Internet access (e.g. to work-only websites) and ensure all accounts, by default, have the minimum level of access required to perform basic tasks (e.g. prevent software installations). Enable email filtering, mandate multi-factor authentication, and set baseline password requirements. Configure malware removal software, internal and external firewalls, and automatic account “lockdowns” after a certain period of inactivity. Continuously monitor new industry guidelines and cutting-edge research to adapt these security defaults — for instance, what constitutes a strong password. And overall, spare employees from dealing with complicated and undesirable questions of distrust whenever possible; if they’re going to be annoyed when you ask them to double-check that their personal USB drive hasn’t been compromised, then don’t let them plug it in to begin with. When security is the default, more employees and users will almost automatically become more secure.
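
As a rough illustration of what checking devices against such defaults could look like, here is a minimal Python sketch; every field name, threshold, and baseline value in it is a hypothetical assumption made for the example, not a prescription from this article or from any standard.

    # A minimal, illustrative audit of an endpoint against a hypothetical
    # security-by-default baseline. All fields and values are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Endpoint:
        hostname: str
        disk_encrypted: bool
        mfa_enforced: bool
        usb_storage_blocked: bool
        idle_lock_minutes: int
        min_password_length: int

    # Hypothetical organizational defaults.
    MAX_IDLE_MINUTES = 15
    MIN_PASSWORD_LENGTH = 12

    def audit(endpoint):
        """Return the ways an endpoint deviates from the secure-by-default baseline."""
        findings = []
        if not endpoint.disk_encrypted:
            findings.append("full-disk encryption is disabled")
        if not endpoint.mfa_enforced:
            findings.append("multi-factor authentication is not enforced")
        if not endpoint.usb_storage_blocked:
            findings.append("removable USB storage is allowed")
        if endpoint.idle_lock_minutes > MAX_IDLE_MINUTES:
            findings.append("automatic lock after inactivity is set too high")
        if endpoint.min_password_length < MIN_PASSWORD_LENGTH:
            findings.append("password policy is weaker than the baseline")
        return findings

    if __name__ == "__main__":
        laptop = Endpoint("sales-laptop-07", disk_encrypted=True, mfa_enforced=False,
                          usb_storage_blocked=False, idle_lock_minutes=30,
                          min_password_length=8)
        for finding in audit(laptop):
            print(f"{laptop.hostname}: {finding}")

In practice, checks like these would be enforced by device-management tooling rather than a standalone script; the point is that deviations from the default surface automatically instead of relying on each user to opt in to security.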

It’s also important to understand that just because some of these cybersecurity elements are “defaults” doesn’t mean they should be presented as options in the first place. Largely, they should not. Encryption and multi-factor authentication are two perfect examples of human-side security factors that shouldn’t have an “opt-out.” Just as it shouldn’t be an option for employees to disable or weaken encryption, it shouldn’t be an option to use only single-factor authentication (i.e. just a username and password). When it comes to security, defaults without the libertarian element are often best.

Decision Heuristics, Feedback Loops, and Security Training

As we discussed earlier in this series, security training and education are currently inadequate for addressing environmental cyberspace threats, as well as the heuristic and cognitive biases that guide our cyber behavior. Even with security-by-default, as addressed in the last section, we still aren’t protecting against situations in which (a) organizations can’t make security the default, because control inherently lies with the human, and (b) humans change that default, becoming less secure in the process.

The classic (and currently pervasive) solution to this problem is to impose a clear corporate security policy — for instance, that users cannot use portable USB storage devices. On its face this seems effective, as the structure of the organization inherently incentivizes compliance with these policies…right? Wrong. From a human perspective, this is likely to fail for many reasons:

  • Users may disregard the policy because they don’t understand the risk involved.
  • They may not be aware of the policy, or they may even forget the policy.
  • Circumstances may arise in which users have to use a removable storage device, and so they will make functional exceptions (convenience over security).
  • Humans are prone to optimism bias — thinking we’re better than others at certain behaviors (e.g. cybersecurity) — and will thus exempt themselves from secure behavior even when functionality or convenience isn’t directly part of the equation.
  • If employees violate the policy once and there are no negative consequences, they will likely do so again.

And this is without even addressing many other issues with current security training, which were discussed in our previous piece.

The most reliable way to eliminate the risk, then, is to take the user out of the equation. But this is like amputating an aching arm. We cannot divorce humans from technology, or technology from humans — so while this works in theory, it’s unacceptable in practice.

So the question remains: what about the underlying issues of risk perception and diffusion of responsibility (from which many other risks arise)? In these cases, it’s necessary to raise user awareness of security issues and actively engage them in the security process, without creating an environment of paranoia. In short: it’s about designing security training and security policies for the human.

Awareness and training programs are important mechanisms for disseminating security information across an organization. They aim to stimulate secure behaviors, motivate stakeholders to recognize security concerns, and educate them to respond accordingly. Since security awareness and training are driven not only by requirements internal to the organization but also by external mandates, they must align with regulatory and contractual compliance drivers as well. Current literature and guidelines, such as those from ENISA and NIST, additionally emphasize alignment with business needs, IT architecture, and workplace culture.

Target participants of awareness programs include senior management, technical personnel, employees, and third parties employed by the organization (e.g. contractors and vendors). Awareness programs are essential because organizations need to ensure that stakeholders understand and comply with security policies and procedures; stakeholders also need to follow specific rules for the systems and applications to which they have access. However, as we explained previously, stakeholders’ behavior is influenced by individual traits and biases that affect their compliance with security policies and procedures. Thus, security awareness must be designed to tackle beliefs, attitudes, and biases.

Designers of security awareness programs should consider adopting the systems approach to training, which is regarded as an effective educational practice in the field of human factors and ergonomics.

Central to this approach is identifying participants’ cultural biases, which can facilitate needs assessments and provide an alternative criterion for grouping program participants. Because individuals’ cultural biases influence their perception and decision-making calculus, they also affect an individual’s risk assessment. This goes unaddressed in most contemporary security training programs, which is immensely problematic for how employees individually frame their knowledge after the session concludes. Without a relevant cultural framing (and this culture can take many dimensions), employees will fail to fully understand why security is so important.

Thankfully, framing cybersecurity in light of cultural biases can be done without expending significant additional resources. For instance, while convenience is heavily prioritized in technology, there are many cases in which users find a system’s aesthetics to be far more important. It is therefore possible for employees to value security over convenience — it’s just about making them understand why they should in the first place. Understanding where groups of employees are coming from (e.g. does their job value convenience, collaboration, speed, etc.) will help frame security’s relevance in the correct light; we might, for example, find that a litigation team best understands security in the context of risk avoidance, whereas an accounting team best understands security in the context of the confidentiality, integrity, and authenticity of data. Thus, it’s essential to design security policies with cultural biases in mind. In addition to creating and selecting culturally-relevant training materials and simulation exercises, it’s important to back this up with a strong corporate security culture.

In a similar vein, organizations must build strong feedback loops during and after security training; with weak feedback loops — meaning pro-security choices don’t yield any visible rewards (other than the unspoken “congrats, you didn’t get hacked!”) — employees are not behaviorally conditioned or incentivized toward safe and secure cyber behavior. During training, the best source of guidance is past “success stories” in which security controls prevented security incidents, smart behavior blocked social engineering attacks, and clear reporting procedures resulted in the quick trapping and containment of an active breach. Post-training, techniques such as randomly spotlighting employees for smart security practices will further solidify feedback loops that promote cyber-secure behavior. (Intermittent rewards like these are also extremely effective for conditioning behavior.)

Implementing simulations and gamification during training will then strengthen these existing feedback loops. Every time we can link secure cyber behavior with increased reward — even if it’s in a “fake” environment — we can shape smarter behavior in the workplace. If employees experience the value of screening an email during a simulation (e.g. preventing a phishing attack from a foreign competitor), then they’re more likely to scrutinize suspicious messages in real life. This is because self-realization and application, as previously referenced, are incredibly important for knowledge retention and re-application.

Closely linked with strong feedback loops is positive association. Research on cognitive biases has shown that individual judgments are affected by exposure to positive or negative stimuli (e.g. a smiling or frowning face), which decision scientists refer to as affect bias — our quick emotional reaction to a given stimulus. Thus, associating security messages with positive images (e.g. happy customers mean more profit) is quite effective for ensuring users’ compliance with your security policies. Rewarding strong performance on security tests (whether scheduled or “spontaneous,” e.g. sending employees simulated phishing emails) will also help achieve this end.
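
To make the feedback loop concrete, here is a small Python sketch, built on made-up outcome labels and sample data, of following up on a simulated phishing test: employees who report the message are rewarded right away, while other outcomes trigger timely, targeted follow-up.

    # A hypothetical follow-up routine for a simulated phishing campaign.
    # Outcome labels, follow-up actions, and the sample data are all made up.
    from collections import Counter

    FOLLOW_UP = {
        "reported": "send a thank-you note and award recognition points",
        "ignored": "send a short reminder about the reporting procedure",
        "clicked": "enroll in a brief, targeted refresher exercise",
    }

    def follow_up(campaign_results):
        """Print a follow-up action per employee plus a simple reporting-rate metric."""
        for employee, outcome in campaign_results.items():
            print(f"{employee}: {FOLLOW_UP[outcome]}")
        summary = Counter(campaign_results.values())
        print(f"Reporting rate: {summary['reported']}/{len(campaign_results)}")

    if __name__ == "__main__":
        results = {"alice": "reported", "bob": "ignored", "carol": "clicked"}
        follow_up(results)

The specifics matter less than the principle: the reward (or the coaching) arrives immediately after the behavior, which is exactly what a strong feedback loop requires.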

Anchoring bias, or our tendency to rely on the first piece of information presented on a topic, also heavily influences attitudes toward new security practices. If employees are told that strong passwords have at least six characters, for example, they’re likely to use just six characters and not opt for anything stronger; they won’t deviate from this anchoring information. This has implications for everything from email scrutiny to online browsing behavior. Similarly, we humans are prone to frequency bias, or prioritizing issues about which we have more information, and recency bias, or prioritizing issues about which we’ve been educated most recently. If an employee is trained for five hours on password creation but only three on phishing attacks, then they will pay greater attention to the former (despite the latter being a greater and more complicated threat).

Since our brains rely heavily on the order and frequency with which information is presented, we need to design security policies for these tendencies. To design for anchoring bias, we should open each topic with the strongest and most effective security practices (e.g. recommend passwords of at least 12 characters rather than six); to design for frequency bias, it’s imperative we try to balance the time spent on a topic with its importance (e.g. spending the most time on social engineering threats); and to design for recency bias, we should end security training (and security retraining) by covering the most prevalent threats.
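
As a rough sketch of how these three considerations might shape a training agenda, consider the following Python snippet; the topics, importance weights, prevalence ranks, and session length are all invented for illustration.

    # A hypothetical agenda builder that accounts for the biases above: allocate
    # time in proportion to a topic's importance (frequency bias) and close the
    # session with the most prevalent threat (recency bias). Anchoring bias is
    # handled inside each topic by leading with the strongest recommendation.
    SESSION_MINUTES = 120

    # (topic, importance weight, prevalence rank: 1 = most prevalent)
    TOPICS = [
        ("social engineering and phishing", 5, 1),
        ("password and authentication hygiene", 3, 2),
        ("removable media and physical security", 2, 3),
    ]

    def build_agenda(topics, session_minutes):
        total_weight = sum(weight for _, weight, _ in topics)
        agenda = [(name, round(session_minutes * weight / total_weight), rank)
                  for name, weight, rank in topics]
        # Order the agenda so the most prevalent threat comes last.
        agenda.sort(key=lambda item: item[2], reverse=True)
        return agenda

    if __name__ == "__main__":
        for name, minutes, _ in build_agenda(TOPICS, SESSION_MINUTES):
            print(f"{minutes:3d} min  {name}")

However the numbers are chosen, the structure mirrors the guidance above: time follows importance, and the session ends on what employees are most likely to face next.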

Continuing with the order and timing of information: humans tend to attribute greater value to short-term costs and benefits than to long-term ones. In other words, security experts should emphasize not only the long-term and macro-level benefits of secure cyber behavior (e.g. better growth) but also the immediate, short-term ones. We need only turn on the news to see a plethora of examples for this emphasis, from avoiding massive monetary loss to preventing a legal and PR nightmare. Immediate costs and benefits resonate most effectively with us humans.

We already discussed positively reinforcing secure behavior, but it’s also (obviously) critical to punish violations of security policies. Having a corporate security policy that is not monitored or enforced is tantamount to having laws but no police. Organizations must monitor employee behavior — in addition to the behavior of those doing the monitoring — and act when rules are broken. This connects back to strong feedback loops and the idea of humans favoring the immediate effects of our actions: the best deterrent to breaking the rules is not the severity of consequences but the likelihood of being caught.

A final consideration to take into account is how to reduce the human cost of implementing security. This encompasses many of the ideas in our series, from security-by-default on the technology side to effective designing of security training, framing of cybersecurity issues, and conditioning of secure cyber behavior on the human side.

Evaluation of training programs is necessary to ensure they’re effective. To evaluate a program, measures of successful learning such as retention of information and usability should be examined. If a training program is deemed ineffective, a new needs assessment should be conducted and new training techniques should be considered during an iterative process (design, test, redesign, test, etc.).

Unfortunately, the aforementioned practices alone are not enough to totally win the battle; despite the title of our piece, presuming to be “victorious” in the truest sense of the word would be delusional. Security awareness must be a nationwide strategic goal. It requires a holistic approach, from governments, policymakers, and tech leaders to citizens, consumers, and students. Security awareness programs must be carefully designed to run through the backbone of our society and should become an integral part of our educational system. Curricula should not focus only on programming or technical literacy but also on cybersecurity literacy; we need to build a cyber lexicon and a common framework to understand cyber behavior. There’s still much to be done.

Conclusions

Peggy Ertmer argues that changing one’s attitude is a hard thing to do but can be achieved through practice, cultural support, and challenging beliefs through community. There’s a long path to follow until we reach a safer cyber environment, much like the path of Areti (Virtue) in the Choice of Hercules: narrow and full of difficulties in the beginning, but wide like an avenue at the end. As the old military saying goes, if you want peace, prepare for war.

Considering our series and its ideas in their entirety, this is exactly what we have to do. If we want to change the security culture of our society, we need, as Dr. Mary Aiken says, to stop, disconnect, and reflect. We need to remember the human.

References

“The Cyber Effect” by Dr. Mary Aiken.

“ICT in Collaborative Learning in the Classrooms of Primary and Secondary Education” by Ana García-Valcárcel, Verónica Basilotta, and Camino López.

“The Psychology of Security” by Ryan West, published in April 2008 in Communications of the ACM.

“Analyzing the role of Cognitive and Cultural Biases in the Internalization of Information Security Policies: Recommendations for Information Security Awareness Programs” by Tsohou, Karyda, and Kokolakis.

“How People Learn” from the National Academies Press.

“Nudge: Improving Decisions about Health, Wealth, and Happiness” by Richard Thaler and Cass Sunstein.