Understanding End User Security Decisions

A look at mental models and risk communication to better understand the security decisions end users make, why they make them, and how to guide them in their decision-making process.

Introduction

Developers are faced with a difficult task: design an application that is usable by a wide range of end users, with the expectation that these users do not share the developer's knowledge. An area of particular significance is computer security. The average user is unlikely to understand the intricacies of TLS or botnets. Additionally, the user is rarely focused on security; it is almost always secondary to the primary purpose of the software. While end users sometimes make seemingly irrational decisions, those decisions are, in actuality, calculated ones based on limited knowledge.

Despite a lack of knowledge, it is important that a user's account is protected, and some of that protection inevitably falls to the user. How, then, can a programmer ensure users make secure decisions? This is by no means an easy question to answer. The research field of usable security and privacy has been growing continuously in an effort to answer it.

This post looks at two connected areas in usable security and privacy: user mental models and risk communication. The goal of looking at these areas is to help uncover what information users are missing, and how their existing knowledge, along with risk communication, guides their decision making.

Users’ Mental Models

A mental model is a user’s internal explanation of how something works in the real world. Mental models can be very useful for understanding end users’ security practices. They often reveal gaps in knowledge or other limiting factors that cause users to overlook a particular attack vector.

As an introduction, one paper [1] explored mental models of the internet. The study began by having subjects draw diagrams of the internet. The researchers then asked a variety of questions about each subject’s online data and where it goes.

They found that non-technical participants often represented the internet with a simple system wherein the participant connects to a central database. The diagrams typically contained only organizations the participant directly interacted with on a regular basis. These included companies like Google or Facebook, but not entities such as the participant’s ISP.

One user’s drawing of the internet. Credit: [1] Kang et al.

When asked where their data goes, non-technical participants listed fewer places than technical users did. Of note, the non-technical answers were more vague, such as ‘whoever tries to make money off of you’. While most subjects understood that the services they use have access to their data, there was an alarming misconception: “A few [subjects] were not sure if information would be stored permanently, using the evidence of having seen webpages removed.” This is an excellent example of users basing their mental model on something familiar. They understand that web pages can change and be removed, and they wrongly extend this idea to conclude that the data stored about them is just as ephemeral.

This study shows how users’ mental models are limited to things they have knowledge of. While this is a somewhat obvious statement, it is important to consider when designing for users. Another concern can be gleaned from this study: how to add information to an existing mental model. If the user who drew the diagram above were taught about ISPs or DNS, where would these fit in? It is unlikely that an explanation alone would lead to them being correctly added to the model. Therefore, it is important to consider a user’s existing mental model before attempting to add to it.

In another look at mental models, Rick Wash [2] explored folk models of home computer security threats. The term ‘folk models’ draws a parallel to folk tales: stories that are passed around culturally and typically contain inaccuracies. In total, he found eight distinct folk models in his interviews, falling into two distinct categories: viruses and hackers.

Of the four virus models, only one led subjects to actively protect themselves. This group thought viruses silently stole personal information such as credit card numbers but did nothing to the computer itself; to combat this, they regularly used anti-virus software. The other three groups generally thought viruses harmed the computer by crashing it or erasing data. While all thought viruses could be installed through active methods (e.g. running a program), some also thought simply visiting a website could install a virus. One user related the automatic nature of web cookies to how viruses could be installed. For these three models, the most protection subjects felt they needed was to avoid unknown or insecure websites.

The four folk models for hackers provide greater insight into how mental models develop. The first model describes hackers as ‘graffiti artists’: typically skilled individuals who want to show off by hacking a computer. They tend to choose targets at random, and little can be done to protect against their attacks. The next model describes hackers as ‘burglars’ who look around a computer for pieces of financial information they can use themselves. They choose targets opportunistically, so protection is important to ward off attacks. The third group felt hackers target ‘big fish’ specifically, so these subjects did not worry because they saw themselves as not important enough to be targeted. The final group saw hackers as ‘contractors’ who steal information for criminals. These contractors typically target databases to harvest more information at once, so subjects protected themselves by only using services they believed would handle security correctly.

This study clearly highlights how mental models are created. People base their models on something concrete that they already understand. Wash also found that subjects would extrapolate their existing models to cover new scenarios, as seen in the user who related viruses to web cookies. Finally, the study demonstrates that mental models are shared among individuals: while the study was primarily qualitative, every model discovered was held by multiple subjects.

From this look at mental models, there are a few key takeaways that can be summarized as follows:

  1. Mental models often lack details or over-simplify — In the models of the internet, subjects typically only knew about services they directly interacted with. As a result, their models were simple and excluded potential places their data could be leaked.
  2. Models are often based on real-world examples — The models for hackers made this especially clear. The ‘graffiti’ hacker chose targets at random, and there was little a subject could do to protect themselves, much like graffiti in real life. In contrast, the ‘burglar’ hacker could be thwarted because they are opportunistic, like a thief looking for unlocked cars. While some users held mixed mental models, many only considered one type of hacker.
  3. Users make their security decisions based on the mental models they have — For example, those who believed viruses were actively installed only had to be careful about the software they downloaded. In their model, there was no concern with visiting an insecure website. For those thinking of hackers as graffiti artists, they didn’t bother with security because there was nothing that could be done to prevent an attack. If an attack vector is not present in a user’s mental model, they are more likely to make insecure decisions that don’t consider that vector.

To lead end users toward better security decisions, there is a clear path forward. Developing users’ mental models will help them to understand attack vectors and make informed decisions. This is not a trivial task by any means. A mental model that is easy to understand but covers all vectors is difficult to craft. The next section looks at how to best communicate potential risks to users.

Risk Communication

Risk communication is a well-studied field when applied to natural disasters or health concerns. However, it has only recently been applied to computer security. At its core, risk communication is the process of technical experts informing non-technical users of the choices they have and the potential consequences of each. Of note, risk communication is not about forcing users into making the most secure decision; instead, it is about allowing them to make an informed cost-benefit analysis.

This was the main focus of [4], whose authors sought to improve Signal’s authentication ceremony. The study first attempted to increase risk perception, a user’s sense that a risk is involved. By redesigning dialogs to more clearly communicate the possibility of a man-in-the-middle attack, the authors significantly increased the number of users who perceived some risk. Once users knew there was a risk involved in sending a message, the next step was giving them an informed choice.

The authors did not force users to make a specific decision. First, they renamed the ‘authentication ceremony’ a ‘privacy check’ to better convey the benefit of the process. Next, they explained that the privacy check would take a few minutes and that users should avoid sending sensitive information if they chose not to do it. Thus, users understood the benefit of performing the check, but if they didn’t have the time, or simply didn’t care, they could skip it while understanding the risk.
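To make this concrete, here is a minimal sketch of how a messaging client might present that kind of choice. It is not Signal’s actual implementation: the prompt copy, the PrivacyCheckChoice type, and the handleChoice function are all hypothetical, written only to illustrate naming the risk plainly, stating the cost of the check, and leaving both options open.

// Hypothetical prompt shown when a conversation's keys change and messages
// might be intercepted. The copy follows the study's approach: name the risk,
// state the cost of the check, and let the user decline while understanding
// the risk.
interface PrivacyCheckChoice {
  label: string;
  action: "startCheck" | "sendAnyway";
}

interface PrivacyCheckPrompt {
  title: string;
  body: string;
  choices: PrivacyCheckChoice[];
}

const privacyCheckPrompt: PrivacyCheckPrompt = {
  title: "Privacy check recommended",
  body:
    "Your messages to this contact may be readable by a third party. " +
    "A privacy check takes a few minutes and confirms no one is listening in. " +
    "If you skip it, avoid sending sensitive information.",
  choices: [
    { label: "Run privacy check (takes a few minutes)", action: "startCheck" },
    { label: "Send without checking", action: "sendAnyway" },
  ],
};

// Both options stay available: the goal is an informed choice, not coercion.
function handleChoice(choice: PrivacyCheckChoice): void {
  if (choice.action === "startCheck") {
    // Navigate to the key-comparison screen (app-specific).
  } else {
    // Let the message go through; the user has accepted the risk.
  }
}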

This study demonstrates the key idea behind risk communication: allowing the user to make an informed decision. The most secure option isn’t necessarily right for all users, so it is important they understand the pros and cons of each choice. Similarly, it is important that the potential risk is clear. If users do not perceive a threat, they have no reason to perform a security action. By communicating both the risk and options clearly, users will be able to make the best decision for themselves.

Another study [5] looked at improving firewall messages. The authors state near the beginning of the paper that, “Risk communications in computer security have been based on experts’ mental models, which are not good models for typical users.” They base their work on another study [6], which found physical security mental models to be the best fit for explaining computer security to non-technical users. For their study, they replaced textual firewall messages with images depicting a person trying to get to a computer behind a locked door. Depending on the potential risk of allowing the application through the firewall, the person was either a burglar, an unknown entity, or a happy individual.

The drawing-based firewall messages. From left to right: malicious application, unknown application, safe application. Credit: [5] Raja et al.

They found that their drawings conveyed more risk to study participants than the textual messages did. However, a third of participants still preferred the text-based messages. It is also worth noting that the mental model created by the diagrams is potentially flawed. The images depict the application, ‘easyChat’, trying to gain access to the computer, but the application has already been installed; the firewall message is actually about allowing the app to use the internet. This model could falsely lead users to believe that blocking the application protects their computer, when in reality harm can still be done (e.g. erasing data).

This points to a difficult balance in risk communication. It is important to simplify for typical users, but simplification may introduce inaccuracies and draw complaints from technical users. There is unlikely to be a one-size-fits-all way of communicating risk. One potential option is to include a drop-down that contains the technical information in text form, as sketched below.
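The sketch below is one hypothetical way to combine the two: the risk level selects a physical-security metaphor image in the spirit of Raja et al. [5], while an expandable section keeps the exact technical wording available. The function name, image file names, and message copy are all illustrative assumptions, not the study’s actual materials.

// Hypothetical firewall prompt. The risk level selects the metaphor image,
// and an expandable section keeps precise details for technical users.
type RiskLevel = "malicious" | "unknown" | "safe";

interface FirewallPrompt {
  image: string;            // metaphor illustration shown to all users
  summary: string;          // plain-language message
  technicalDetails: string; // revealed only when the drop-down is expanded
}

function buildFirewallPrompt(appName: string, risk: RiskLevel, port: number): FirewallPrompt {
  const images: Record<RiskLevel, string> = {
    malicious: "burglar-at-the-door.png",
    unknown: "stranger-at-the-door.png",
    safe: "friend-at-the-door.png",
  };
  const summaries: Record<RiskLevel, string> = {
    malicious: "This application is known to be harmful.",
    unknown: "We can't tell whether this application is safe.",
    safe: "This application appears to be safe.",
  };
  return {
    image: images[risk],
    summary: `${appName} is asking to connect to the internet. ${summaries[risk]}`,
    // Being explicit that blocking only cuts off network access helps avoid
    // the flawed mental model described above.
    technicalDetails:
      `${appName} requested an outbound connection on port ${port}. ` +
      "Blocking it stops the connection but does not remove the application.",
  };
}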

Both of the studies above highlight the importance of helping users make informed decisions. Neither forced users into making a specific decision, but instead enhanced the users’ mental models to allow them to weigh the costs and benefits on their own.

These studies also demonstrate the difficulties of successful risk communication. Both went through multiple rounds of redesigning the applications for maximum clarity, and even then, there are improvements that could be made. [7] provides some excellent guidelines for risk communication. In particular, the authors mention three key things to consider when planning how to communicate risk:

  1. The goal of the communication (e.g., is it to educate users or draw them away from a security decision that may be too risky)
  2. What type of security messages and communication strategies would be most useful (for example, strategies reliant on visuals and mental models)
  3. The characteristics (e.g., level of knowledge and education, literacy and numeracy, mental models, attitudes/beliefs about the security issue) of individuals targeted by risk messages (e.g., knowledgeable Web users might desire more specifics than novice users regarding a security risk posed by a potentially malicious Web site).

These guidelines can serve as powerful tools when crafting messages for risk communication. They will help in reaching users in a relevant and understandable way. As a result, the message will enable users to make security decisions that fit their needs.
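One lightweight way to apply these considerations is to write them down before drafting the message itself. The sketch below is a hypothetical planning record shaped around the three points from [7]; the type name, field names, and example values are my own illustrative assumptions, not part of the original guidelines.

// Hypothetical planning record following the three considerations from
// Nurse et al. [7]: the goal of the message, the communication strategy,
// and the characteristics of the audience it targets.
interface RiskMessagePlan {
  goal: "educate" | "steerAwayFromRiskyChoice";
  strategies: ("visualMetaphor" | "plainText" | "mentalModelAnalogy")[];
  audience: {
    expertise: "novice" | "knowledgeable";
    wantsTechnicalDetail: boolean;
  };
}

// Example: a warning about a potentially malicious website, aimed at novices.
const maliciousSiteWarningPlan: RiskMessagePlan = {
  goal: "steerAwayFromRiskyChoice",
  strategies: ["visualMetaphor", "plainText"],
  audience: { expertise: "novice", wantsTechnicalDetail: false },
};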

Conclusion

This post looked at two specific areas of research in usable security. First, mental models help in understanding end users’ decisions. Users typically make rational decisions but are confined by their limited mental models. Many users draw upon real-life examples for their models, but they may also extrapolate inaccurate parallels. Improving a user’s mental model of a threat will help them make informed decisions.

To improve users’ mental models, risk communication can be used. With risk communication, it is important to clearly and succinctly define the threat. From there, users should be empowered to make the decision that best fits them, which involves explaining the costs and benefits of each option.

While it is easy to see end users as cavemen making incoherent security decisions, these poor decisions are ultimately the developer’s responsibility. When designing applications, it is important for developers to understand the end user’s mental model. From there, they can communicate risk in a clear way that both builds on that model and fills in any gaps. Finally, developers should clearly describe the pros and cons of each security decision. By doing this, users will be able to make informed security decisions that fit their needs, without needing a technical understanding of the underlying threat.

Citations

  1. Ruogu Kang, Laura Dabbish, Nathaniel Fruchter, and Sara Kiesler. ‘My Data Just Goes Everywhere’: User Mental Models of the Internet and Implications for Privacy and Security. In Proceedings of SOUPS 2015.
  2. Rick Wash. Folk Models of Home Computer Security. In Proceedings of the Sixth Symposium on Usable Privacy and Security (SOUPS). ACM, 2010.
  3. National Research Council et al. Improving Risk Communication. National Academies, 1989.
  4. Justin Wu, Cyrus Gattrell, Devon Howard, Jake Tyler, Elham Vaziripour, Kent Seamons, and Daniel Zappala. ‘Something isn’t secure, but I’m not sure how that translates into a problem’: Promoting autonomy by designing for understanding in Signal.
  5. Fahimeh Raja, Kirstie Hawkey, Steven Hsu, Kai-Le Clement Wang, and Konstantin Beznosov. A Brick Wall, a Locked Door, and a Bandit: A Physical Security Metaphor for Firewall Warnings. In Symposium on Usable Privacy and Security (SOUPS) 2011.
  6. L. Camp, F. Asgharpour, D. Liu, and I. Bloomington. Experimental Evaluations of Expert and Non-expert Computer Users’ Mental Models of Security Risks. In Proceedings of WEIS 2007.
  7. Jason R. C. Nurse, Sadie Creese, Michael Goldsmith, and Koen Lamberts. Trustworthy and Effective Communication of Cybersecurity Risks: A Review. In Workshop on Socio-Technical Aspects in Security and Trust (STAST 2011). IEEE, 2011.
