About Usable Security

Here are a few notes I jotted down during talks by Adrienne Porter Felt, Jon Oberheide, and Matthew Smith on the topic of usable security. These talks were part of Enigma, a conference launched this year by USENIX.


Why is Usable Security Hard, and What Should We Do about It? (Adrienne Porter Felt)

A good security feature should be invisible when you don’t need it and helpful when you do. Unfortunately, this is really hard to achieve. For instance, HTTPS certificate validation errors are hard for end users to understand. This lack of usability makes the fight against phishing attacks harder.
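
As a concrete illustration (mine, not from the talk), here is a minimal sketch of what a certificate validation failure looks like at the API level, using Node’s built-in https module and the public badssl.com test hosts. Browsers have to translate low-level error codes like these into something end users can act on.

```typescript
import * as https from "https";

// expired.badssl.com deliberately serves an expired certificate.
https
  .get("https://expired.badssl.com/", (res) => {
    console.log(`unexpected success: HTTP ${res.statusCode}`);
  })
  .on("error", (err: Error & { code?: string }) => {
    // Node surfaces the raw OpenSSL verification failure, e.g.
    // CERT_HAS_EXPIRED: exactly the kind of jargon users struggle with.
    console.error(`TLS validation failed: ${err.code ?? err.message}`);
  });
```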

“Usable security is a science.” When working on usable security, researchers and engineers should define controls, formulate hypotheses, and test them. In no case should they trust their gut or common wisdom.

To convince yourself of the importance of a scientific approach to usable security, here are a few example scenarios encountered by the Chrome team at Google.

Example 1: browser notifications

Rich APIs that were prominent in the mobile app ecosystem are coming to the web. As a consequence, notification systems that until recently were only available to mobile applications are now available to websites as well. Notifications in browsers raise many concerns: they might be spammy, could be used for phishing, or might simply confuse users (who are not used to having websites interact with them after the corresponding tab is closed). Thus, when implementing notifications in the browser, care must be taken not to annoy users. A rigorous study performed by the Chrome team confirmed the following hypothesis:

Attention to notification requests, and their acceptance rate, drop dramatically after several prior requests.
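
To make the setting concrete, here is a minimal sketch (my assumption, not Chrome’s actual logic) of how a website could request notification permission sparingly with the standard browser Notification API, given that acceptance drops after repeated prompts.

```typescript
// Ask for notification permission sparingly. Runs in a browser.
async function maybeRequestNotifications(): Promise<void> {
  if (!("Notification" in window)) return; // API unavailable

  // Never re-prompt a user who has already decided: "granted" and "denied"
  // are sticky, and repeated prompts are exactly what erodes acceptance.
  if (Notification.permission !== "default") return;

  // Only ask in response to an explicit user action (e.g. a settings
  // toggle), so the request arrives with context instead of as a surprise.
  const permission = await Notification.requestPermission();
  if (permission === "granted") {
    new Notification("Notifications enabled", {
      body: "You can turn these off at any time.",
    });
  }
}

// e.g. wired to a settings toggle:
// button.onclick = () => maybeRequestNotifications();
```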

Example 2: safe browsing

Safe Browsing is the feature that tells Chrome when a website is malicious; it also exists in other browsers. The warning tries to convince users not to proceed to the website. Here again, Google wanted to test different ways of displaying the warning so as to increase the rate at which users go back to safety. The difficulty lies in providing a message that is both brief and specific about the risk. The Google team ran a series of A/B tests with various warning messages to measure the difference between a simple informative message (e.g., “This file is malicious”) and an informative message followed by a command (e.g., “This file is malicious. To stay safe, don’t run it”). The results showed that adding the command did not make any difference. Intuitively, one could reasonably have hypothesized that adding the command would lead more people to go back to safety. This counterintuitive result shows the importance of not trusting one’s gut when working on usable security: security experts are strongly biased and are not representative of the general pool of users. Conducting such a scientific experiment has a cost, but it avoids rolling out a security feature that would not have worked. The following paper provides more details.
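
For illustration, here is a minimal sketch of the kind of statistics behind such an A/B test: a two-proportion z-test on the rate at which users return to safety under the two warning texts. The counts below are made up, and Google’s actual analysis was certainly more involved.

```typescript
// Two-proportion z-test: is variant A's success rate different from B's?
function twoProportionZ(
  successesA: number, totalA: number,
  successesB: number, totalB: number,
): number {
  const pA = successesA / totalA;
  const pB = successesB / totalB;
  const pooled = (successesA + successesB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pA - pB) / se; // |z| > 1.96 is significant at the 5% level
}

// Hypothetical counts: "informative" vs "informative + command" variants,
// counting users who went back to safety out of users shown the warning.
const z = twoProportionZ(412, 1000, 405, 1000);
console.log(
  `z = ${z.toFixed(2)}; ` +
  (Math.abs(z) > 1.96 ? "significant difference" : "no detectable difference"),
);
```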

Example 3: internationalization

Most academic studies are run on young, English-speaking Americans (often from college towns…). Indeed, it is easier to run a study with the people available around you. However, this leads to a real lack of literature evaluating how security behaviors differ across cultures. As an example, HTTPS error appearance rates vary by country. Another example: Japanese users were found to heed Chrome warnings less often than users from other countries.

To conclude, this talk emphasized the need to treat usable security as a science: security researchers and engineers should do great science and share it with everyone. Even in industry, there is merit in taking a scientific approach (e.g., it avoids launching useless features). For academia, this is a research topic that is very useful to industry, with lots of direct applications.


Security and Usability from the Frontlines of Enterprise IT (Jon Oberheide)

The impact of authentication breaches is significant. Some of the largest examples include Target (direct impact on 40M consumer credit cards), Adobe (indirect impact on 153M end-user credentials), and Juniper (meta impact on thousands of organizations). Security is improving thanks to patches, regular updates, and bug bounties. However, breaches are increasingly complex and devastating. There is no single point of failure in security, but this talk puts an emphasis on the intersection between security and usability.

The security industry and usability

The security industry ($88B) promotes complexity and sophistication over simplicity and usability. As a result, complexity is perceived as more effective by users.

This is due to aggressive visuals and terminology that stage a militarization of security. Oberheide argues that there are no battles or front lines, and that the security industry should instead focus on making security products easy to use in order to promote wider adoption. To avoid complexity, security solutions should stop praising defense in depth: lining up multiple solutions to reduce the probability of an attacker getting past all layers of security. This leads to complex and expensive solutions that do not scale.

Oberheide argues that these simple measures can mitigate many attacks:

  1. Strong authentication
  2. Up-to-date devices: among Duo Security users, 71% of Android devices, 75% of OS X devices, and 50% of iOS devices are out of date.
  3. Encryption of content

Organizations and usable security

Security experts need to continue advocating fundamentals of security outside of their industry.

The FTC’s “Start with Security” program offers some of the sanest guidelines for organizations looking to embrace security.

Because computing architecture has evolved from mainframes to client/server and finally to cloud/mobile, many security controls that used to be deployed are no longer relevant. Oberheide argues we need to follow the same end-to-end principle as the Internet’s architecture: move security to the edges.

Usable security and end-users

Security solutions often interact with users only negatively, which creates a negative mindset regarding security. One good practice highlighted during the talk comes from Slack: they created an anomaly detection department, and when an anomaly is detected, they directly contact the end user who triggered it to verify the action was intended. Getting feedback from users is very important when designing and implementing security solutions.
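
Here is a minimal sketch of this human-in-the-loop pattern (my assumption; the talk did not describe Slack’s actual pipeline): when an anomaly fires, ask the user who triggered it before escalating. The askUser helper is hypothetical and stands in for a real messaging integration.

```typescript
interface Anomaly {
  userId: string;
  description: string; // e.g. "login from a new country"
}

// Hypothetical stand-in for a real messaging integration (DM, email, ...).
async function askUser(userId: string, question: string): Promise<boolean> {
  console.log(`[message to ${userId}] ${question} (yes/no)`);
  return true; // pretend the user confirmed
}

async function handleAnomaly(anomaly: Anomaly): Promise<void> {
  const intended = await askUser(
    anomaly.userId,
    `We noticed: ${anomaly.description}. Was this you?`,
  );
  if (intended) {
    console.log("User confirmed; recording as a benign event.");
  } else {
    console.log("User denied; escalating to the security team.");
  }
}

handleAnomaly({ userId: "U123", description: "login from a new country" });
```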

Does usable security have an indirect impact on security posture within an organization? Are happy users less susceptible to social engineering attacks?

Oberheide is convinced we should promote safety instead of security. Security solutions should make it easier to exhibit safe behaviors instead of implementing security restrictions.


An example of usable security: 2-factor authentication

2-factor authentication is typically performed using hardware tokens. However, hardware tokens are expensive and provide a poor user experience. Alternatives include phone calls and SMS, but both rely on insecure cell carrier channels. Software tokens are hard to use because of the countdown timer. This explains the current rise of 2-factor authentication through push notifications: users can easily confirm authentication requests from their phone, and the technology relies on strong transport security and asymmetric cryptography. Will 2FA one day replace passwords?
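
Here is a minimal sketch of the asymmetric-crypto core of push-based 2FA (my reconstruction; real products such as Duo Push add enrollment, transport security, and UI on top): the phone holds a private key, the server keeps the matching public key, and each login is approved by signing a fresh challenge. This uses Node’s built-in crypto module with Ed25519 keys.

```typescript
import { generateKeyPairSync, randomBytes, sign, verify } from "crypto";

// Enrollment: the key pair is generated on the device; only the public
// key is sent to the server.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Login: the server issues a fresh random challenge for this attempt...
const challenge = randomBytes(32);

// ...the device shows the request to the user and, on approval, signs it.
// (For Ed25519, Node expects the algorithm argument to be null.)
const signature = sign(null, challenge, privateKey);

// The server verifies the signature against the enrolled public key.
const approved = verify(null, challenge, publicKey, signature);
console.log(approved ? "login approved" : "login rejected");
```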

Usable Security — The Source Awakens (Matthew Smith)

Matthew Smith cited this quote from Angela Sasse:

Users Are Not the Enemy

and extended it with the following quote:

Developers are not the enemy either.

To motivate the importance of involving developers (anyone working on the technical side of a product) in the security process, Smith gave an example: a 2013 study of 4.5M unique certificates found that 610,000 were bad, causing errors in end users’ browsers. Most of these certificates had been installed by administrators on systems used by end users.

To show the impact of involving developers in the design of security solutions, Smith mentioned that his team worked with 15 developers to understand how to secure HTTPS on Android. These discussions gave them enough information to develop a framework applicable to over 13,000 apps.

Another example of the value of building tools that are usable by developers came from malware detection. Typically, information recovered from malware analysis is very lossy. Smith’s team improved a decompiler and showed that it multiplied the malware detection rate of students by 3 and that of malware experts by 1.5. The resulting decompiler, DREAM++, will be released soon.

Smith insisted on the importance of getting in touch with security researchers and system developers to figure out which security solutions are most usable. Indeed, problems solved for security researchers and system developers do not then have to be faced by end users.


Please leave any comments you may have below or reach out to me on Twitter at https://twitter.com/NicolasPapernot