On keeping secrets in comfort

Eugene Pilyankevich
6 min read · Oct 24, 2016


As someone who used to spend a lot of time analyzing how security incidents happen, I often think about how little, unfortunately, technical peculiarities have to do with most security breaches.

Making people keep secrets is very hard. Making groups follow guidelines is laborious. Making secrecy policies work in groups is even harder.

You can fail only once.

To prevent OPSEC failures, we construct hi-tech crutches for human problems, frequently without trying to understand the nature of those problems. Enforcing good behavior technically works, but not understanding the true nature of the underlying failures leaves a lot of blank spaces in our models.

This post is an attempt to approach these problems from an angle somewhat untypical for security discussions. Because otherwise, all that’s left is, actually, silence:

But if you do have to talk — there’s a lot you have to think about first

Secrecy failure as a spherical cow

Let’s conduct a small mental experiment on the evolution (degradation) of secrecy within a group:

  • We have a closed group of people who’d like to keep their communication secret from, say, law enforcement.
  • They get introduced to e-mail, for the first time in their lives.
  • They get instructed to use e-mail only with PGP.

Step 1. Involuntary traitor

Phase 1. Setup:
We’ve set up our communication channels, users have exchanged keys, and right now everyone’s fine.

Based on a (rather paranoid) policy, to communicate with his secretive peers, a user has to type the keychain password when the e-mail client starts and the key password when encrypting each message.
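To make that friction concrete, here is a minimal sketch of such a policy in Python, assuming the third-party python-gnupg package and keys that have already been exchanged; the addresses and fingerprint below are illustrative, not part of any real setup:

```python
# A minimal sketch of the "paranoid" workflow above. Assumes the third-party
# python-gnupg package, a recipient public key already imported into the local
# keyring, and a known signing-key fingerprint. All names are illustrative.
import getpass
import gnupg

gpg = gnupg.GPG()                       # uses the default ~/.gnupg home
SIGNING_KEY = "0123ABCD0123ABCD0123"    # illustrative fingerprint of User X's key

def send_encrypted(body: str, recipient: str) -> str:
    # The private-key passphrase is requested for every single message --
    # exactly the friction the policy imposes on the user.
    passphrase = getpass.getpass("Key passphrase: ")
    result = gpg.encrypt(body, recipient,
                         sign=SIGNING_KEY, passphrase=passphrase)
    if not result.ok:
        raise RuntimeError(f"Encryption failed: {result.status}")
    return str(result)   # ASCII-armored ciphertext, ready to paste into e-mail

ciphertext = send_encrypted("Meet at the usual place.", "user-y@example.org")
```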

Phase 2. Add comfort:
To operate efficiently, the user needs a convenient e-mail app across all platforms. More devices mean a bigger attack surface. The risk grows, but not drastically.

Phase 3. Add some real-world circumstances:
Now User X goes on holiday, takes pictures of nice landscapes with his cellphone, and his wife asks him to send her the pictures. Over e-mail, to her regular Gmail account.

To succeed, User X, who is a member of the secret group:

  • either disables default encryption,
  • or creates himself a Gmail account.

Which is basically the same outcome: User X gets introduced to a workflow where everything is simpler, where you don’t have to type the keychain password and don’t have to limit yourself to counterparts within the secrecy group.

Or, in a much simpler case, User X forgets to tick the ‘encrypt’ checkbox, and User Y accepts the plaintext message and continues the conversation.

This is where the fun starts.

Step 2. Induction

Now, let’s expand our mental experiment and see how this phenomenon spreads through the group.

We all know that groups somehow tend to lower their security standards when even a minor part of the group ignores them. We just don’t notice the moment it happens.

Recently, while reading N.N. Taleb’s “The Most Intolerant Wins”, which partially inspired this post, I found some of the ideas really familiar: once a sufficient number of people reject secrecy, the policy goes bust and, at best, becomes a deceptive formality.

Taleb’s post beautifully outlines why:

A low-discipline User X will sooner or later fail to comply with the secrecy policy and give it up completely; User Y, who has no such discipline problems of his own, will still have to give up the discipline to keep talking to User X.

Not only is such laziness contagious (a bad example for others); at some point even the disciplined members will have to give up secrecy to contact the non-disciplined ones.

What happens is just a more subtle, less visible version of the ‘what-the-hell effect’, where a conscious decision to give up the security regime never happens: people just convert one by one, through an inverse version of the network effect.

What introduced the problem? The availability of the option in the first place. Whenever the choice is available, sooner or later it will be made in the least conscious fashion.
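In code, the difference looks roughly like this (hypothetical helpers, not any particular mail client’s API): the only reliable way to remove the choice is not to expose it at all.

```python
# Two API shapes for sending a message -- hypothetical helpers, not any real
# mail client. When encryption is an optional flag, the tired-brain default is
# the dangerous one; when the option does not exist, there is nothing to forget.

def pgp_encrypt(message: str, recipient: str) -> str:
    return f"<ciphertext for {recipient}>"      # stand-in for real PGP encryption

def deliver(payload: str, recipient: str) -> None:
    print(f"to {recipient}: {payload}")         # stand-in for actual mail delivery

def send_flexible(message: str, recipient: str, encrypt: bool = False) -> None:
    """'Flexible' design: encryption is one more checkbox to remember."""
    payload = pgp_encrypt(message, recipient) if encrypt else message
    deliver(payload, recipient)                 # forgetting the flag means plaintext

def send_constrained(message: str, recipient: str) -> None:
    """Constrained design: the plaintext path simply does not exist."""
    deliver(pgp_encrypt(message, recipient), recipient)

send_flexible("holiday photos", "wife@gmail.com")        # oops: plaintext by default
send_constrained("meeting notes", "user-y@example.org")  # no way to get this wrong
```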

Step N. Failure.

One way or another, the group will break secrecy badly enough to get uncovered by its adversaries. In the case of this man with a fine moustache, it ain’t no good.

Conclusions

Fixing human behavior with more flexible or friendlier technology does not help, security-wise. Enforcing consistent behavior is nice when you’re a drug kingpin and everyone is afraid of you.

What should we do about it? Think, then do.

Flexibility problem

Doesn’t it feel like having both encrypted and regular e-mail is more flexible? It is, in absolutely cold, rational terms.

But a tired brain knows no choice; it jumps to the default, lazy behavior:

  • Some users will sooner or later skip the extra 1–2 clicks needed for good security practice, because things work without them and the group doesn’t enforce the rules too strictly: unencrypted e-mails will be opened and read.
  • A user will sooner or later go plaintext by default, because most of his addressees are either outside the group or have given up on the security policy anyway.

Usability “problem”

Common knowledge suggests that security impairs usability and ease of use, especially when human actions are required to keep the security protocol consistent.

Actually, I don’t think there’s anything wrong with having less “usability” in some cases. The cognitive strain of a non-silky-smooth interaction induces slower, more attentive thinking patterns and is generally good for maintaining consistent behavior.

Why we need dedicated tools

Dedicated secure messengers and file-sharing tools are a simple answer to the problem:

  • They’re explicit, they have safe defaults, and they fail safely.
  • Their application design is driven by security demands, not merely complemented by security tools.

They do deliver on their guarantees, at the price of some usability (which, as I suggested above, is not such a bad effect) and yet another app sitting on your phone or desktop.
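Here is what “safe defaults, fail safely” might look like in code, as a hypothetical sketch rather than any real messenger’s implementation: the send path has no branch that produces plaintext, so the failure mode is a refused message, not a leaked one.

```python
# A sketch of "failing safely", with hypothetical helpers rather than any real
# messenger's API: when no key is available for a recipient, a dedicated tool
# refuses to send instead of quietly falling back to plaintext.

KEYRING = {"user-y@example.org": "<User Y's public key>"}   # illustrative keyring

class NoKeyError(Exception):
    """Raised instead of ever producing a plaintext message."""

def encrypt_for(message: str, recipient: str) -> str:
    key = KEYRING.get(recipient)
    if key is None:
        raise NoKeyError(f"no key for {recipient}; refusing to send plaintext")
    return f"<ciphertext of {len(message)} chars for {recipient}>"  # stand-in

def send(message: str, recipient: str) -> None:
    print(encrypt_for(message, recipient))      # fail closed: no key, no send

send("meeting notes", "user-y@example.org")     # works: key present
try:
    send("holiday photos", "wife@gmail.com")    # no key on file
except NoKeyError as err:
    print("refused:", err)                      # nothing leaks, by design
```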

Protecting communication does not hide it

Having Signal Messenger installed suggests you have something to hide, right? And what frequently follows a minor OPSEC failure? Escalation.

XKCD #538

Covert communication, hidden within a regular communication stream, is an efficient route to plausible deniability and protection against rubber-hose cryptanalysis. That is, if, by a happy coincidence, the owner of your messaging / exchange solution does not accumulate metadata (Signal mostly doesn’t; are you sure about the rest?).

For really strong security guarantees, protecting knowledge of the secret’s existence is as important as protecting the content of the secret.

Manage user behavior

Security and secrecy have more to do with human behavior than with engineering. There will always be vulnerabilities, stronger and weaker communication protocols, and so on. Yet, somehow, a significant portion of security incidents do not require the adversary to possess any unique knowledge — just a basic understanding of human beings and a bunch of outdated CVEs.

Addressing these risks starts with understanding that human behavior is far from rational, and that, multiplied within a group, simple errors of human decision-making amplify into disastrous effects. In my experience, no amount of education and awareness prevents these effects, but clever choice architecture and constant behavior monitoring do.

If you want one TL;DR, take this:

Users of your system might be smart, trained, and motivated. But at some point in time they will be tired, their conscious decision-making resources depleted, and they will fail the secrecy regime. Unless you design your security system to stay strong with lazy thinkers and distracted minds on board.
