Everything will be broken

This is a small handout doc accompanying my presentation at SecurityBSides Kyiv 2016.


After many years of working as an engineer, then as a technical executive, I’ve been stumbling into the same situation over and over again: groups of very smart engineers fail to protect million-dollar infrastructures from a 13-year-old kid from Venezuela. Banks that pass dozens of compliance and regulatory audits leak data via open PHP resources.

On the flip side of the coin, simple infrastructures maintained by 2–3 people withstand thousands of attacks for decades without a problem.

A few years ago I started to think: if technological advancements in security don’t really help, maybe we should step back and take a look at the bigger picture?

This presentation is an attempt to talk about the principles and models that should drive security design.

Everything will be broken

First, let’s formulate a threat model that is both easy to understand and easy to remember while we discuss more complicated matters:

The only efficient approach to thinking about modern infrastructures is planning for multiple unexpected failures: your security posture should not be altered by any sequence of vulnerabilities. Or, at least, you should try hard to get it there.

This requires a completely different way of thinking, and this presentation is an attempt at it.

Why so?

We, as engineers, are biased toward tools, exciting engineering ideas, and so on. Thinking about processes, principles and the big picture is blocked by mental laziness, because thinking abstract, large-scale thoughts requires a lot of bias moderation from the conscious mind. That’s expensive. Getting excited about the latest SSL flaw is not; in fact, it’s reassuring.

1. We let them in

  • Open infrastructures: your perimeter is not your perimeter anymore. Public clouds, third-party API dependencies.
  • BYOD: another way of saying “bringing an external, ungoverned piece of hardware that can execute code within the trust perimeter”.
  • Blind package installation: remember how people randomly installed Docker base images? Or Dockerfiles building packages without verification?
  • Dependency creep: do you know which dependencies you pull in when you install that fancy node.js library?
  • Bad trust sources: stealable tokens; session hijacking; questionable auth protocols (OAuth2, WAT?)
  • Smart ‘thin’ clients: browsers and mobile clients know how to do security right, do they? Browsers have plenty of problems with code trust and crypto.
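One cheap countermeasure against blind package installation and dependency creep is pinning content digests and refusing anything that doesn’t match. A minimal sketch in Python (the artifact bytes and the “lockfile” digest here are hypothetical stand-ins for a real package and its pinned hash):

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Trust an artifact only if it matches a digest pinned at review time."""
    # Compare content digests, not names or URLs: names can be squatted.
    return hashlib.sha256(data).hexdigest() == expected_sha256

# The digest you would pin in a lockfile after reviewing the package once:
pinned = hashlib.sha256(b"reviewed package contents").hexdigest()

assert verify_artifact(b"reviewed package contents", pinned)
assert not verify_artifact(b"tampered package contents", pinned)
```

The same idea is what `sha256sum -c` or lockfile integrity fields give you: the decision to trust is made once, at review time, not at every install.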

2. We let them get everything

  • Cryptography is expensive: while using crazy-slow scripting languages and odd network frameworks is not
  • Compartmentation is inconvenient: if you don’t plan it properly from the beginning, it will require exceptions
  • Keys together with data: some engineers still think that keeping keys in a separate table / separate file is OK
  • 1FA inside the trust perimeter, or no auth at all: this means blindly trusting your services and infrastructure

3. Three main reasons

  • Reactive security: designing tools to mitigate known attacks, instead of designing measures that guarantee security consistency
  • Wrong focus: focusing on tools, not principles
  • Magical thinking: believing that many separate measures are a good substitute for a consistent security system, that ‘things will evolve into a consistent system by themselves’

What shall we do?

The most important risk management ideas haven’t changed at all in the last three decades. No, wait. In the last 15–20 centuries.

Captain Obvious section

9 important models to keep in mind when planning security measures and tools:

… and the most important one:

Keep it simple: minimize the attack surface by minimizing complexity; don’t impair usability by over-engineering.

These are just models, and security is about doing practical things, right? Not so. Trial and error is a good way to study problems with little risk. Creating an atomic bomb by trial and error would have led to the extermination of human life on Earth.

I believe that evolving your technical infrastructure while maintaining a strong security posture is risky enough to warrant thinking about models beforehand.

Here are a few interesting links:

Practice! (ok, mostly)

1. Proactive tools first

Reactive tools and approaches protect us from known attacks.

Proactive tools enable defence against unknown techniques and attacks by enforcing good behaviour patterns (a consistent state of certain system components) on our infrastructures.

Example: binary monitoring

  • reactive: monitor the FS and memory for known signatures and behaviours (antivirus)
  • proactive: monitor the FS for all changes and raise alarms/events by policy (HIDS); the policy might include antivirus signatures too, but is not limited to them
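The proactive half of that example is small enough to sketch. A toy HIDS-style integrity monitor in Python: record a hash baseline of everything, report every change, and leave the “which changes alarm” decision to a policy layer (the function names are mine, not from any particular product):

```python
import hashlib
import os

def snapshot(root: str) -> dict:
    """Record a SHA-256 baseline of every file under `root`."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def diff(baseline: dict, current: dict) -> dict:
    """Report *all* changes; a separate policy decides which ones alarm."""
    return {
        "added": sorted(current.keys() - baseline.keys()),
        "removed": sorted(baseline.keys() - current.keys()),
        "modified": sorted(p for p in baseline.keys() & current.keys()
                           if baseline[p] != current[p]),
    }
```

Real HIDS tools (Tripwire, OSSEC, etc.) add signed baselines, kernel hooks and policy languages, but the proactive shape is exactly this: enumerate all deviations from known-good state, instead of hunting for known-bad signatures.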

Example: web firewalls

2. Love boring tools

SSL is not boring:

  • BEAST, POODLE, Heartbleed, DROWN, LogJam, FREAK
  • constant race of attacks vs fixes
  • get excited all the time, write white papers and blog posts
  • good for researchers, bad for engineers and end-users

The crypto users’ fantasy is boring crypto: crypto that simply works, solidly resists attacks, and never needs any upgrades.

Try NaCl or Themis, then compare that experience to using a Diffie-Hellman exchange (only one small component of what the aforementioned libraries do) via libcrypto/libssl-dev. Now you know the reasons for boring crypto.

Boring tools have:

  • a threat model
  • acceptance criteria for algorithm properties (for example, code whose execution path does not depend on secrets eliminates side-channel attacks without having to know about specific side channels)
  • strong math proofs
  • the user in mind

SQL injection filtering on firewalls is an exciting programming exercise with a quick reward. Writing prepared DB statements is boring.
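And the boring option is trivially short. A sketch using Python’s built-in sqlite3 module (table and payload are made up for illustration): the classic injection payload is bound as a parameter, so it is only ever data, never SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# A classic injection payload; with a prepared statement it is just a string.
user_input = "alice' OR '1'='1"

rows = conn.execute(
    "SELECT role FROM users WHERE name = ?",   # placeholder, no concatenation
    (user_input,),
).fetchall()

assert rows == []  # the payload matched nothing: it was never parsed as SQL
```

No firewall rule to maintain, no signature list to chase; the whole attack class is gone by construction.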

3. Trust and data protection

crypto is easy; key management is hard

The data owner is the sole source of trust. How do we manage this trust?

  • end-to-end encryption in communications
  • storage keys kept outside the storage perimeter
  • a chain of trust leading to its source: secrets the user knows
  • symmetric keys stored in asymmetric containers
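The last two points combine into one pattern: wrap each symmetric storage key for its owner, so that only the wrapped form ever enters the storage perimeter. A structural sketch; note that `wrap`/`unwrap` here are loud stand-ins (plain XOR, which is NOT encryption) for a real asymmetric seal such as a NaCl box or a Themis Secure Message, used only to keep the example runnable with the standard library:

```python
import secrets

# STAND-IN: real systems wrap the key to the owner's *public* key
# (NaCl box, Themis Secure Message, etc.). XOR is used here only so the
# structural sketch runs without third-party crypto libraries.
def wrap(key: bytes, owner_secret: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(key, owner_secret))

unwrap = wrap  # XOR is its own inverse

owner_secret = secrets.token_bytes(32)   # lives with the data owner only
data_key = secrets.token_bytes(32)       # symmetric key for one record

# Only the *wrapped* key crosses into the storage perimeter.
stored_blob = wrap(data_key, owner_secret)

# A storage-side attacker holds stored_blob but not owner_secret,
# so the data key stays out of reach; the owner can always recover it.
assert unwrap(stored_blob, owner_secret) == data_key
assert stored_blob != data_key
```

The point is the topology, not the primitive: the chain of trust ends at a secret the owner holds, and compromising storage alone yields only wrapped keys.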

Shameless plug: things we do at Cossack Labs

At Cossack Labs, we’re developing several techniques to manage trust and protect data. In one cryptographic model, we aim to combine:

  • secret-key (symmetric) crypto: performance and strength against most attacks
  • public-key (asymmetric) crypto: owner-bound and directional; enables mapping real-world relationships and enforces compartmentation; less efficient and more resource-hungry
  • decrypt on the ends: deliver high-quality cryptography all the way to the front-ends
  • compartmentation via public-key cryptography: the finite set of storage keys should be random and accessed via user-bound public/private keypairs; this way leakage is minimal
  • limit symmetric-key leakage by having N keys for M elements (N < M), distributed randomly; the key-to-record mapping exists only in asymmetric “messages”

But what happens if the trapdoor functions that make asymmetric crypto strong fail?

The bigger N is, the less damage each recovered key causes.
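The N-keys-for-M-records idea is easy to see in a few lines. A sketch (the numbers are arbitrary; in the real scheme the record-to-key mapping would live only inside per-owner asymmetric “messages”, here it is a plain dict for illustration):

```python
import secrets

M, N = 1000, 10                      # 1000 records, 10 symmetric keys
keys = [secrets.token_bytes(32) for _ in range(N)]

# Each record is encrypted under one randomly chosen key; in the real
# scheme this mapping is itself hidden inside asymmetric containers.
assignment = {record: secrets.randbelow(N) for record in range(M)}

# If one key leaks, the expected exposure is M/N records, not all of M.
leaked_key = 0
exposed = [r for r, k in assignment.items() if k == leaked_key]
print(f"one leaked key exposes {len(exposed)} of {M} records")
```

With N = 10, a single recovered key exposes roughly a tenth of the data set; push N toward M and the blast radius of any one key shrinks accordingly.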

4. Limiting proliferation

Strong compartmentalization:

  • firewalls: good subnet isolation, no implicit trust in resources
  • storage: no one-factor master keys to anything
  • trusted network: does not exist; build communication on ephemeral keys even within the trusted perimeter
  • repeated authentication: repeat authentication with different factors; crypto-reinforced OTP
  • protect requests and authentication: Zero-Knowledge Proofs as a universal auth mechanism (SMP in Secure Comparator, OTRv3) and as a key derivation technique (SRP, SPAKE2)
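The “crypto-reinforced OTP” point is less exotic than it sounds: HOTP (RFC 4226), the counter-based ancestor of the TOTP codes in every authenticator app, fits in a dozen lines of standard-library Python. The last line checks the RFC’s own test vector:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
assert hotp(b"12345678901234567890", 0) == "755224"
```

As a repeated-authentication factor it is cheap: the server just tracks the counter per user and re-challenges at sensitive operations, so a stolen session token alone is not enough.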

5. Afraid of performance penalties?

Just follow these links:

And a very interesting whitepaper without a fancy preview: https://www.cs.ox.ac.uk/files/2859/ares_main.pdf

Last tip: how to succeed at (almost) anything

Time and resources are finite, so choose what you spend them on. Keeping this in mind, invest in decisions that bear great results, not great excitement.