Side effect of Pegasus malware

Eugene Pilyankevich
Aug 26, 2016


This rant started as a discussion with a friend who, horrified by yesterday's news, asked:

How should the Apple vulnerability affect day-to-day decisions in designing secure mobile apps? Why should engineers waste time implementing access control and active compartmentation, enforcing a positive security model, encrypting everything, and managing keys, quadrupling their development budget in the process, if, in the end, the platform their code runs on will fail them?

My answer has more to do with our minds than with the actual technical details of the vulnerabilities in the Pegasus case. The hype around this spyware and the corresponding bugs reminds us all that the way we direct attention to the news has more to do with the habits of our attention than with the actual content of the news.

We tend to direct attention to loud, rare events and mark them as important. This is a result of evolution: throughout history, we have survived by being able to detect rare, unknown threats and mobilize ourselves.

The problem is that these anecdotal cases affect our decision-making, and regular engineers (who are quite distant from computer security) end up asking questions like the one that led to this rant: "What's the point of building secure apps if the underlying system itself is flawed and I can't do anything about it?"

Being involved in designing products in one of the most hype-charged markets, cryptography, I might see the world through a distorted lens, but this happens all over the place, and it gets worse each year as security problems become more public.

Application security as an exercise in balance

Planning application security is a balancing act along many axes: threats versus countermeasures, usability versus guarantees, flexibility versus maintainability.

But most of all, it is the balancing challenge of effort versus result: the goal of a security system is to stop an armed adversary of a certain level of skill and power, not to achieve some "theoretical 100% security" (which is a rather abstract idea anyway).

Pegasus spyware and the iOS threat model

Practical security is all about threat prevention, which, in turn, is based on a consistent threat model. Part of the threat model is within your reach; part is not. There is no cryptographic math to prevent rubber-hose cryptanalysis (although a plausible-deniability feature sometimes limits an attacker's persistence).

Threat prevention should be grounded in two coordinates: there are objects you trust and objects you don't, and there are properties you control and properties you don't.

Any valuable news should either:

  • update our threat model with new types of threats and require a re-assessment of values and principles — these are the actually valuable pieces of data;
  • or provide anecdotal evidence that even the best architectures are subject to successful attacks.

Does the Pegasus spyware add anything new to the threat model around iOS?

  • iOS can be remotely jailbroken via a sequence of vulnerabilities: possible; we knew that before.
  • Apple still has some vulnerable code: who would have thought otherwise?
  • Decrypted data in memory can be stolen even if your app is super-secure: captain obvious (though even this point is somewhat actionable; see the sketch below).
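
You cannot stop a kernel-level implant from reading your process's memory, but you can shorten the window in which plaintext sits there. A minimal, best-effort sketch in Swift (my illustration, not anything from the Pegasus reports): wipe key material as soon as it is no longer needed. On Apple platforms, memset_s is visible to Swift and, unlike plain memset, is not silently optimized away by the compiler.

```swift
import Foundation

// Best-effort wipe of sensitive bytes once they are no longer needed.
// Caveat: Data is a copy-on-write value type, so stray copies may survive
// elsewhere; this narrows the exposure window, it does not close it.
func wipe(_ secret: inout Data) {
    secret.withUnsafeMutableBytes { buf in
        if let base = buf.baseAddress {
            _ = memset_s(base, buf.count, 0, buf.count)
        }
    }
    secret.removeAll()
}

var sessionKey = Data("stand-in for real key material".utf8)
// ... use sessionKey for decryption while it is needed ...
wipe(&sessionKey) // the plaintext key no longer lingers until deallocation
```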

All of the above boils down to one thing: Apple is as prone to security problems as anyone else. Well, that shouldn't come as a surprise from the authors of the goto fail; code anyway. Yet Apple works hard to eliminate any detected problems in a (mostly) well-thought-out and consistent manner.

Apart from that train of thought, the Pegasus case does not generate any valuable insights into security, nor does it change the threat model; it just reinforces the obvious suggestions.

However, the Pegasus case does tell us something about ourselves.

It shows our own mental limitations and cognitive biases, combined and multiplied.

In the real world, we somehow care more about 20 people killed in a terrorist attack nearby than about 20 million killed in a civil war in Africa. This is a function of our attention, not a subject for moral judgment, and in some cases it even makes sense for your own survival (which controls your judgment indifferently, whatever your ethical views of the world are).

Yet this principle seems to fail for more abstract things that are distributed evenly: somehow, in the minds of many, a theoretical vulnerability that can currently be exploited only by the proprietor of a rare technology is more dangerous than the absence of basic encryption for the locally stored sensitive data in their app, or an incompetent SSL configuration.
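
To put that comparison in perspective, the "boring" defence is cheap. Here is a minimal sketch of encrypting sensitive data before it ever touches local storage, assuming a modern Swift stack with Apple's CryptoKit (which postdates this article; in 2016 the same shape would have been built on CommonCrypto):

```swift
import CryptoKit
import Foundation

// Seal sensitive data with AES-GCM before writing it to disk;
// the authenticated ciphertext also detects tampering on read-back.
func sealToDisk(_ plaintext: Data, key: SymmetricKey, url: URL) throws {
    let box = try AES.GCM.seal(plaintext, using: key)
    // .combined is nil only for non-default nonce sizes; the default is fine.
    // .completeFileProtection adds iOS Data Protection at rest on top.
    try box.combined!.write(to: url, options: .completeFileProtection)
}

func openFromDisk(key: SymmetricKey, url: URL) throws -> Data {
    let box = try AES.GCM.SealedBox(combined: Data(contentsOf: url))
    return try AES.GCM.open(box, using: key)
}
```

Where the key itself lives is the other half of the problem; a Keychain sketch of that follows in the summing-up below.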

Some useful outcomes

Still, there are some practical insights this case gives us:

Strong encryption actually stops (some) nation-state adversaries: these 'unknown governmental hackers' have to use expensive, sophisticated malware to snoop on private communications.

The victim still has to assist the attacker: even the most sophisticated attacks are only as good as their 'click this link' lure. That avenue, in turn, is being closed off more and more by Apple's crusade for a secure user experience within a pre-defined set of rules.

You are your only saviour: as an application developer, all you can do is build additional security where your threat model suggests it — and where you can. There is a lot you can do without becoming a professor of computer security.

Summing up

There are a lot of security properties and procedures we do control: the consistency of our security strategy and tactics, basic protection of sensitive assets, key management, access policy. Without these, adversaries don't need any sophisticated remote jailbreak exploit packs — assets will leak via much more banal pathways.
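
As one concrete instance of those controllable properties, here is a hedged sketch of the key-management and access-policy side on iOS: the data-encryption key (the kind used in the earlier sketch) is generated once and stored in the Keychain, with an accessibility attribute that encodes the policy. The account name is a placeholder of mine, not an established convention.

```swift
import CryptoKit
import Foundation
import Security

// Generate a 256-bit data-encryption key and store it in the Keychain.
// kSecAttrAccessibleWhenUnlockedThisDeviceOnly encodes the access policy:
// the item never migrates to another device via backup and is unreadable
// while the device is locked.
func makeAndStoreKey(account: String) throws -> SymmetricKey {
    let key = SymmetricKey(size: .bits256)
    let keyData = key.withUnsafeBytes { Data($0) }
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: account,
        kSecValueData as String: keyData,
        kSecAttrAccessible as String: kSecAttrAccessibleWhenUnlockedThisDeviceOnly,
    ]
    let status = SecItemAdd(query as CFDictionary, nil)
    guard status == errSecSuccess else {
        throw NSError(domain: NSOSStatusErrorDomain, code: Int(status))
    }
    return key
}
```

None of this requires exotic expertise; it is exactly the kind of banal-pathway hygiene the paragraph above describes.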

We can’t change the world and the way news are presented. But we can consciously focus on things in our control, learn a thing or two about consistent security posture, design systems based on proper methodologies — and enjoy the 0day spy movie craze in the spare time.

Links

  1. Lookout, "Technical Analysis of Pegasus Spyware"
  2. Citizen Lab, "The Million Dollar Dissident: NSO Group's iPhone Zero-Days used against a UAE Human Rights Defender"
