
What US Intelligence learned from Claude Monet
In 1914, Claude Monet, the father of Impressionism, wrote of his growing frustration with his deteriorating vision, describing how he was forced to memorise where colours were placed on his palette. Colour no longer had the same intensity. “Reds have begun to look muddy,” he wrote. “My painting was getting more and more darkened.” He was forced to rely on the labels on tubes of paint in place of his own vision. Monet suffered from age-related cataracts, which caused yellowing and darkening of the lenses of his eyes.
Following its failure to predict some of the most important socio-political and economic events of the 21st century, such as the credit crunch and conflicts in the Middle East, the US Office of the Director of National Intelligence questioned the lens through which it saw the world. It was perhaps reflecting on Monet’s fate that it set up the Good Judgement Project, a band of forecasters assembled by Philip Tetlock, a world-leading expert in prediction, to reassess whether it was looking at the world in the right way. The key finding that explains many failed predictions is that humans do not behave rationally when making decisions that involve uncertainty (e.g. taking a bank loan, buying insurance, placing a bet).
Prospect theory
The project drew on research performed in the 1970s by Israeli psychologists Daniel Kahneman and Amos Tversky, which showed that the utilitarian model of human behaviour so favoured by economists was flawed. Utilitarian models assume that we make rational decisions that maximise utility. Tversky and Kahneman instead proposed Prospect Theory, which underpins the field of behavioural economics today. Prospect Theory can be summed up in two graphs:

What do these graphs tell us? The first graph tells us that we feel losses roughly twice as keenly as we feel gains; that we will take irrationally large risks to avoid a loss; and that excessive caution will prevent us from fully benefitting from a gain. The second graph tells us that we overestimate the likelihood of gains and losses with probabilities below roughly 30%, and underestimate the likelihood of those above it.
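For readers who want the curves behind these graphs: Tversky and Kahneman’s 1992 follow-up paper gives explicit functional forms for the value and probability-weighting curves. The sketch below uses their published median-fitted parameters (the exact curves plotted in this post may differ) to reproduce both effects numerically:

```python
# Tversky & Kahneman (1992) functional forms with their median-fitted
# parameters -- standard published estimates, not figures from this post.
ALPHA = 0.88   # diminishing sensitivity to the size of gains and losses
LAMBDA = 2.25  # loss aversion: losses weigh roughly twice as much as gains
GAMMA = 0.61   # curvature of the probability-weighting curve

def value(x):
    """Subjective value of a gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

def weight(p):
    """Decision weight we attach to an objective probability p."""
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

# First graph: a loss looms roughly twice as large as an equal gain.
print(value(100), value(-100))     # about 57.5 versus about -129.5

# Second graph: small probabilities are overweighted, large ones
# underweighted, with the crossover near 30%.
print(weight(0.05), weight(0.95))  # about 0.13 versus about 0.79
```

Running the sketch shows both claims in the text at once: the 100-unit loss feels more than twice as bad as the 100-unit gain, and a 5% chance is treated as if it were 13%.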
Why are we bad at risk-based judgements? Tversky and Kahneman are silent on the subject, but research to date has identified a variety of “heuristics”, or rules of thumb, that characterise how we assess risks. These are summarised in the table below.

How do these rules play out when assessing cyber risk?
Prospect Theory reveals two things: we make risk-based decisions based not on the odds, but on the way the odds are described to us; and we are sensitive to gains and losses to different degrees.
Framing risks
How we frame a risk matters: defenders operate in the realm of losses and tend to be risk-seeking, while attackers operate in the realm of gains and tend to be risk-averse. This is why defenders will forgo the certain expense of extra IT security protection and live with the cyber risk, even if a breach would be much more costly to remedy; and it is why attackers use the tools already available to them to achieve modest results, rather than making the little extra investment that would increase the probability of much larger gains. Attackers are deterred more easily than we might think.
We should therefore minimise the red area in the first graph by actively deciding what our cyber risk appetite is, and by consciously avoiding the irrational risk-decision traps described in the table above when managing the risk of a loss. We should also maximise the green area in the first graph by making it riskier for cyber attackers to gain from us.
Framing a successful security investment case
Security investment cases often fail because the investment is framed as a definite loss, while the risk of not investing is merely a probable loss. A better way to frame security investments, and so overcome the irrational ways in which we evaluate risk, is to present decision-makers with information about security improvements as a gain or as a loss avoided: for example, decision-makers should be told that patching prevents malware from exploiting software vulnerabilities, which keeps their computers running fast and reduces the likelihood of data loss.
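To make the framing trap concrete, here is a sketch using the standard Tversky–Kahneman (1992) value and weighting functions with entirely hypothetical figures: a certain £50k security spend versus an unmitigated 25% chance of a £200k breach. The two options have identical expected value, yet the probable loss “feels” less bad than the certain one, so the security spend loses the comparison:

```python
# Hypothetical figures for illustration only. Value and weighting
# functions use Tversky & Kahneman's (1992) median-fitted parameters.
ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61

def value(x):
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

def weight(p):
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

certain_spend = value(-50_000)           # security budget: a definite loss
gamble = weight(0.25) * value(-200_000)  # breach risk: a probable loss

# Equal expected value (both -50k), but the gamble is felt less keenly,
# so a decision-maker framed this way prefers to live with the risk.
print(certain_spend, gamble)  # about -30710 versus about -30241
```

This is exactly the risk-seeking-in-losses behaviour the first graph predicts, and it is why reframing the spend as a loss avoided, as suggested above, changes the outcome of the comparison.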
Creating uncertainty for adversaries is the best form of defence.
Our adversaries are human and are subject to the same economic realities and emotions as ourselves. They are naturally risk-averse when it comes to making gains: beyond innovating to make the initial campaign a success, there is little incentive for attackers to continue when only marginal gains can be made.
There is hope for defenders here. Defenders need to adopt a defensive mindset that involves thinking about resilience, adaptability and deterrence, as well as security. Deterrence is the most often overlooked. It recognises that increasing the cost to an adversary and making the outcome less certain is as effective at deterring attacks as the threat of retribution.
4 effective deterrents
In a target rich environment, it pays to be unattractive. Here are 4 inexpensive ways to be so:
- Use carrots and sticks: computer network users are the first line of defence against cyber attack. Make staff aware of their role in information security and explain your acceptable IT use policies. Test users, and consider mandatory training for those that fail, whilst providing more privileges to those that act responsibly.
- Actively defend your network by collecting logs from your IT systems so that they can be examined for information about the attacker – and make it clear to your adversaries that you do. In the same way that CCTV deters crime without physically stopping it, visible monitoring is an added risk for attackers because it creates uncertainty.
- Exploit law enforcement agency (LEA) powers: cyber criminals have vices that can be exploited effectively. They are human, and therefore prone to laziness, vanity and errors that reveal their identity.
- Look litigious: this works for insiders. Those acting with malicious intent will think twice if they believe their employer will use civil injunctions and the law courts to seek redress.
Epilogue
When the cataracts were removed from Monet’s eyes in 1923, he threw out most of his artwork from the previous ten years and returned to his original painting style. His later works have even led experts to speculate that he could sense ultraviolet light.
The Good Judgement Project (GJP) has been running since 2011, harnessing the wisdom of the crowd to forecast world events. The results show that a blend of statistics, psychology, training and varying levels of interaction between individual forecasters consistently produces the best forecasts. The top forecasters in the GJP are reportedly 30% better than intelligence officers with access to actual classified information. Full marks to the US Government for making a rational decision to support this project.
If you have appreciated this blog, please give credit to those whose work I have exploited, including: my colleague Kelly Shortridge; and leading thinkers Amos Tversky, Daniel Kahneman, Donald Redelmeier, Philip Tetlock, Bruce Schneier, Maurice Allais and Nassim Nicholas Taleb. Their work is widely published online.
