Nothing is as practical as a good theory — Kurt Lewin
My previous articles have concentrated on abstract matters: theories and concepts that are thinking about thinking, one way to look at different matters. They stayed at an abstract level rather than giving concrete examples. This post tries to combine them into a couple of examples. It is recommended, but not mandatory, to read the previous posts first; the key is the thinking, not a specific action (note: they are intentionally written in a certain way). Articles: first, second, third, fourth.
In earlier posts I mentioned a concept called the high frequency activity trap: acting on what is seen. It creates an illusion of success by gaining relative superiority, e.g. closing an incident, but it may fail to turn the special forces approach into conventional activity. One is then forced to act on the operational level (tactical in military terms) while the opponent controls the centre of gravity.
Good theory is the best way I know to frame problems in such a way that we ask the right questions to get us to the most useful answers — Clayton Christensen
The above quote means that, in practice, you need theory to frame the problem correctly before you start solving it; the more complex the environment, the more this is needed. The key is to go to the root of the issue, through first principles as in physics, to see what the matter at hand has actually been built from. The problem has often been trying to apply theories that are too big.
The special forces approach gains relative superiority for the moment through simplicity, security, repetition, surprise, speed and purpose, and through less being more: one can gain the upper hand with fewer resources than the opponent has, but by definition the relative superiority gained may not last long. In a cyber incident this may mean ad-hoc hardening such as blocking traffic, changing access controls, closing down the network, or blocking a user. That is resource intensive and may not scale for the moment: there may be a false flag operation while the attacker makes the actual attack elsewhere, there may be multiple attackers, or one may think only about efficiency and forget effectiveness. And when the situation is over, the changes are rolled back to the previous state.
“The first flaw is the error of excessive and naive specificity. By focusing on the details of past (current) events, we may be diverting attention from the question of how to prevent future tragedies, which are still abstract in our mind. To defend ourselves against Black Swans, general knowledge is a crucial first step.”
This is exactly the same as what was described in the high frequency activity trap, but told differently, moving from narrow to general knowledge; the trap is tactical (operational in the civil world) versus strategic thinking. Erdős and Rényi also showed in graph theory that the simple (Occam's razor) in a complex area, such as cyber security, politics, economics, or the internet, is an exception, not the norm.
Instead, we should also look with the high beam, not only the low beam: sense more than see. That happens when personal learning, the organizational model, ways of working and so on are modelled after the OODA loop, where the Orientation part takes Gary Klein’s Recognition-Primed Decision making into account. That means experience to see what has happened (and to interpret it correctly), and skill to notice deltas with intuition. Experience is a routine; unlike dogs, humans are not at alertness level yellow (Cooper’s colour codes for situational alertness) all the time, but are on autopilot.
As an example of relative superiority, I will use a personal example from the past (since I know what happened, it is easier to interpret it in writing).
In 2010 I had the honor to join a team called Blue5 and participate in the NATO CCDCOE organized Cyber Defence Exercise called Baltic Cyber Shield, a precursor of the current Locked Shields CDX. The team I was in won the whole exercise, and we also got to a situation where the attackers were not able to do anything without our consent (greetz to Blue5). We did not patch anything. We were dropped into an unknown environment without any practice time on the network (only some documentation was available to create expectations, and it was not up to date, as an inject in the game to create ambiguity).
The examples below are technical, but the thinking behind them applies to higher-level non-technical things as well. Pick the thinking patterns and adapt and create; do not copy the specific example.
The reason for the win was not a specific product or technology, but our approach, team composition, skills and knowledge. Since we did not patch anything, we concentrated on preventing attacks rather than vulnerabilities per se. I have been trying to codify the approach in different ways in order to make sense of it. Not everything mentioned below was specifically planned to follow such a model, even though it happened on the fly; it is mostly an interpretation of a group of people working on their own areas together, as a team, towards a shared purpose, in synchronization, adjusting what to do on the fly.
“You do not need to be fast if you control time”.
In earlier posts I wrote about a concept called near-mid-far, which is a systematic way to look at control and time for a specific issue. For example, far in control terms can be a datacenter handled by a 3rd party, and near in time terms can be a realtime radar.
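The near-mid-far idea can be sketched as a simple data structure. This is a minimal illustration I am adding here, not anything from the original exercise; the example controls and their placements on the two axes are assumptions.

```python
from dataclasses import dataclass

# Minimal sketch of near-mid-far as a classification along two axes.
# The controls listed and their placements are illustrative assumptions.

@dataclass
class Control:
    name: str
    time: str      # "near" = fast feedback, "far" = slow feedback
    control: str   # "near" = directly controlled, "far" = handled by a 3rd party

controls = [
    Control("realtime radar", time="near", control="near"),
    Control("3rd-party datacenter", time="far", control="far"),
]

def by_time(cs, t):
    """Return the names of controls at a given position on the time axis."""
    return [c.name for c in cs if c.time == t]

print(by_time(controls, "near"))  # → ['realtime radar']
```

The point of the structure is that time and control are independent axes: a thing can be slow but fully under your control, or fast but outsourced.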
We did similar things for all the systems, but here I describe only some of the Windows operating system level activities:
- Near (fast): one person was iteratively going through binaries on the filesystems which were at the time commonly used in attacks (cmd, ftp, tftp etc.), changing their access control lists to interactive use only. This means that even if an attacker gains System-level privileges with a remote exploit, it cannot fork a remote command prompt, since no interactive desktop is visible. (Granted, System can change the ACL back or reside in memory only, but at the time this was the common approach to run those binaries, which also brings the requirement to constantly evaluate your position in a changing environment.) Knowledge about the possible consequences is important: the CSRSS process needed access to cmd.exe in order to create user profiles, which means that if a user had not logged in before, the profile would not get created. For this moment, though, the goal was to gain relative superiority over the attackers by adding confusion and prevention. This also means that this specific change was the "what" and should not be blindly copied into any other environment; adopt the "why", the thinking behind it, instead. After going around all of the machines, and if everything still worked, the person went through another round, adding more binaries to the ACL change list. This allowed us not to disturb "business" while making the systems increasingly harder to penetrate.
- Mid (semi-fast): a person rolling out Group Policies to the machines, including unconventional ones which work in this situation but might not suit a corporation outside an incident. Both 1 (near) and 2 (mid) were buying time (slowing the attacker down) and thus giving more time to 3 (far).
- Far (slow): installing application whitelisting, which takes time to install, needs to learn the system, and then enforces protection, after which activities 2 and 1 can stop or slow down.
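The iterative "near" rounds above could be sketched roughly as follows. This is a hypothetical illustration, not the team's actual tooling: the binary batches are assumptions, and the icacls command template (granting execute only to the INTERACTIVE well-known SID, S-1-5-4) should be verified against your Windows version and needs before anything like it is run anywhere.

```python
# Hypothetical sketch of iterative "near" hardening rounds: each round
# restricts another batch of attack-prone binaries to interactive use only.
# Batches and the icacls-style template are illustrative assumptions.

ROUNDS = [
    ["cmd.exe", "ftp.exe", "tftp.exe"],   # first pass: most abused binaries
    ["net.exe", "at.exe"],                # later pass: widen the list
]

def harden_commands(binaries, sid="*S-1-5-4"):
    """Build icacls-style commands restricting execute to the INTERACTIVE
    well-known SID. Verify syntax and side effects before real use."""
    cmds = []
    for b in binaries:
        cmds.append(f"icacls C:\\Windows\\System32\\{b} /inheritance:r")
        cmds.append(f"icacls C:\\Windows\\System32\\{b} /grant {sid}:RX")
    return cmds

for i, batch in enumerate(ROUNDS, 1):
    print(f"# round {i}")
    for c in harden_commands(batch):
        print(c)
```

The structure matters more than the commands: do a small batch, verify nothing broke, then widen the list, exactly the evaluate-as-you-go loop described above.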
All these activities, while individual instruments, played music together. And since they ran at different speeds, they could be modelled as three separate OODA loops of different speeds, each contributing to the slower OODA loop of the whole team and observing the others. The same multiple different-speed OODA loops happen in corporations as well.
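The different-speed loops can be shown with a toy model: three loops completing at different cadences, all feeding one shared team picture. The cadences here are arbitrary assumptions purely for illustration.

```python
# Toy model of three OODA loops running at different speeds and
# feeding one shared team picture. Cadences are arbitrary assumptions.

from collections import Counter

CADENCE = {"near": 1, "mid": 3, "far": 9}  # each loop closes every N ticks

def run(ticks):
    """Count how many times each loop closes within `ticks` time steps."""
    team_picture = Counter()
    for t in range(1, ticks + 1):
        for loop, every in CADENCE.items():
            if t % every == 0:
                team_picture[loop] += 1  # one observe-orient-decide-act cycle done
    return team_picture

print(dict(run(9)))  # → {'near': 9, 'mid': 3, 'far': 1}
```

By the time the slow loop closes once, the fast loop has iterated many times; that asymmetry is what buys the slow, durable control the time it needs.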
In our case the special operations theory worked well for the moment, but our activities brought conventional success for the longer term as well: an antifragile approach benefiting from the difficulties, in Windows in the form of whitelisting. Without playing music together with the other "instruments" (ACLs, Group Policies etc.), these systems could have been "owned" by the red team before the preventive control was up. So it is not about the technologies but how you use them.
A similar relative superiority approach was taken in a totally different place, when a large network was infected. We noticed that after the initial intrusion the installed malware exploited a specific security application (yes, security products are software as well; and since it exploited the application and not the operating system, the working directory was the application's own installation directory: observe and use this). It dropped a binary into the application's directory, to be executed by the application for further access and functionality. Since no patch or fingerprints existed (essentially a 0-day for the exploit, and maybe we were the first victim, patient zero, for the malware), we created an empty file with the same name on all the machines and set a deny access control list on it. The exploit still succeeded, but its payload was not able to write its binary to disk, and since the real binary never existed, there was nothing to execute, essentially containing the malware slowly.
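The mechanics of that trick can be shown with a small model: the defender pre-creates the payload's expected filename with a deny-write ACL, so the dropper's write fails and execution finds only an empty stub. The filenames and the ACL model here are illustrative assumptions, not details of the real case.

```python
# Toy model of the empty-file-plus-deny-ACL containment trick.
# Paths, payload bytes and the ACL model are illustrative assumptions.

class Node:
    def __init__(self, content=b"", deny_write=False):
        self.content = content
        self.deny_write = deny_write

fs = {}  # a pretend filesystem: path -> Node

def defender_precreate(path):
    """Plant an empty file with a deny-write ACL at the dropper's path."""
    fs[path] = Node(content=b"", deny_write=True)

def dropper_write(path, payload):
    """The malware's attempt to drop its binary; blocked by the deny ACL."""
    node = fs.get(path)
    if node is not None and node.deny_write:
        return False          # access denied: the payload never lands
    fs[path] = Node(content=payload)
    return True

def execute(path):
    """Running the file achieves nothing useful: it is still empty."""
    node = fs.get(path)
    return node is not None and len(node.content) > 0

defender_precreate("C:/SecApp/update.exe")
dropped = dropper_write("C:/SecApp/update.exe", b"\x4d\x5a...")
print(dropped, execute("C:/SecApp/update.exe"))  # → False False
```

The exploit itself is not stopped; only its follow-on step is, which is exactly the relative superiority point: a cheap, fast move that denies the attacker's next move.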
The above activity (ad-hoc hardening) adapted to the problem at hand on the fly, gaining relative superiority in order to take the systems under control and then apply longer-term conventional protection. (Yes, if the malware had worked from memory, or created random filenames etc., the response would have been different. But this is exactly why observation matters, and especially orientation by skilled and experienced people, in deciding how to proceed with action in this specific environment.)
So, when we look at conventional things, or react to incidents only when we see them, we focus on what is visible and available (which often is nothing more than randomness, and yet we wrap a narrative around it, as Nassim Taleb notes), and it may not be easy to see how the situation should actually be approached with both high and low beam lights on. Relative superiority may be gained for the specific issue at hand, but the actual response should involve strategic thinking to handle the operational (tactical in military terms) issue effectively; otherwise you are running in place. In the Blue5 case, the team shaped the environment to its advantage by getting inside the attackers' decision cycles, and thus controlled time and effectively reduced the need to be fast.
This also brings up the need to look at the environment (the theatre where things happen) through the principles of what you want to achieve. For example: make things obscure for attackers, so they need to figure things out and can potentially be noticed; slow them down; force them to generate noise, which helps detection. After gaining relative superiority over the attack, look at things holistically and manage the issue, either by closing the hole that allowed the entry or by creating stricter detection, etc. Do not go through the tactical-level (operational in the civil world) high frequency activity trap only, such as patching or changing passwords, unless these contribute to the principles and goals. This also means that any hardening guidelines are a guide only: do not copy, adapt. Much like Olaus Petri's guidelines for judges, which tell judges what the law is for.
That being said and written, one thought could be: how is this relevant when a clear win may not be achievable, since the attacker controls the how, when, what and where? How then not to lose? Is the attacker's centre of gravity the same as the defender's, when the defender may not even have seen the attacker, either from not knowing how attacks work technically and coordination-wise, or from being in the uncanny valley with thinking disturbed by cognitive biases? This is where experience, battle-hardening, counts. Not everyone faces the same experiences, so they do not follow Gertrude Stein's "a rose is a rose is a rose", and that makes practising to work in ambiguity, the fog of cyber so to speak, a necessity for short and long term impact, and for Fingerspitzengefühl.