Cyber Intelligence and the Imaginary Other
Edward Luttwak, a respected military strategist and historian who has been called "the Machiavelli of Maryland," was recently interviewed by Harry Kreisler on UCTV's Conversations With History about his book "The Rise of China vs. the Logic of Strategy".
During his interview, Luttwak described a phenomenon he calls "Great State Autism": great states focus so much attention on internal matters that they lack the situational awareness of the world around them that comparably developed smaller states possess. This is a problem, says Luttwak, because such insularity teaches a state nothing about dealing with people of other cultures.
I've created a short clip from the one-hour interview below. In it, Luttwak provides examples of how the U.S. and China engage in a kind of internal autism (the U.S. trying to turn Afghanistan into Sweden, and Chinese ministers who referenced Sun Tzu as they fought each other).
At approximately the 6:00 mark, Luttwak speaks about the imaginary other, a phenomenon that applies to the arcane world of cyber threat intelligence in general and to CrowdStrike's Danger Close report in particular.
This is a disaster if you ever engage in conflict. Because you are not aware of the other, what you do is that you imagine an “other” that suits you. The general says I’m going to go left and so the enemy, under my attack, will withdraw. In other words you impersonate yourself and the enemy.
Imagining an unknown "other" in the world of cyber intelligence is not only common, it has become a profit center for many companies, including CrowdStrike. You investigate an attack, look for common technical indicators, slap a name on it like Fancy Bear or APT28, and call it an adversary. You then sell that report to your commercial customers and government agencies as "intelligence" without ever knowing the identities of the people involved or who, if anyone, was paying them.
This lack of “ground truth” was pointed out by malware researchers Eric Nunes, Nimish Kulkarni, and Paulo Shakarian in their excellent paper “Cyber Deception and Attribution in Capture-the-Flag Exercises”:
We note that none of the previous work on cyber-attribution leverages a data set with ground truth information of actual hacker groups — which is the main novelty of this paper.
CrowdStrike's "Danger Close: Fancy Bear Tracking of Ukrainian Field Artillery Units" is a perfect example of performing cyber attribution with zero ground truth and, as Luttwak described, of creating an imaginary "other" rather than making the effort to actually speak with people who know the facts on the ground.
Here's a clip from Dmitri Alperovitch's interview on PBS Newshour where he promotes a narrative that has almost no basis in fact.
“Essentially this Ukrainian military officer built this app for his Android phone that he gave out to his fellow officers to control the settings for the Howitzer artillery pieces that they were using. The Russians actually hacked that application, put their malware in it, and that malware reported back the location of the person using the phone.”
CrowdStrike built an entire scenario out of thin air about how that app must have been used by actual soldiers who unknowingly sent location data to the GRU, which then used it to pinpoint the positions of Ukraine's D-30 Howitzers and destroy up to 80% of them.
The company never contacted the Ukrainian officer who developed the app in an effort to determine if he or any of his fellow officers or soldiers were using infected devices. In fact, no one has found a single infected app in use by the Ukrainian military and the officer has called the report “delusional” (source).
The company never contacted the International Institute for Strategic Studies (IISS) to ask questions about the IISS's projections of Ukrainian military losses, choosing instead to simply run with a grossly inflated 80% loss rate that the IISS has disavowed (source) and that Ukraine's Ministry of Defense has condemned (source).
Last week, Focal Point issued its own, much more thorough report on the Android malware sample that CrowdStrike provided, and pointed out what would be needed to prove a link between the malware and the D-30 losses suffered by Ukraine's military:
“Some reports have attempted to link this malware with strikes on Ukrainian Artillery. We believe that any attempt to draw a direct link between the existence of the POPRD30 malware and Ukrainian military loss would be unfounded.
“Let’s suppose POPRD30 provided precise locational data, we would still need evidence of the following to draw a link:
(1) The trojanized application was installed on an Artillery service member’s phone.
(2) That service member had the infected phone in the field near a piece of artillery equipment.
(3) The phone sent the location of the service member, and thus a piece of equipment, to the C2 service.
(4) A corresponding strike occurred at that location.
“We know from CrowdStrike’s analysis that the original benign application was available in late April 2013 and that the trojanized application first appeared in December of 2014. We also know that up to 9,000 service members downloaded the benign application. However, there has been no public reporting revealing how many Android devices, if any, contained the infected application and much less information on specific phones or tablets that were infected. This directly challenges (1).
“Without knowing who was carrying an infected phone, there is no good way to obtain information for (2) or (3), unless there was some kind of sensor network deployed scanning for network traffic associated with this malware around that time, or the attacker’s C2 infrastructure was being monitored at the time of infection or later. No public data has been released suggesting this was the case.
“Regarding (4), while it is likely possible to find data on where strikes occurred on Ukrainian artillery, without having any information on what location data was sent to the attacker, we really cannot say that the strike occurred due to the trojanized application.
“To further muddle the issue, the first known instance of the trojanized application appeared in December 2014, while tactical strikes occurred between July and August 2014, roughly 3 months before release of the malware. While it is possible that the malware was distributed before December 2014, there is no public evidence suggesting such is the case.
“In conclusion, the fact that the malware only provides referential data is in some ways irrelevant, as there is still an astounding lack of evidence establishing a causal link between the malware and strikes on Ukrainian artillery. Lack of knowledge with regard to the success of the attack (number of infected devices) and the date of the malware’s release, only serve to further obscure the issue.”
Both CrySys Lab (here and here) and Focal Point have published independent analyses of CrowdStrike's sample, but it wasn't easy to get done. The sample was hosted on VirusTotal, which requires a minimum membership fee of $300/month in order to download sufficient data to run an independent analysis; a cost that many independent researchers cannot afford.
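For those who do have paid access, retrieving a sample for independent analysis is a single API call. Here is a minimal sketch against VirusTotal's v3 file-download endpoint (which requires a premium API key); the sample bytes, hash, and key below are placeholders, and the actual network request is left commented out:

```python
import hashlib

# A sample on VirusTotal is identified by its SHA-256 hash.
# (Placeholder bytes here; in practice you would already know the hash.)
sample_bytes = b"placeholder"
sha256 = hashlib.sha256(sample_bytes).hexdigest()

# VirusTotal API v3 file-download endpoint (premium tier only).
url = f"https://www.virustotal.com/api/v3/files/{sha256}/download"
headers = {"x-apikey": "YOUR_VT_API_KEY"}  # placeholder key

print(url)

# The real download would look like:
#   import requests
#   resp = requests.get(url, headers=headers)
#   resp.raise_for_status()
#   with open(f"{sha256}.bin", "wb") as f:
#       f.write(resp.content)
```

The point is not that the call is complicated; it's that the paywall in front of it is what keeps most independent researchers from checking vendor claims.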
CrowdStrike was not cooperative with early requests from this author or others, and it still stands behind the fiction it has perpetrated upon the press, the public, and the House Intelligence Committee.
While CrowdStrike is currently the most egregious offender when it comes to irresponsible intelligence analysis, the entire industry needs to formally institute a process of peer review and malware sharing, similar to what Brendan Dolan-Gavitt proposed in 2014. The lack of ground truth regarding threat actors, combined with the market incentives attached to nation-state attribution claims and an industry reluctant to speak out against its own, makes peer review an absolute necessity.