Formalizing Cyber Threat Intelligence Planning: Part V

Published in SecureSet Command Line · 8 min read · Oct 24, 2018

Part 5 of 10: Intelligence Preparation of the Environment (IPE); Evaluate the Adversary by Chris Ruel, Full-Time Instructor at the SecureSet Denver campus.

“The enemy gets a vote.” — Military Maxim

In Step 3, the analyst determines the degree to which adversaries pose a threat. The analyst does this by identifying and evaluating an adversary’s characteristics to create threat models. These models, when combined with the terrain analysis conducted in Steps 1 and 2, will provide the insight needed to determine the adversary’s Courses of Action (COA) in Step 4. Step 3 — Evaluate the Adversary — can be broken down into five sub-tasks:

i. Identify Adversary Characteristics

ii. Create Adversary Model

iii. Identify Adversary Capabilities

iv. Adversary Template

v. Adversary Capabilities Statement

If done properly, Adversary COAs developed in the next step will reflect what the adversary is able to do in similar situations. Failure to perform this step properly can result in a lack of intelligence for planning, surprise actions by the adversary, or wasted resources directed against a threat that doesn’t exist.

“Threat” and “adversary” are often used interchangeably. For the purposes of these posts, “threat” should be taken to mean the possibility of trouble, danger, or ruin, whereas “adversary” should be understood as the thinking actor behind the threat. In other words, the adversary is what poses the threat. A threat can also be expressed by the following equation:

Threat = Capability + Intent + Knowledge

Capability: Possessing the tools and resources to execute a given operation.
Intent: Possessing the desire or motivation to execute a given operation.
Knowledge: Possessing the skills and proficiency to execute a given operation.

Knowledge is disproportionately more important than the other two factors when considering what makes an adversary capable of posing a threat, mainly because most of the tools and resources needed to carry out an attack are freely and easily accessible (think of the suite of tools available in Kali Linux). In Part 4 — Describe the Effects, we discussed known adversaries. One way to narrow down the list of potential adversaries is to measure each against the definition of a threat: adversaries that don’t meet all three factors can be discarded.
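This triage can be applied mechanically. The sketch below treats the threat equation as a three-way conjunction and filters a candidate list accordingly; the group names and attribute values are hypothetical, not real assessments.

```python
# Sketch: triage candidate adversaries against Threat = Capability + Intent + Knowledge.
# An adversary must demonstrate all three factors to remain on the threat list.
# Group names and attribute values are illustrative, not real assessments.

from dataclasses import dataclass

@dataclass
class Adversary:
    name: str
    capability: bool  # has the tools and resources for the operation
    intent: bool      # has the desire or motivation to execute it
    knowledge: bool   # has the skills and proficiency to execute it

    def is_threat(self) -> bool:
        # All three factors are required; missing any one disqualifies.
        return self.capability and self.intent and self.knowledge

candidates = [
    Adversary("Group A", capability=True, intent=True, knowledge=True),
    Adversary("Group B", capability=True, intent=False, knowledge=True),
    Adversary("Group C", capability=True, intent=True, knowledge=False),
]

threats = [a.name for a in candidates if a.is_threat()]
print(threats)  # only adversaries meeting all three factors survive triage
```

In practice each factor would be an assessed judgment with a confidence level rather than a boolean, but the filtering logic stays the same.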

i. Identify Adversary Characteristics: The military model looks at 11 broad characteristics (composition, disposition, strength, combat effectiveness, doctrine and tactics, support and relationships, electronic technical data, capabilities and limitations, current operations, historical data, and miscellaneous data). To simplify and make the characteristics more relevant, I’ve narrowed it down to seven.

  • Composition: The identification and organization of an adversary. Composition should describe how an entity is organized and equipped, as well as how it is controlled. Line-wire diagrams are often useful. Look at this generic example from Kaspersky of a typical Russian-speaking cybercriminal group.

For another example look at Mandiant/FireEye assessment of APT1.

  • Effectiveness: Describes the ability and persistence of the adversary. How good are they at achieving their goals? With the difficulty of attribution and the covert nature of cyber activities in general, it can be difficult to judge effectiveness. As a result, effectiveness is often assumed or inferred. An analyst can examine factors such as: manpower (APT1 is believed to have hundreds or even thousands of “hackers”), training, efficiency/quality of leadership, past performance, morale, discipline, or even national character.
  • Doctrine and Tactics: Doctrine refers to the adversary’s accepted organization and employment principles, while tactics refer to the adversary’s conduct of operations. Not much formal doctrine exists in the cyber domain (outside of nation-state cyber warfare), but looking at previous attack vectors described in reports from CTI companies like FireEye, AlienVault, and ThreatConnect is a decent substitute.
  • Support and Relationships: The adversary’s adoption of a COA will depend on its support system and relationships. Support systems could include sources of funding and willingness on the part of leadership to accept political risk. Relationships refer to the dynamics between the key stakeholders. Russian cybercriminals, for example, can operate with impunity because there is no risk of retribution (as long as their activities occur abroad). With knowledge of these factors, analysts can better evaluate effectiveness and tactics.
  • Capabilities and Limitations: Military doctrine refers to Capabilities (Big C) as broad COAs and supporting operations that the enemy can take to achieve its goals and objectives. A better definition is the one used in The Diamond Model of Intrusion Analysis, where capabilities (little c) are “the tools and/or techniques of the adversary and include[s] all means to affect the [target].”

An example of a Diamond Model from ThreatConnect:

Limitations can include time available and a mandate to avoid collateral damage or discovery in addition to conventional ones such as funding and access to technology.

  • Modus Operandi (MO): An adversary’s MO is the aggregate of their past performance. It differs from tactics in that it draws on historical examples and is not target-agnostic. Compare:
  • Tactic: Adversary uses rudimentary phishing campaigns to spam multiple accounts in the hope of getting a victim to provide login credentials on a hoax site.
  • MO: Adversary uses rudimentary phishing campaigns targeting government security contractors operating under NAICS Code 561612 with the goal of obtaining login credentials to extract client information.
  • Miscellaneous Data: This is a catch all for supporting information that can add context to the analysis. Miscellaneous data can include personality profiles (this is rarely ever known), cultural idiosyncrasies, or internal processes and politics.

ii. Create Adversary Model: The Adversary Model attempts to accurately portray how the adversary normally conducts operations, based on how they have acted in the past combined with the knowledge obtained in the previous step. Many models for existing adversaries have already been created, but if faced with a new adversary, an analyst may have to build one from scratch. The Adversary Model is a three-part framework designed to assist in the development of the Situational Template in Step 4.

  • Convert Adversary Doctrine/Tactics to Graphics: Adversary templates graphically portray how the adversary might utilize its capabilities to perform the functions required to accomplish its objectives. An excellent example of this is the Activity Attack Graph described in the Diamond Model of Intrusion Analysis. The Activity Attack Graph has the added benefit of being organized along the seven phases of the Lockheed Martin Cyber Kill Chain.
  • Describe the Adversary’s Options: The options should be a description of the adversary’s preferred tactics, listed to provide more information alongside the graphics. The options should be listed by phase and mention what happens if the operation succeeds or fails at each one. Timelines, if feasible, should be included. This prevents the model from becoming a snapshot in time.
  • Identify High Value Targets (HVT): The best way to identify HVTs is with Center Of Gravity (COG) Analysis and a CARVER (Criticality, Accessibility, Recuperability, Vulnerability, Effect, Recognizability) matrix. I will cover both of these topics in depth in Post 7. For more information on COG Analysis, check out this Pocket Guide by the RAND Corporation and the article Think Like a Green Beret: The CARVER Matrix, by Mark Miller, or Using CARVER To Identify Risks and Vulnerabilities by RedTeams.net.
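As a preview of the CARVER matrix covered in depth in Post 7, the sketch below shows the mechanics: each candidate target is scored on the six criteria (commonly on a 1–5 scale) and the totals rank the targets. The targets and scores here are invented for illustration.

```python
# Sketch of a CARVER matrix: score each candidate target on the six criteria
# (Criticality, Accessibility, Recuperability, Vulnerability, Effect,
# Recognizability), then rank targets by total score.
# Targets and scores below are invented for illustration.

CRITERIA = ["C", "A", "R", "V", "E", "R2"]  # R2 = Recognizability

targets = {
    "domain controller": {"C": 5, "A": 2, "R": 4, "V": 3, "E": 5, "R2": 4},
    "public web server": {"C": 3, "A": 5, "R": 2, "V": 4, "E": 3, "R2": 5},
    "hr file share":     {"C": 2, "A": 3, "R": 2, "V": 3, "E": 2, "R2": 3},
}

def carver_total(scores: dict) -> int:
    # Sum the six criterion scores; higher totals rank higher as HVTs.
    return sum(scores[c] for c in CRITERIA)

ranked = sorted(targets.items(), key=lambda kv: carver_total(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {carver_total(scores)}")
```

Some variants use a 1–10 scale or weight the criteria; the ranking mechanic is the same either way.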

iii. Identify Adversary Capabilities (Big C): “Adversary Capabilities are options and supporting operations that the adversary can take to influence [the outcome of] friendly missions.” Start with the full set of Capabilities and narrow them down based on the factors of MATTAC (Mission, Adversary, Terrain, Time, Assets, Customer). Avoid overstating the adversary’s capabilities; you want the model to be as realistic as possible. The list that the analyst prepares should be in the form of concise statements:

“The adversary is capable of prolonged DDOS attacks”

“APT1 can conduct around-the-clock offensive operations against multiple targets”

Or if you’re on the Red Team and the Adversary is the Blue Team defenders:

“Adversary network uses Deep Packet Inspection and advanced heuristics to detect malware”

iv. Adversary Template: The Adversary Template is a graphic that depicts the preferred deployment pattern of the adversary when not constrained by the environment. It shows how the adversary prefers to use its Capabilities to accomplish its objectives. Using the Diamond Model, a close approximation would be an Activity Thread.
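One lightweight way to record an Adversary Template in data is to key the adversary’s preferred actions to the seven Kill Chain phases, which is also how an Activity Thread is organized. In the sketch below, the phase names follow the Lockheed Martin Cyber Kill Chain; the actions themselves are hypothetical.

```python
# Sketch: an Adversary Template as preferred actions keyed to the seven
# Lockheed Martin Cyber Kill Chain phases (the same phases along which an
# Activity Thread in the Diamond Model is organized). Actions are hypothetical.

KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command-and-control", "actions-on-objectives",
]

adversary_template = {
    "reconnaissance": ["harvest employee emails from public sites"],
    "weaponization": ["embed macro dropper in a decoy document"],
    "delivery": ["spear-phishing email to targeted staff"],
    "exploitation": ["macro executes when the document is opened"],
    "installation": ["persist via scheduled task"],
    "command-and-control": ["HTTPS beacon to attacker infrastructure"],
    "actions-on-objectives": ["stage and exfiltrate client data"],
}

# Walk the template in phase order, e.g. to narrate an Activity Thread.
for phase in KILL_CHAIN:
    for action in adversary_template.get(phase, []):
        print(f"{phase}: {action}")
```

Keeping the template phase-keyed makes the Step 4 work easier: overlaying it on the terrain analysis amounts to asking, phase by phase, how the environment constrains each preferred action.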

v. Adversary Capabilities Statement: The statement is a brief narrative of the preferred tactics the adversary will employ to accomplish its objectives. The narrative should go by phases.

Developing Adversary Models can be a laborious process. Once the models are built, however, they require only periodic review and updating. The better the models, the better prepared the analyst will be when conducting Step 4 — Determine Adversary COA. Knowing the adversary’s COA is essential if decision makers are to come up with an effective strategy.

Something to consider is the major difference between the physical domain and the cyber domain. In the cyber domain, you may never be sure who the enemy is. As a result, when developing COAs, you may elect to treat each COA as a different potential adversary. Remember that even the best models can be wrong because the adversary, after all, gets a vote.

References:

  1. The Diamond Model of Intrusion Analysis
  2. ATP 2–01.3 — Intelligence Preparation of the Battlefield/Battlespace
  3. “Use offense to inform defense. Find flaws before the bad guys do” Winterfeld — SANS Institute
  4. FM 34–130 — Intelligence Preparation of the Battlefield
  5. https://redteams.net/redteaming/2013/using-carver-to-identify-risks-and-vulnerabilities
  6. https://loadoutroom.com/13821/green-berets-and-the-carver-matrix/
  7. https://www.rand.org/content/dam/rand/pubs/tools/TL100/TL129/RAND_TL129.pdf
  8. https://www.threatconnect.com/blog/diamond-dashboard-hunting-your-adversaries/
  9. https://www.fireeye.com/content/dam/fireeye-www/services/pdfs/mandiant-apt1-report.pdf
  10. https://securelist.com/russian-financial-cybercrime-how-it-works/72782/

If you’re just joining us for this series, you can catch-up by reading Formalizing Cyber Threat Intelligence Planning: Part I, Part II, Part III and Part IV.

Christopher Ruel is a Full-Time Instructor at the SecureSet Denver Campus. He teaches Cyber Threat Intelligence as well as Strategy and Analysis. Chris is an Army Special Forces Officer with years of operational experience overseas. He has also worked closely with the Intelligence Community in pursuit of US strategic objectives. He has earned a BA in history, as well as an MBA with a concentration in Business Analytics.
