Quantitative Risk Assessment Using FAIR
Every organization must consider risk. Alongside growth activities such as research and development and marketing, risk belongs on the agenda, because unmanaged risk invites chaos. This is especially true in the digital era, when technology has become the backbone of the business.
When technology becomes the backbone of the business, businesses take on the responsibility of considering technology-related risks. But, as people say, that is easier said than done: technology changes rapidly, and at the same time a business's controls over technology are rarely mature enough to capture all the technology-related events needed for risk assessment.
Looking at the risk assessment frameworks that existed before the one we are going to discuss, two kinds were available (see https://www.fairinstitute.org/frequently-asked-questions):
- Checklist methodologies (e.g., PCI, ISO, BITS). These help an organization understand the gaps in its controls, and can also be used to benchmark against other organizations or against the checklist itself.
- Capability Maturity Model (CMM) methodologies (e.g., SSE-CMM). These help an organization evaluate the quality of a process or set goals.
Unfortunately, neither provides a risk assessment expressed in numbers that would help management decide priorities.
Talking About FAIR
Now there is a solution for that need: the FAIR framework.
Before we dig into FAIR itself, there is an ontology we must understand in order to see how FAIR helps us quantify risk.
Risk is constructed from two things: Loss Event Frequency (LEF) and Loss Magnitude (LM). LEF is how often a loss event happens, and LM is its impact in terms of money. This information is then displayed using charts such as heatmaps or scatterplots, from which we can easily examine the results and make decisions.
If we cannot decide the value of a factor like LEF directly, we can derive it from its underlying components. For example, if we cannot define the LEF value directly, we can derive it from Threat Event Frequency (TEF) and Vulnerability. Likewise, if we cannot estimate TEF or Vulnerability directly, we can derive those values from their own underlying components according to the ontology.
Keep in mind that, in a nutshell, risk is just the combination of how likely a loss event is to happen and the cost of that loss. As long as we can decide both values, we are good to go.
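The derivation above can be sketched in a few lines of code. This is an illustrative simplification, not the full FAIR calculation: the function names and all the numbers are my own, and real FAIR analyses work with ranges and distributions rather than single point values.

```python
# Hypothetical sketch of deriving Loss Event Frequency (LEF) from its
# underlying FAIR factors when LEF cannot be estimated directly.
# Names and numbers are illustrative, not prescribed by the standard.

def loss_event_frequency(tef: float, vulnerability: float) -> float:
    """LEF = Threat Event Frequency (events/year) x Vulnerability
    (the probability that a threat event becomes a loss event)."""
    return tef * vulnerability

def annualized_risk(lef: float, loss_magnitude: float) -> float:
    """Risk expressed as an expected annual loss: LEF x Loss Magnitude."""
    return lef * loss_magnitude

# Example: 50 threat events/year, 10% of which succeed, $20,000 per loss.
lef = loss_event_frequency(tef=50, vulnerability=0.10)   # 5 loss events/year
risk = annualized_risk(lef, loss_magnitude=20_000)       # $100,000/year
```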
Risk Analysis Process
Okay, FAIR has been introduced at a glance; now we are going to use these concepts in an analysis.
The general flow is as follows. First, we define scenarios around the event. Next, we decide the values of the FAIR factors from the ontology above. After that, we consult the client on whether our estimates make sense in their working environment. For example, if the employees have no cybersecurity knowledge and have never been involved in cybercriminal activity, we can assume that the loss event frequency is not 100 events per day (36,500 events per year). That is what expert estimation means. This step matters because we will later use a Monte Carlo engine to simulate every scenario we have prepared, and from the results for each scenario we can decide the urgency of the possible risk.
We can build scenarios from four dimensions: the asset, the threat community, the threat type, and the threat effect.
Keep in mind that after we create the scenarios, the list can often be slimmed down with deeper analysis: for example, by asking whether the threat community's capability is actually sufficient, or whether the asset is really valuable enough to encourage malicious activity.
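A scenario built from those four dimensions can be captured in a minimal record like the one below. The field names and example entries are my own illustrations, not terminology mandated by FAIR.

```python
from dataclasses import dataclass

# Illustrative only: a minimal record for a FAIR risk scenario,
# built from the four dimensions mentioned above.

@dataclass
class Scenario:
    asset: str             # e.g. "customer database"
    threat_community: str  # e.g. "external cybercriminals"
    threat_type: str       # e.g. "malicious" vs. "error"
    threat_effect: str     # e.g. "confidentiality", "integrity"

scenarios = [
    Scenario("customer database", "external cybercriminals",
             "malicious", "confidentiality"),
    Scenario("payroll system", "privileged insiders",
             "error", "integrity"),
]
```

Enumerating scenarios as structured records like this makes the later slimming-down step easier, since each dimension can be filtered on directly.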
FAIR Factors & Expert Estimation
After defining the scenarios, we define the FAIR factors and the expert estimates. This is when numbers are added to each scenario. FAIR factor estimation covers frequencies such as LEF and TEF, and difficulty values where necessary. When expert estimation comes into play, subjectivity needs to become objective: experts estimate the values of the FAIR factors such as LEF, TEF, and LM. Yes, this may look qualitative, since the values come from expert opinion, but the framework assumes the experts really are experts who can provide accurate values with sufficient precision. And even when "the experts" are not that expert, the book also teaches ways to validate, or at least increase confidence in, the values.
We are working under a lot of uncertainty because of the lack of data and the immaturity of the environment. That is why we need ranges. Always use a range when deciding a value, because the point of measurement is not to pin down the exact value but to reduce the uncertainty, narrowing the pool of possible numbers.
Before we move to the next step, do not forget to DOCUMENT YOUR ASSUMPTIONS for accountability later.
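In practice, an expert's range is usually given as three points (minimum, most likely, maximum) and turned into a probability distribution. The sketch below uses Python's built-in triangular distribution as a simple stand-in for the (beta-)PERT distribution that FAIR tools typically use; the TEF numbers are made-up expert estimates.

```python
import random

# A sketch of sampling from an expert's three-point range estimate.
# random.triangular is a simple stand-in for a PERT distribution;
# the min / most-likely / max values below are invented examples.

tef_min, tef_mode, tef_max = 2.0, 10.0, 50.0  # threat events per year

random.seed(42)  # fixed seed so the draws are reproducible
samples = [random.triangular(tef_min, tef_max, tef_mode)
           for _ in range(10_000)]

# Every draw stays inside the expert's stated range.
assert all(tef_min <= s <= tef_max for s in samples)
```

Working with a distribution over the range, rather than a single number, is what lets the Monte Carlo step in the next section explore the whole pool of plausible values.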
Monte Carlo Engine
After the expert estimation, we feed the inputs into a Monte Carlo simulation, using a computer application, to get information about the risk. The Monte Carlo engine simulates every possible outcome from the numbers we supplied.
From the results, we get the potential loss cost for the worst case, the most likely case, and the best case, along with plenty of other information for every scenario we can think of. With this information we can compare the scenarios against one another and finally decide our priorities on a stronger basis.
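The engine described above can be sketched as a toy simulation. This is only a minimal illustration: real FAIR tooling uses calibrated PERT distributions and richer loss models, and all the input ranges here are invented. The "best case" and "worst case" are taken as the 10th and 90th percentiles, which is one common convention, not the only one.

```python
import random
import statistics

# Toy Monte Carlo engine for a single FAIR scenario: sample LEF and
# Loss Magnitude from expert ranges, multiply to get an annual loss,
# and summarize the resulting distribution. Purely illustrative.

def simulate_annual_loss(lef_range, lm_range, iterations=10_000, seed=0):
    """lef_range / lm_range are (min, most_likely, max) triples."""
    rng = random.Random(seed)
    losses = []
    for _ in range(iterations):
        # random.triangular takes (low, high, mode)
        lef = rng.triangular(lef_range[0], lef_range[2], lef_range[1])
        lm = rng.triangular(lm_range[0], lm_range[2], lm_range[1])
        losses.append(lef * lm)
    losses.sort()
    return {
        "best_case": losses[int(0.10 * iterations)],   # 10th percentile
        "most_likely": statistics.median(losses),
        "worst_case": losses[int(0.90 * iterations)],  # 90th percentile
    }

result = simulate_annual_loss(
    lef_range=(1, 4, 12),               # loss events/year: min, mode, max
    lm_range=(5_000, 20_000, 100_000),  # $ lost per event: min, mode, max
)
```

Running the same simulation for each scenario on the list yields comparable loss figures, which is exactly what makes prioritization across scenarios possible.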
That is a glance at the fundamental theory of the FAIR risk assessment framework. I learned a lot reading this book; you can also check the official website (https://www.fairinstitute.org/). Thank you for your time, and I hope it helps, whether a little or a lot. To GOD be all the glory! Soli Deo Gloria.