Bayesian Cyber Risk Quantification With Industry-Specific Models

The problem of cyber risk quantification for the security, finance, and insurance ecosystems

Tower Street
Jun 20, 2019


Cyber risk modeling consistently presents researchers with two major challenges: the data available for modeling is incomplete and sparse, and cyber risk requires rich models to capture the potential impact of events across different industries.

Cyber risk modeling with small, sparse data requires modeling concepts from advanced statistics rather than deep neural networks trained on large, dense data. Today’s cyber risk modelers often work around the data issue by using simple average-cost-per-record models for data breach losses, despite the well-known problems with this approach [1]. Another approach is to limit the large variance in the data by reducing the scope of study to a specific subset of events, e.g. Ponemon focuses only on small breaches (fewer than 100k records affected).
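To make the heavy-tail problem concrete, here is a toy sketch (illustrative numbers only, not drawn from any real dataset) of how a single average-cost-per-record figure can mislead when per-record costs are heavy-tailed:

```python
# Illustrative only: shows why a single average-cost-per-record figure is
# misleading when breach costs are heavy-tailed, as argued in [1].
# The lognormal parameters below are made up for demonstration.
import numpy as np

rng = np.random.default_rng(0)

records = 50_000                          # hypothetical breach size
per_record_cost = rng.lognormal(mean=1.0, sigma=2.5, size=100_000)  # heavy-tailed

mean_cost = per_record_cost.mean()        # what an "average cost per record" model uses
median_cost = np.median(per_record_cost)  # what a typical breach actually looks like

print(f"mean per-record cost:   ${mean_cost:,.2f}")
print(f"median per-record cost: ${median_cost:,.2f}")
print(f"mean-based loss estimate:   ${mean_cost * records:,.0f}")
print(f"median-based loss estimate: ${median_cost * records:,.0f}")
# With sigma=2.5 the mean is dominated by rare extreme draws, so the
# mean-based estimate lands an order of magnitude above the typical loss.
```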

Companies are forced to disclose consumer data breaches, but there is a lack of metadata about the incidents, especially around the state of the company’s security at the time of the incident. Companies are not compelled to disclose how the attack happened or what the financial impact was. Most industries are more concerned about incidents resulting in business interruption, and until recently companies didn’t have to make these events public, even when the interruption was visible to end users. To address these issues, we have built a dataset that includes which CIS security controls were affected and what losses the company suffered.

Being industry-specific because you need the granularity

Although the folks modeling cost per record, per industry might be missing richness in their models, they are right to look at properties on a per-industry basis. Different industries have different business models, different security threats against their assets and profits, and different kinds of downstream losses after cyber incidents.

Our approach leverages industry-specific scoping and subject matter expertise (SME). Careful analysis of the regulatory landscape and historical incidents, combined with financial expertise, allowed us to effectively map out the potential losses that can occur after a cyber incident.

Being industry-specific requires a strong SME time investment; a full-time, two-person SME team (one for accounting, one for security) will invest a month or more building out a single industry-specific loss taxonomy. We’ve developed what we call the loss graph: our internal SMEs worked with additional industry experts to map out more than 50 loss scenarios (and growing), which are grouped into the following nine loss areas (a sketch of how such a taxonomy might be represented in code follows the list):

  • Replacement or Upgrade of Faulty Controls
  • Determination of Cyber Event Extent
  • Review of Necessary Legal & Regulatory Action
  • Notification, Credit Monitoring, Credit Restoration
  • Regulatory & Vendor Fines & Penalties
  • Third Party Litigation & Damages
  • Loss of Gross Profit
  • Contributing Costs
  • Extraordinary Costs
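To give a feel for what such a taxonomy looks like in code, here is a hypothetical sketch; the scenario names, control mappings, and structure are invented for illustration and are not our actual loss graph:

```python
# Hypothetical sketch of how a loss taxonomy like the one above could be
# represented; the scenario names and control mappings are illustrative only.
from dataclasses import dataclass, field

@dataclass
class LossScenario:
    name: str
    loss_area: str               # one of the nine areas listed above
    triggered_by: list[str] = field(default_factory=list)  # e.g. failed CIS controls

LOSS_GRAPH = [
    LossScenario("PCI reassessment after card data breach",
                 loss_area="Regulatory & Vendor Fines & Penalties",
                 triggered_by=["CIS 3", "CIS 13"]),
    LossScenario("Consumer credit monitoring",
                 loss_area="Notification, Credit Monitoring, Credit Restoration",
                 triggered_by=["CIS 13"]),
    LossScenario("Lost sales during outage",
                 loss_area="Loss of Gross Profit",
                 triggered_by=["CIS 10", "CIS 11"]),
]

def scenarios_for_control(control: str) -> list[str]:
    """Return the loss scenarios that a given failed control can trigger."""
    return [s.name for s in LOSS_GRAPH if control in s.triggered_by]

print(scenarios_for_control("CIS 13"))
```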

We are working with the security community in a similar way, building pre-configured security assessment tooling sets for streamlined, industry-specific onboarding into security risk quantification and underwriting.

Being Bayesian because you need the practicality

At Tower Street, we utilize Bayesian GLMs for the frequency model and a set of calibrated loss distributions for the severity/loss model. These two models in conjunction represent our end-to-end cyber risk quantification model. There are two important pieces of missing historical information that are fundamental for cyber risk modeling and lead us to a Bayesian approach: information about the historical security posture of companies, and the losses incurred after a cyber incident.
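As a simplified illustration of how a frequency model and a severity distribution combine into an annual loss estimate, here is a Monte Carlo sketch; the Poisson rate and lognormal parameters are placeholders, not our calibrated values:

```python
# Minimal sketch of combining a frequency model with a severity distribution
# via Monte Carlo; the annual rate and lognormal severity parameters are
# placeholders, not calibrated values.
import numpy as np

rng = np.random.default_rng(42)

annual_rate = 0.3               # expected incidents per year (frequency model output)
sev_mu, sev_sigma = 13.0, 1.8   # lognormal severity parameters (illustrative)

n_sims = 100_000
annual_losses = np.zeros(n_sims)
for i in range(n_sims):
    n_events = rng.poisson(annual_rate)
    if n_events:
        annual_losses[i] = rng.lognormal(sev_mu, sev_sigma, size=n_events).sum()

print(f"expected annual loss: ${annual_losses.mean():,.0f}")
print(f"99th percentile (1-in-100-year) loss: ${np.percentile(annual_losses, 99):,.0f}")
```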

The practical way to attack this problem of missing security posture and loss information is to build a Bayesian model, which lets you use as much historical data as you can and initialize the rest of the model through elicitation from a panel of experts. Then you get out there and start underwriting based on your model. These models allow (a) expert initialization in the form of informed priors, and (b) new data, in the form of evidence, to update the model later and yield a new posterior distribution; the model becomes more empirical over time as you observe more security assessments and downstream losses.
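Here is a stripped-down illustration of (a) and (b) using a conjugate Gamma-Poisson model for incident frequency; our actual frequency models are Bayesian GLMs with security-posture covariates, but the prior-to-posterior mechanics are the same in spirit:

```python
# Simplified illustration of (a) expert-informed priors and (b) posterior
# updates, using a conjugate Gamma-Poisson model for annual incident
# frequency. All numbers are invented for demonstration.
import numpy as np

# (a) Expert-informed prior: experts believe roughly 0.5 incidents/year,
#     expressed as Gamma(shape=2, rate=4)  ->  prior mean = 2/4 = 0.5.
prior_shape, prior_rate = 2.0, 4.0

# (b) New evidence: observed incident counts across 6 portfolio-years.
observed = np.array([1, 0, 2, 1, 0, 1])

post_shape = prior_shape + observed.sum()   # conjugate update
post_rate = prior_rate + len(observed)

print(f"prior mean rate:     {prior_shape / prior_rate:.3f} incidents/year")
print(f"posterior mean rate: {post_shape / post_rate:.3f} incidents/year")
# As more assessments and incidents are observed, the data term dominates
# the prior and the model becomes increasingly empirical.
```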

The process of converting expert knowledge into numerical estimates is called expert knowledge elicitation, and there is a large body of research focused on designing the most effective elicitation methodology for a given family of models [2]. A problem as big as cyber risk is far too complex for any single expert, so we reduce the scope of the analysis to separate elements for which expert inputs can be incorporated effectively.
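A common elicitation pattern is to ask an expert for a median and a value they are 95% sure a loss will not exceed, then back out distribution parameters. A minimal sketch, with invented numbers and a lognormal assumption:

```python
# One common elicitation pattern [2]: ask an expert for a median and a
# "95% sure it won't exceed" value for a loss, then back out lognormal
# parameters. The numbers below are invented for illustration.
import numpy as np
from scipy import stats

elicited_median = 250_000      # expert's central estimate of the loss ($)
elicited_p95 = 2_000_000       # expert: "95% sure the loss stays under this"

mu = np.log(elicited_median)                       # lognormal median = exp(mu)
sigma = (np.log(elicited_p95) - mu) / stats.norm.ppf(0.95)

prior = stats.lognorm(s=sigma, scale=np.exp(mu))
print(f"mu={mu:.2f}, sigma={sigma:.2f}")
print(f"implied mean loss: ${prior.mean():,.0f}")
print(f"implied 99th percentile: ${prior.ppf(0.99):,.0f}")
```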

There have been many books written on this topic; probably the most well known is Douglas Hubbard’s book How to Measure Anything (and the follow-up How to Measure Anything in Cybersecurity Risk).

The Pain of Historical Data

Historical community-driven breach datasets are incomplete and sparse, but at the same time there is a lot of publicly available information that can be aggregated to fill these holes and provide a better foundation for modeling. Most of the publicly available data comes from attorney general websites, where companies are compelled to disclose consumer data breaches, as well as from data aggregators. However, a lot of information about each incident is typically lacking, and aggregators often cover only data breaches, not other types of attacks.

We’ve written up the adventure of building a breach dataset with automated aggregation and human annotation. We built a human annotation toolchain from scratch, with a custom app just for the annotators, layered on top of a scalable scraping system and a data aggregation pipeline that uses a bit of natural language processing to ensure quality aggregation of records.
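As a toy illustration of the aggregation step, here is a sketch that fuzzy-matches organization names and disclosure dates to merge records describing the same incident; the real pipeline is considerably more involved, and the records below are fabricated examples:

```python
# Toy version of the record-aggregation step described above: merge breach
# records from different sources that refer to the same incident. The real
# pipeline uses NLP and human annotation; this sketch only fuzzy-matches
# organization names and compares disclosure dates.
from difflib import SequenceMatcher
from datetime import date

records = [
    {"org": "Acme Health Systems", "disclosed": date(2018, 3, 1), "source": "state AG"},
    {"org": "ACME Health Systems Inc.", "disclosed": date(2018, 3, 4), "source": "aggregator"},
    {"org": "Globex Retail", "disclosed": date(2018, 7, 19), "source": "state AG"},
]

def same_incident(a: dict, b: dict, name_threshold: float = 0.85, max_days: int = 14) -> bool:
    """Heuristic: similar org name and disclosure dates within two weeks."""
    name_sim = SequenceMatcher(None, a["org"].lower(), b["org"].lower()).ratio()
    return name_sim >= name_threshold and abs((a["disclosed"] - b["disclosed"]).days) <= max_days

merged: list[list[dict]] = []
for rec in records:
    for cluster in merged:
        if same_incident(cluster[0], rec):
            cluster.append(rec)
            break
    else:
        merged.append([rec])

print(f"{len(records)} raw records -> {len(merged)} unique incidents")
```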

Using this approach of aggregation and annotation, we gain two major advantages: a larger number of records in one dataset and more metadata in each record. The largest currently publicly available dataset has around 9.5K incidents, while we have gathered information about more than 30K incidents. We also annotate more detailed metadata about security controls and losses, which allows us to perform richer modeling.

The Rise of Inside-out Assessments, Active Security Testing and GRC Platforms

The most easily obtainable security data comes from outside-in assessments. These are insufficient for larger companies, where infrastructure complexity and deception tactics leave outside-in assessments with a limited view. In our experience, the predictive power of this data decreases as companies get larger and more complex.

Our security team, advisors, and insurance partners also cautioned us from the beginning that outside-in data is of limited or no value for larger companies, and that the only real security posture information comes from legitimate inside-out testing of controls.

Most insurers today rely on a simple security questionnaire when assessing a client’s security posture. While asking some questions is good, it’s important to actually test the controls and conduct a gap analysis. Over the past few years, this missing feedback loop in security has been filled by terrific new startups that can continuously test the effectiveness of a company’s security program and the state of its security controls; we’d especially draw your attention to SafeBreach and AttackIQ.

Increased compliance pressure is forcing companies to undertake regular security audits that usually boil down to a single questionnaire. Companies trying to escape “Excel hell” are quickly moving towards more mature solutions in the form of GRC platforms (e.g. LogicGate, Reciprocity) and third-party risk assessment platforms (e.g. CyberGRX). This again provides a great opportunity for streamlining more rigorous risk assessment: we’ve already demonstrated a 20-minute enterprise-grade risk assessment by ingesting existing questionnaires and quickly filling in anything else needed for risk quantification.

References

[1] Edwards, B., Hofmeyr, S., Forrest, S. (2015). Hype and heavy tails: A closer look at data breaches. Workshop on the Economics of Information Security.
[2] O’Hagan, A. et al. (2006). Uncertain Judgements: Eliciting Experts’ Probabilities.
