Attack Work Effort: Transparent Accounting for Software in Modern Companies

Daniel Bilar
6 min read · Nov 24, 2016

--

TL;DR: By Divine Providence, Cyber-ITL’s recently developed software risk scoring metric seems information-rich enough to usefully capture a modern company’s exposure to software. This transparency has far-reaching economic consequences for valuation, risk and market confidence. The CITL findings are consistent with my position that vulnerabilities are *created* by dynamics in adversarial environments, rather than inherent and merely discovered. Practical advice derived from this for the buy-side (security offerings’ value proposition), SSDLC (3rd-party closed software) and OSD (US NDAA Section 1647) closes out this post.

Software is famously ‘eating the world’, i.e. modern companies are increasingly software-driven. 20th-century accounting concepts have not kept up. Financial accounting is supposed to aid transparency for stakeholders (investors, clients, market signals) and achieve visibility into internals in order to gauge a company’s present ‘health’ and future ‘viability’ (via ratios such as debt management, liquidity, ROE, ROA, operating profit, inventory turnover, etc.). If you know nothing about financial accounting, spend $3 on this e-book.

Within that traditional framework, the modern company’s technology stack (shorthand: its software) is opaque: a company’s software make-up, its use and its exposure are not adequately captured by classic balance sheets and profit/loss statements.

Technology stack of modern software-driven companies. You will be very hard-pressed to find anything in traditional accounting that reflects the tech-stack-induced health, risk and predictive viability of such a company. Pic from slide 55 of NIST SP500–320, with thanks to @Aristot73 for sending this my way :D

Distortions due to software opacity

This software opacity has far-reaching consequences, from company valuation (how valuable is IP if the company’s software substrate is insufficient to protect it? A reported 48% customer exodus follows a breach?) to investor confidence (can the company handle an adaptive attacker landscape? The cost-internalizing game-changer that is ransomware?) to calculating risk and insurance premiums.

Sarah and her husband Peiter (@dotMudge) have been developing a software risk scoring scheme (binaries, not source code) with associated metrics. You should definitely watch the video. As a service to the community, I screenshotted and uploaded the slides (Cyber-ITL). There are many astounding nuggets in the talk (Anaconda-type visibility alone is worth an extended discussion, which I may do, bli neder, i.e. no promises).

Slide 16 in the Black Hat 2016 presentation “Measuring adversary costs to exploit commercial software” (CITL)

This hardening line metric seems to be information-rich enough for economic arbitrage in 0day vulnerability markets (see Mudge’s comments in the video at around 7:45). From my understanding, the hardening line correlates well with attacker work effort.

The hardening line’s apparent ability to arbitrage the 0day market is remarkable enough that it got me thinking and writing this post.
I say ‘apparent’ because BitSight-type companies, together with Kenna Security-type companies, may in time provide another proof of the pudding. I have heard of beginnings. Another post, sometime.

I suggest using this CITL hardening line metric to define a notion of software transparency based on attacker work effort. The static analysis component of this metric alone seems powerful enough to address residual (“unknown unknown”) software risk.
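To make the idea concrete, here is a minimal sketch of how a static, checksec-style hardening check on a binary could be rolled up into a single attacker-work-effort proxy. To be clear: the feature names and weights below are my own illustrative assumptions, not CITL’s actual methodology or scoring.

```python
# Illustrative only: a toy hardening score as a proxy for attacker work effort.
# Feature names and weights are my own assumptions, not CITL's methodology.

HARDENING_WEIGHTS = {
    "aslr_pie": 3.0,        # position-independent executable enables ASLR
    "nx_stack": 2.0,        # non-executable stack/heap
    "stack_canary": 2.0,    # stack-smashing protection
    "full_relro": 1.5,      # read-only GOT after relocation
    "fortify_source": 1.0,  # checked libc string/memory functions
}

def hardening_score(features: dict) -> float:
    """Weighted fraction of hardening features present, in [0, 1]."""
    total = sum(HARDENING_WEIGHTS.values())
    present = sum(w for name, w in HARDENING_WEIGHTS.items()
                  if features.get(name, False))
    return present / total

# Example: flags as a checksec-style tool might report them for some binary.
example = {"aslr_pie": True, "nx_stack": True, "stack_canary": False,
           "full_relro": False, "fortify_source": True}
print(f"hardening score: {hardening_score(example):.2f}")  # -> 0.63
```

The point is not the particular weights; it is that such a score is computable from the shipped binary alone, with no cooperation from the vendor.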

Being able to usefully define software in terms of attacker work effort may underpin the medium- to long-term transparency revisions needed in the financial accounting of modern software-driven companies. A US regulatory precedent is the Securities Act of 1933 and the Securities Exchange Act of 1934, which led to the establishment of Generally Accepted Accounting Principles (GAAP).

Increase in Work Effort as Measurable Value-Add for Infosec Offerings

While such accounting changes will likely lead to more efficient markets in general (like the 0day market, iff non-distorted), there is an additional benefit to measuring software in terms of attacker work effort: for some classes of infosec services and product offerings, this allows for a quantifiable (even testable) value-add proposition. Presently, the vast majority of security companies cannot give robust quantitative estimates of product/service effectiveness or resistance vis-à-vis attackers. The best one can hope for, in my experience, is some red team testing.

The opaqueness may be deliberate, for it allows for marketing-driven pie-in-the-sky valuations, bubbles, and the inevitable bursting when overblown assurances meet the ground truth of everyday threats.

I suggest we use the same metric to quantify the effectiveness of security control offerings: by what factor does a given offering increase attacker work effort vis-à-vis an org’s protection goals x, y, z?
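One way to operationalize that question, as a sketch under my own assumptions (the work-effort estimates would have to come from a CITL-style measurement; the numbers here are made up):

```python
# Illustrative sketch: express a security control's value-add as the factor by
# which it multiplies estimated attacker work effort, per protection goal.

def work_effort_multiplier(effort_without: float, effort_with: float) -> float:
    """Factor by which a control increases estimated attacker work effort."""
    return effort_with / effort_without

# Hypothetical estimates (e.g. attacker hours, or any consistent unit)
# for three protection goals x, y, z, before and after deploying the control.
goals = {
    "x": (10.0, 40.0),
    "y": (25.0, 30.0),
    "z": (5.0, 50.0),
}

for goal, (before, after) in goals.items():
    print(f"goal {goal}: work effort x{work_effort_multiplier(before, after):.1f}")
```

A vendor that can honestly fill in the “before” and “after” columns has a testable value proposition; one that cannot is selling assurances.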

As a personally satisfying corollary, the CITL findings are consistent with my long-held heretical position regarding vulnerabilities in adversarial environments:

Such vulnerabilities are *created*, not discovered, as a function of adversarial and environmental drivers. Vulnerability density is no more an inherent feature of software than a diamond lattice is an inherent feature of carbon.

Three Pieces of Parting Practical Advice

Sarah and Peiter’s software risk scoring scheme can assist other people, units and missions as well:

Consumer Information Bulletin #1: If you are on the buy-side of security products for an org, ask for proof of the pudding, as eaten. When evaluating security offerings, ask the vendor whether an insurance company was willing to lower cyber-risk insurance premiums when their product was used. This can give you a rough value-add proxy for security offerings.

Consumer Information Bulletin #2: If you are on the SSDLC team of an org and have to evaluate closed-source 3rd-party software, you have until now mostly been out of luck. The Cyber-ITL methodology (it works on binaries, not source code), once released, will make evaluating such software possible and contribute to a unified framework. Your org’s risk people are going to love it, too.

Consumer Information Bulletin #3: @DeptofDefense, the budget and timeline for your charge to evaluate cyber vulnerabilities in US military systems are very tight, and success under these conditions is far from assured.

After inventorying, Sarah and Mudge’s hardening line (attacker work effort) is a faster, sounder, more transparent and cheaper (open) way to rank severity and prioritize remediation efforts than anything big defense contractors are likely to bill you for under a cost-plus scheme (CPFF/CPAF/CPIF, etc.). As an aside, it may be instructive to check out the innovative procurement approaches at GSA’s 18F.
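As a sketch of what “rank and prioritize after inventorying” could look like with such a metric (the system names and scores below are hypothetical, reusing the toy hardening score from earlier):

```python
# Illustrative: rank an inventory of systems by a hardening / work-effort score
# (lower score = softer target = higher remediation priority). Scores are made up.

inventory = {
    "logistics-web-frontend": 0.35,
    "base-hvac-controller": 0.10,
    "payroll-batch-binary": 0.55,
    "comms-gateway": 0.25,
}

remediation_order = sorted(inventory, key=inventory.get)
for rank, system in enumerate(remediation_order, start=1):
    print(f"{rank}. {system} (score {inventory[system]:.2f})")
```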

Reactions

Twitter user @Aristot73 has unearthed an excellent NIST report from a July 2016 “Workshop on Software Measures and Metrics to Reduce Security Vulnerabilities” bemoaning the current state of metrology (it inexplicably lists UL’s CAP but not the Cyber-ITL in related work). The goal of the workshop: “Come up with the best ideas to use metrics to dramatically reduce software vulnerabilities in three to five years.”

David Slater’s presentation is worth pondering for a pessimistic, contrarian view. He raises a point similar to one I often make about metrics: you need to make very clear (to clients, to management) how your numbers are to be interpreted. For many reasons, people prefer and will expect ratio-type metrics, which until Cyber-ITL came along seemed out of reach.

Twitter user @scriptjunkie1 makes a valid point about transparency.

CITL release schedule from slide 45

Work Effort: The Ignored Dual of Vulnerabilities

Attacker work effort is the virtually ignored dual of vulnerability attack surface and cyber threat assessments. If you do not believe me, have a look at this 1996–2016 survey of security metrics and identify the fewer than a handful that attempt to do so (hint: roughly one every ten years).

Cryptanalysis is a notable exception, as is the password-makeup ‘sweet spot’. More recently, “moving target defense” does so out of value-proposition necessity (see e.g. Evans 2011). Not to pick on anyone in particular:

For a good current example of taking work effort into account, see this BIRS talk estimating the cost of generic quantum pre-image attacks on the SHA2–256 and SHA3–256 hash families.

Under an ideal quantum-computing black-box model, Grover’s algorithm gives you a quadratic speedup over exhaustive search. The researchers wanted to estimate a tighter bound on the cost of running Grover under a particular parametrization of the cryptosystem. It turns out that for a SHA2/3 256-bit hash function, such a quantum computer would need about 2¹⁶² operations, rather than the ideal 2¹²⁸. The 2³⁴ (roughly 17 billion) work-factor overhead is due to state preparation costs on an error-correcting, surface-code-based quantum computer.
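A quick back-of-the-envelope check of those numbers:

```python
# Sanity check of the quoted Grover numbers for a 256-bit pre-image search.
ideal_grover_ops = 2 ** (256 // 2)   # idealized black-box Grover: 2^128 queries
estimated_ops = 2 ** 162             # cost estimate quoted from the BIRS talk
overhead = estimated_ops // ideal_grover_ops
print(overhead == 2 ** 34)           # True
print(f"{overhead:,}")               # 17,179,869,184, i.e. ~17 billion
```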

My wife says I have to go to Thanksgiving. The Torah mandates that in all matters except legal halachic ones a man has to defer to his wife. I’ll write more later.

Two last quick things:

1. This is not the first time Mudge has made breakthroughs. I bet that, dollar for dollar, his DARPA I2O Cyber Fast Track program produced the highest ROI in terms of practical software. Additionally, it gave underserved people (non-standard hacker types, people without college degrees, alternative lifestyles, etc.) a real opportunity to show off their skills and goods. That’s my type of America.

The Cyber Fast Track (CFT) program sought revolutionary advances in cyber science, devices, and systems through low-cost, quick-turnaround projects. To achieve this, CFT engaged a novel performer base many of whom were new to government contracting. From August 2011 to April 2013 the program attracted 550 proposal submissions, of which 90 percent were from performers that had never previously worked with the government, and awarded 135 contracts.

2. I should have waited to see the presentation before opining. Ivan @4Dgifts was right. General lesson (re-)learned.
