Attack Work Effort: Transparent Accounting for Software in Modern Companies
TL/DR: By Divine Providence, Cyber-ITL’s recently developed software risk scoring metric seems information-rich enough to usefully capture a modern company’s exposure to software. This transparency has far-reaching economic consequences for valuation, risk, and market confidence. The CITL findings are consistent with my position that vulnerabilities are *created* by dynamics in adversarial environments, rather than inherent and discovered. Practical advice derived for the buy-side (security offerings’ value proposition), SSDLC (3rd-party closed software), and OSD (US NDAA section 1647) closes out this post.
Software is famously ‘eating the world’, i.e. modern companies are increasingly software-driven. 20th-century accounting concepts have not kept up. Financial accounting is supposed to aid transparency for stakeholders (investors, clients, market signals) and achieve visibility into internals in order to gauge a company’s present ‘health’ and future ‘viability’ via ratios such as debt management, liquidity, ROE, ROA, operating profit, inventory turnover, etc. (If you know nothing about financial accounting, spend $3 on this e-book.)
Within that traditional framework, the modern company’s technology stack (‘software’ for short) is opaque: a company’s software make-up, its use, and its exposure are not adequately captured by classic balance sheets and profit/loss statements.
Distortions due to software opacity
This software opacity has far-reaching consequences, from company valuation (how valuable is IP if the company’s software substrate is insufficient to protect it? 48% of customer exodus is reported to follow a breach) to investor confidence (can the company handle an adaptive attacker landscape? the cost-internalizing game-changer that is ransomware?) to calculating risk and insurance premiums.
Sarah and her husband Peiter @dotMudge have been developing a software risk scoring scheme (binaries, not source code) with associated metrics. You should definitely watch the video. As a service to the community, I screenshotted and uploaded the slides (Cyber ITL). There are many astounding nuggets in the talk (Anaconda-type visibility alone is worth an extended discussion, which I may do, bli neder).
This hardening line metric seems information-rich enough for economic arbitrage in 0day vulnerability markets (see his comments in the video at around 7:45). From my understanding, the hardening line correlates well with attacker work effort.
I suggest using this CITL hardening line metric to define a notion of software transparency based on attacker work effort. The static analysis component of this metric alone seems powerful enough to address residual (“unknown unknown”) software risk.
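As a toy illustration of what a static hardening check can measure, consider scoring a binary by the compile-time mitigations it ships with. To be clear, the feature names and weights below are my own assumptions for the sketch, not CITL’s actual methodology:

```python
# Toy hardening score: weight the compile-time mitigations a binary ships with.
# Feature names and weights are illustrative assumptions, not CITL's metric.

HARDENING_WEIGHTS = {
    "nx": 1.0,            # non-executable stack/heap
    "pie": 2.0,           # position-independent executable (enables full ASLR)
    "stack_canary": 2.0,  # stack-smashing protection
    "full_relro": 1.5,    # read-only GOT after relocation
    "fortify": 1.0,       # _FORTIFY_SOURCE-checked libc calls
}

def hardening_score(features: dict) -> float:
    """Return a 0..1 score: weighted fraction of mitigations present."""
    total = sum(HARDENING_WEIGHTS.values())
    present = sum(w for name, w in HARDENING_WEIGHTS.items() if features.get(name))
    return present / total

# A binary built with only NX and stack canaries:
print(hardening_score({"nx": True, "stack_canary": True}))  # 0.4
```

The point of even a crude score like this is comparability: two binaries (or two releases of the same product) land on the same scale, which is what makes the metric usable for transparency and arbitrage.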
Being able to usefully define software in terms of attacker work effort may underpin the medium- to long-term transparency revisions needed in the financial accounting of modern software-driven companies. A regulatory US precedent is the Securities Act of 1933 and the Securities Exchange Act of 1934, which led to the establishment of the Generally Accepted Accounting Principles (GAAP).
Increase in Work Effort as Measurable Value-Add for Infosec Offerings
While such accounting changes will likely lead to more efficient markets in general (like the 0day market, iff non-distorted), there is an additional benefit to measuring software in terms of attacker work effort: for some classes of infosec security services and product offerings, this allows for a quantifiable (even testable) value-add proposition. Presently, the vast majority of security companies cannot give robust quantitative estimates of product/service effectiveness or resistance vis-à-vis attackers. The best one can hope for, in my experience, are some red team tests.
The opaqueness may be deliberate, for it allows for marketing-driven pie-in-the-sky valuations, bubbles, and the inevitable bursting when overblown assurances meet the ground truth of everyday threats.
I suggest we use the same metric to quantify the effectiveness of security control offerings: by what factor does a given offering increase attacker work effort vis-à-vis the org’s protection goals x, y, z?
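A minimal sketch of what such a quantified value-add claim could look like, with invented numbers (e.g. estimated exploitation hours from before/after red-team exercises):

```python
# Sketch: express a control's value-add as a multiplier on attacker work effort.
# All numbers are invented for illustration.

def work_effort_multiplier(baseline_hours: float, hardened_hours: float) -> float:
    """Factor by which a control increases estimated attacker work effort."""
    return hardened_hours / baseline_hours

# e.g. exploitation took ~40h before deploying the control, ~320h after:
print(work_effort_multiplier(40, 320))  # 8.0
```

A vendor claiming “our product makes attacks 8x more expensive against goals x, y, z” is a falsifiable statement in a way that “military-grade protection” is not.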
Such vulnerabilities are *created*, not discovered, as a function of adversarial and environmental drivers. Vulnerability density is no more an inherent feature of software than a diamond lattice is an inherent feature of carbon.
Three Pieces of Parting Practical Advice
Sarah and Peiter’s software risk scoring scheme can assist other people/units/missions as well:
Consumer Information Bulletin #1: If you are on the buy-side of security products for an org, ask for proof that the pudding was eaten. When evaluating security offerings, ask the vendor whether an insurance company was willing to lower cyber-risk insurance premiums for customers using their product. This gives you a rough value-add proxy for security offerings.
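To make the premium proxy concrete, here is a back-of-the-envelope calculation with made-up numbers (your org’s actual premium and the insurer’s discount are the inputs that matter):

```python
# Sketch: use an insurer's premium discount as a crude value-add proxy.
# All numbers are invented for illustration.

def premium_delta_proxy(annual_premium: float, discount_pct: float,
                        product_cost: float) -> float:
    """Net annual value proxy: premium savings minus annual product cost."""
    return annual_premium * discount_pct - product_cost

# $200k premium, 15% discount if the product is deployed, product costs $25k/yr:
print(premium_delta_proxy(200_000, 0.15, 25_000))  # 5000.0
```

The insurer’s discount is interesting precisely because it is an actuarial third party pricing the product’s effect on risk, not the vendor’s marketing department.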
Consumer Information Bulletin #2: If you are on the SSDLC team of an org and have to evaluate closed-source 3rd-party software, you were up to now mostly out of luck. This Cyber-ITL methodology (it works with binaries, not source code), once released, will make evaluating such software possible and contribute to a unified framework. Your org’s risk people are going to love it, too.
Consumer Information Bulletin #3: @DeptofDefense, the budget and timeline for your charge to evaluate cyber vulnerabilities in US military systems are very tight, and success under these conditions is far from assured.
After inventorying, Sarah and Mudge’s hardening line (attack work effort) metric is a faster, sounder, more transparent, and cheaper (open) way to rank severity and prioritize remediation efforts than anything big defense contractors are likely to bill you for under a cost-plus scheme (CPFF/CPAF/CPIF, etc.). As an aside, it may be instructive to check out the innovative procurement approaches at GSA 18F.
Twitter user @Aristot73 has unearthed an excellent NIST report from a July 2016 “Workshop on Software Measures and Metrics to Reduce Security Vulnerabilities” bemoaning the current state of metrology (it inexplicably lists UL’s CAP but not the Cyber-ITL in related work). Goal of the workshop: “Come up with the best ideas to use metrics to dramatically reduce software vulnerabilities in three to five years.”
Twitter user @scriptjunkie1 makes a valid point about transparency.
Work Effort: The Ignored Dual of Vulnerabilities
Attacker work effort is the virtually ignored dual of vulnerability attack surface and cyber threat assessments. If you do not believe me, have a look at this 1996–2016 survey of security metrics and identify the fewer-than-a-handful that attempt to do so (hint: every ten years).
For a good current example of taking work effort into account, see this BIRS talk estimating the cost of generic quantum pre-image attacks on SHA2–256 and SHA3–256 hash families.
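The flavor of that accounting, in miniature: Grover’s algorithm reduces a generic pre-image search on an n-bit hash from ~2^n to ~2^(n/2) oracle queries, so attacker work effort is stated in exponents. This is a back-of-the-envelope sketch, not the BIRS talk’s detailed cost model:

```python
# Back-of-the-envelope quantum pre-image cost: Grover search needs ~2^(n/2)
# oracle queries against an n-bit hash, vs ~2^n for classical brute force.

def classical_preimage_queries(n_bits: int) -> int:
    return 2 ** n_bits

def grover_preimage_queries(n_bits: int) -> int:
    return 2 ** (n_bits // 2)

# SHA2-256 / SHA3-256: a quantum attacker's generic work drops to ~2^128 queries.
print(grover_preimage_queries(256) == 2 ** 128)  # True
```

Real cost estimates (like the BIRS talk’s) go much further, counting circuit depth and physical qubits rather than bare query counts, but the query exponent is the starting point.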
My wife says I have to go to Thanksgiving. The Torah mandates that in all matters except legal halachic matters a man has to defer to his wife. I’ll write more later.
Two last quick things:
1. Not the first time Mudge made breakthroughs. I bet, dollar for dollar, his DARPA I2O Cyber Fast Track program produced the highest ROI in terms of practical software. Additionally, it gave underserved people (non-standard hacker types, without college degrees, alternative lifestyles, etc.) a real opportunity to show off their skills and goods. That’s my type of America.
The Cyber Fast Track (CFT) program sought revolutionary advances in cyber science, devices, and systems through low-cost, quick-turnaround projects. To achieve this, CFT engaged a novel performer base, many of whom were new to government contracting. From August 2011 to April 2013, the program attracted 550 proposal submissions, 90 percent of which came from performers that had never previously worked with the government, and awarded 135 contracts.
2. I should have waited to see the presentation before opining. Ivan @4Dgifts was right. General lesson (re-)learned.