When software security flaws can fetch over a million dollars, it is worth examining why building secure software is so difficult.
All our work in security rests on these difficulties, and this article aims to collect the specific challenges inherent in application security so that follow-up articles can offer solutions. It is not meant to be defeatist.
Writing software is hard
Securing software is hard, but even writing it somewhat correctly is difficult. Finding security bugs is a subcategory of finding bugs in software.
We know it is possible for human beings to write flawless software; it is just really hard and often not prioritized. One example optimized for quality and zero bugs is NASA's software for the Curiosity rover, which is fitting because the cost of a bug in that software is very high. djbdns is another.
Testing shows the presence, not the absence of bugs
— Edsger W. Dijkstra, in a 1969 report
If we knew all the vulns we were searching for, it would obviate the need to look in the first place.
The number of vulns in the 100,000 lines of code comprising service $foo is unknown ahead of time. The universe doesn’t give us a yardstick to measure against, there is no “you have found 16/27 vulns, keep looking!” progress bar.
Writing bug-free code is difficult enough; even if we managed it, proving that the code is bug-free is harder still. Proving a negative is hard. NIST has an excellent paper on security and formal verification, which is the state of the art in this area.
The absence of evidence is not necessarily evidence of absence.
Even the process of measuring the full set of human-findable bugs in a codebase varies with who is looking, how, and what they had for lunch.
I ran a trial at Facebook where 10 security consulting companies audited the same code. Code my team had already carefully audited. All 10 found the same pool of shallow bugs (about half of the total), but the remaining issues were all over the map, including one we ourselves had missed. Each person brings their own long tail of security knowledge to bear. Contrast this with something like performance (another attribute of software "quality"), where measuring progress is trivial.
Inconsistent is another way to say probabilistic. Like mining for gold, finding security bugs involves expending effort for a possibility, not a promise, of discovery.
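The audit trial above can be sketched as a toy simulation (all numbers here are hypothetical, chosen only to illustrate the shape of the result): give each bug a discovery probability, shallow bugs high and deep bugs low, and let independent auditors sample them. The shallow pool shows up in nearly every report, while the long tail scatters across auditors.

```python
import random

random.seed(7)  # fixed seed so the toy run is repeatable

# Hypothetical codebase: 10 shallow bugs auditors almost always find,
# 20 deep bugs each auditor finds only rarely.
bugs = [("shallow", 0.9)] * 10 + [("deep", 0.1)] * 20

def audit(bugs):
    """One auditor's report: each bug is found independently with its probability."""
    return {i for i, (_, p) in enumerate(bugs) if random.random() < p}

reports = [audit(bugs) for _ in range(10)]
found_by_all = set.intersection(*reports)  # the shared "shallow" pool
found_by_any = set.union(*reports)         # collective coverage, still incomplete

print(f"found by every auditor: {len(found_by_all)}")
print(f"found by at least one:  {len(found_by_any)} of {len(bugs)}")
```

Even with ten auditors, some deep bugs typically survive every report, and which deep bugs each auditor finds is essentially luck weighted by their particular expertise.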
I sometimes think of product security as a race: a race to find the lurking security flaws before someone else does, someone who would use them for harm. We don't always get to know who our competition is, and it is not FUD to say nation states may be actively racing against us.
Consistently judging the severity of a vuln is hard. We have this discussion every week at work when rewarding bug bounty reports. Is one really good account-takeover bug worth 10 less critical bugs? 100? It's hard for any two security engineers to agree on this, but agreeing is a prerequisite to investing time in the areas that will benefit holistic "security" the most. CVSS is currently the best answer we have for grading the criticality of bugs.
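One reason CVSS is the default answer is that its base score is a deterministic function of a vector string, so two engineers who agree on the metrics get the same number. As a minimal sketch (metric weights and the round-up rule taken from the CVSS v3.1 specification; this covers base scores only, not temporal or environmental ones):

```python
# CVSS v3.1 base-metric weights, per the specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR_U = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required, Scope unchanged
PR_C = {"N": 0.85, "L": 0.68, "H": 0.50}             # Privileges Required, Scope changed
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality/Integrity/Availability

def roundup(x: float) -> float:
    """The spec's Roundup: smallest one-decimal value >= x (with float guarding)."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(vector: str) -> float:
    m = dict(part.split(":") for part in vector.split("/")[1:])  # skip "CVSS:3.1"
    changed = m["S"] == "C"
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15) if changed else 6.42 * iss
    pr = (PR_C if changed else PR_U)[m["PR"]]
    exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] * pr * UI[m["UI"]]
    if impact <= 0:
        return 0.0
    total = impact + exploitability
    return roundup(min(1.08 * total if changed else total, 10))

# A textbook unauthenticated network RCE scores 9.8 (Critical):
print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # 9.8
```

The formula gives consistency, not correctness: it still can't tell you whether that 9.8 account takeover is worth ten 7.8 local bugs to your particular product.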
Despite best efforts and big budgets, we will never be able to deeply review an entire codebase for security flaws. Even if we somehow did, the code would have changed significantly by the time we finished. This isn't fatalism, but fact.
Security often plays out on a longer timeline than you expect. It is possible to harm security with a decision whose consequences won't arrive for years.