Biden cyber strategy: Secure software can no longer be voluntary
Build better software.
If President Joe Biden has his way, that will no longer be a request, a plea, or even an exhortation. It will be a mandate. Or at least a warning with a legal stick behind it.
Among the five major “pillars” of the president’s much-anticipated National Cybersecurity Strategy, released earlier this month, is a call to “shift liability [for cyberattack damages] onto those entities that fail to take reasonable precautions to secure their software […] Companies that make software must have the freedom to innovate, but they must also be held liable when they fail to live up to the duty of care they owe consumers, businesses, or critical infrastructure providers.”
There are other elements of the strategy, including expanded efforts to disrupt cybercriminal activity, modernization of the federal digital infrastructure, and improved global cooperation to achieve “shared goals.”
But the proposed liability shift would clearly be the most disruptive to an industry long accustomed to taking refuge in terms-of-service (TOS) agreements that basically say, buried in pages of dense legalese, that if you buy or use a software product, you also own whatever risk comes with it.
The Biden strategy, if implemented, would end that. It sets a priority on critical infrastructure but ultimately envisions the liability shift spreading throughout the digital world. As the document puts it, “While voluntary approaches to critical infrastructure cybersecurity have produced meaningful improvements, the lack of mandatory requirements has resulted in inadequate and inconsistent outcomes.”
Long past time
That’s a clear call for government regulation of the software industry. Which is just fine with many high-profile cybersecurity experts, who say it’s not only about time, but long past time. Why shouldn’t makers of software products be held liable for flaws in those products, as is the case in most other industries?
While experts agree that consumers should follow security directives like using complex passwords and not clicking links that come from anyone they don’t know, they also agree that the average consumer shouldn’t be expected to be the front line of software security any more than they should be expected to know the inner workings of the safety systems of a car or a major appliance.
“If the only thing keeping something secure is the ability of a user to recognize the legitimacy of an email created by a criminal, then there really is no point in investing in cybersecurity,” said Tim Mackey, head of software supply chain risk strategy within the Synopsys Software Integrity Group.
“Victim blaming is easy. Taking responsibility for the design, implementation, and configuration defaults for software that your company creates is harder. There should be no single preventable point of failure, and if there is a point of failure, the rest of the system should be able to minimize the impact of the breach.”
Bruce Schneier, blogger, author, and chief of security architecture at Inrupt, noted on his blog last week that, “One of the [strategy] provisions getting the most attention is a move to shift liability to software vendors, something I’ve been advocating for since at least 2003.”
Paul Rosenzweig, founder of Red Branch Consulting and a senior advisor to The Chertoff Group, called it “truly revolutionary” on the Lawfare blog.
“One has to admire the ambition,” he added. “Kudos to the Biden administration for putting the issue on the table,” given that for most of the history of the internet, “proposing liability for badly written code or poorly implemented security measures has been the third rail of cybersecurity policy. Touch it and you die.”
That doesn’t make it a done deal, of course. Even if ambitious proposals don’t generate lethal blowback, they also don’t guarantee that anything substantive will happen. A strategy document doesn’t even have the force of a presidential executive order, which in turn doesn’t have the long-term power of congressional legislation since a future president could simply revoke it.
Ambitious, but …
And a number of security analysts noted that while the 39-page document is long on worthy goals, it is short on specifics of how to deal with the hard realities of domestic politics and a global internet that includes dozens of hostile nation states and thousands to millions of bad actors.
The think tank Atlantic Council noted in an analysis of the strategy that while it provides, in often “florid prose,” the “much-needed beginnings of an ambitious shift in U.S. cybersecurity policy, it often falls short on implementation details and addressing past failures.”
Katie Nickels, nonresident senior fellow at the Atlantic Council’s Cyber Statecraft Initiative and one of 11 contributors to the council’s analysis, noted that changing the software liability landscape will require congressional legislation — which is compared to making sausage for a reason. “Establishing any semblance of a definition of ‘reasonable precautions’ is going to be a monumental challenge,” she wrote. “While secure coding practices exist, they are far from standardized across the range of software products and services that exist.”
But Mackey said a presidential strategy document shouldn’t be expected to get into how to execute on goals. “Anyone thinking that a strategy defined by government executives is going to be anything other than aspirational is misguided,” he said. “Leadership at that level needs to set a goal that we can agree represents a desired outcome and then create the climate for success.”
Indeed, most experts agree that government’s role is to say what to do — “stop polluting,” for example — but leave how to do it up to the private sector.
So what are the chances of creating a climate for success? Even that could be a heavy lift, for multiple reasons.
One pillar calls for overhauling the grievously outdated federal digital infrastructure — a laudable goal but massive task. “I don’t think there is any disagreement that raising the bar through modernization is easily one of the most effective defensive measures,” wrote Marc Rogers, CSO of Qnetsecurity, in the Atlantic Council analysis. “However […] the scope of this is somewhat akin to boiling the ocean.”
Another hurdle is politics. Some of the biggest tech companies in the world, while they all rhetorically support better software security, are unlikely to want the feds telling them they have to gut their TOS agreements. As numerous experts say, when it comes to an initiative like this, “the devil is in the details.” And big tech has a large lobbying presence in Congress.
Third, there are no deadlines for any of these pillars or their components to be completed. Mackey said that this is standard. “Implementation of any deadline is outside the purview of a U.S. president,” he said, adding that while Biden’s May 2021 executive order on cybersecurity did have many deadlines, none directly ordered industry to do anything. “It told heads of agencies to perform specific actions and create processes,” he said.
But that means, especially if legislation is required, those processes could extend beyond the term of any president.
Finally, there is the reality that cybercrime adapts and evolves rapidly, while government doesn’t. It’s not that regulation isn’t warranted, but setting and maintaining security standards for software products is not as simple as mandating seatbelts or airbags for cars. You don’t have to patch or update your seatbelt every couple of days or weeks, because highway risks don’t evolve like cyberattacks do.
Don Davidson, director of Cyber Supply Chain Risk Management programs at Synopsys, added that setting security standards for critical infrastructure comes with its own complexities, given that there are 16 sectors that do very different things.
“Each sector has its own unique risk tolerances,” he said. “So one approach might be to establish a few ‘assurance levels,’ with each sector defining how it will manage risk to get to those agreed assurance levels.”
Feasible and achievable
None of those difficulties mean that better software security is a pie-in-the-sky goal, however. It’s probably the most feasible of any of the pillars in the strategy — it’s certainly more achievable than global cooperation on “responsible state behavior in cyberspace.”
The problem with bad software is not that organizations are sitting around wishing they could make it better if only they had the tools. The tools and services are available now. While nothing can make software products perfect, there is no good reason for them to hit the market without having been rigorously tested throughout the software development life cycle (SDLC).
An effective testing protocol that delivers high-quality, secure software throughout the SDLC is now well established:
- Architecture risk analysis and threat modeling can help eliminate design flaws before a team starts to build an application or any other software product.
- Static, dynamic, and interactive application security testing can find bugs or other defects when code is being written or assembled, when it is running, and when it is interacting with external input.
- Software composition analysis (SCA) can help developers find and fix known vulnerabilities and potential licensing conflicts in open source software components.
- Fuzz testing can reveal how the software responds when it is hit with malformed input.
- Penetration testing is designed to mimic hackers, to find weaknesses that remain before software products are deployed.
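To make one of those techniques concrete, here is a toy illustration of the idea behind fuzz testing: generate large volumes of random, often malformed input, feed it to the code under test, and record what crashes. The parser and inputs below are invented for demonstration; real fuzzers (e.g., coverage-guided tools) are far more sophisticated.

```python
import random
import string

def parse_header(line: str) -> tuple[str, str]:
    """Naive parser under test: expects a 'Key: Value' line."""
    key, value = line.split(":", 1)  # raises ValueError if no ':' present
    return key.strip(), value.strip()

def fuzz(iterations: int = 1000, seed: int = 42) -> list[str]:
    """Throw random printable strings at the parser; collect inputs that crash it."""
    rng = random.Random(seed)  # seeded so failures are reproducible
    crashes = []
    for _ in range(iterations):
        line = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(0, 20))
        )
        try:
            parse_header(line)
        except ValueError:
            crashes.append(line)  # any input lacking ':' breaks the parser
    return crashes

crashes = fuzz()
print(f"{len(crashes)} of 1000 random inputs crashed the parser")
```

Even this crude harness exposes a robustness bug (the parser dies on any line without a colon) that a handful of hand-written "happy path" test cases would never hit, which is the core value proposition of fuzzing.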
Nor is there any technological barrier preventing organizations from creating software bills of materials (SBOMs), which help them know whether they’re using a component that needs to be updated. An automated SCA tool can help with that. And Biden’s May 2021 executive order on cybersecurity already calls for federal agencies to be banned from purchasing any software products that don’t have SBOMs. That ban has not yet been implemented but it is reportedly in the works.
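The mechanics of using an SBOM are simple enough to sketch in a few lines: cross-reference each listed component against a feed of known-vulnerable versions. The SBOM structure below loosely follows the CycloneDX convention of a "components" list with name/version entries, and the vulnerability set is invented for illustration; in practice an SCA tool would query a real feed such as the National Vulnerability Database.

```python
# Hypothetical SBOM fragment (CycloneDX-style "components" list).
sbom = {
    "components": [
        {"name": "log4j-core", "version": "2.14.1"},
        {"name": "openssl", "version": "3.0.8"},
    ]
}

# Invented known-vulnerable (name, version) pairs standing in for a real feed.
known_vulnerable = {("log4j-core", "2.14.1"), ("struts2-core", "2.5.25")}

def components_needing_update(sbom: dict) -> list[str]:
    """Return 'name version' strings for SBOM components on the vulnerable list."""
    return [
        f'{c["name"]} {c["version"]}'
        for c in sbom["components"]
        if (c["name"], c["version"]) in known_vulnerable
    ]

print(components_needing_update(sbom))  # flags log4j-core 2.14.1
```

The point is that once the inventory exists, the check is trivial; the hard part, and the thing the executive order pushes on, is getting vendors to produce the inventory at all.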
Another incentive to build and maintain better software is that the strategy proposes a “safe harbor” from liability for organizations that get breached but can show that they met security standards when developing and maintaining their software products and services.
That also needs much more detail. Herb Lin, a research scholar at Stanford University, wrote on the Lawfare blog that “the nature, scope, and extent of such liability all remain to be established.”
But, as Rosenzweig noted, it’s all now on the table, from the office of the president. You can’t get much higher than that.