Report: Software supply chain security remains a low priority

Taylor Armerding
Published in Nerd For Tech
May 20, 2024 · 7 min read

It’s been said many times, but it needs to keep being said: You can’t protect something you don’t know you have. And that cliché (because it’s true) about a chain being only as strong as its weakest link needs to keep being said as well.

Because both apply to the software supply chain, and usually not in a good way. Far too many organizations don’t have complete “visibility” into the software they’re using. As in, they don’t know what they have. And a defect or exploitable vulnerability in any link of that chain — and there are lots of them — means the whole chain is weak.

Those are among the expected but ominous findings in a recent report by the Ponemon Institute titled “The State of Software Supply Chain Security Risks.”

The report, sponsored by Synopsys, is based on a global survey of 1,278 information technology (IT) and IT security practitioners.

Among its results: Only 39% of respondents said their senior leadership is highly committed to reducing the risk of malware in their software supply chains. Only 45% said their organizations have a process for protecting against malicious open source software packages. And only 35% say their organizations produce Software Bills of Materials (SBOMs), which are the best way to get full visibility into the software supply chain.

So, predictably, the survey also found that 54% suffered a software supply chain attack over the past year, 50% took more than a month to respond to an attack, and 20% said their organization is not effective in detecting and responding to these attacks.

No wonder the software supply chain puts so many organizations at risk. We live in a world where software penetrates every element of professional and personal life. If you’re in business, you’re a software company, even if you don’t build it yourself. You use it for just about everything — finances, tracking inventory, human resources, and probably to help make the products you sell.

And ignoring its risks — as 61% of organizational leaders apparently do — opens a massive attack surface for cybercriminals, for several reasons.

First, if you don’t maintain SBOMs — an inventory of every component of the software you’re using — you won’t know about vulnerabilities in those components, including those that have been made public.

That inventory has to go far beyond the components on the “surface” of a software product. You’re also indirectly using anywhere from dozens to hundreds of components, called dependencies, that your product relies on to function correctly.

Tim Mackey, head of software supply chain risk strategy for the Synopsys Software Integrity Group, has frequently explained it using an example of a simple Slack application with an Instagram interface, noting that the app included eight “declared dependencies” — software components that the app needs to run, and that the vendor has listed.

But among those eight is one that has 15 dependencies of its own. And one of those 15 has another 30. “So when you peel back the onion on this, [the app] actually has 133 separate components in it that go eight levels deep,” Mackey said. “The decision to use it means I now have 133 components that are part of my overall supply chain to power this application.”

Also, within those 133 dependencies were “multiple instances of code that had explicit end-of-life statements associated with them,” he said, which means they were no longer going to be maintained or updated and any new vulnerabilities wouldn’t be fixed unless new volunteers showed up to do so.
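The blowup Mackey describes — eight declared dependencies hiding 133 total components — is just a graph traversal. Here is a minimal sketch in Python; the package names and graph are made up for illustration, not taken from the actual Slack app:

```python
# Illustrative sketch: counting transitive dependencies.
# The graph below is hypothetical; a real one would come from a
# dependency scanner or package manager lockfile.

def transitive_deps(graph, root):
    """Return the set of all direct and indirect dependencies of root."""
    seen = set()
    stack = [root]
    while stack:
        pkg = stack.pop()
        for dep in graph.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Hypothetical app: three declared dependencies, one of which pulls in more.
graph = {
    "app": ["lib-a", "lib-b", "lib-c"],
    "lib-a": ["lib-d", "lib-e"],
    "lib-d": ["lib-f"],
}

deps = transitive_deps(graph, "app")
print(len(deps))  # 6 components, only 3 of which were declared directly
```

The same walk over a real dependency graph is what turns "eight declared dependencies" into 133 components, eight levels deep.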

Those are crucial things to know. Not knowing them is equivalent to, or worse than, a vehicle manufacturer not knowing every vendor in the supply chain that makes the airbags it installs. It’s that kind of ignorance that cybercriminals are relying on, for good reason.

Second, even when organizations are trying to do the right thing by obeying the exhortations of security experts to keep their software up to date, danger can lurk in those patches and updates. Hackers aren’t stupid — they know that software users have been told to “install a patch as soon as it’s available!” So what better way to breach targets than by secretly injecting malware into an update?

That’s exactly what happened in late 2020, in the notorious SolarWinds/Orion attack. SolarWinds, which provides system management tools for network and infrastructure monitoring, has an IT performance monitoring system called Orion, and hackers were able to inject malware into an Orion update. So instead of having to hack into thousands of organizations individually, the attackers just compromised Orion and let supply chain connections take care of the rest, giving them access to the data and networks of SolarWinds customers, which included nine federal agencies.

Third, the software supply chain is increasingly complex. For decades it has included three types of code: proprietary, commercial, and open source. Within the past decade, open source components have become the large majority — more than 75% on average — of every codebase, for good reason. It is almost always free and can be modified to suit the needs of developers.

But it is not free of obligation — most open source components have licensing requirements — and its maintenance can range from excellent to spotty to nonexistent, meaning it can have vulnerabilities that will never be fixed.

Yet according to the Ponemon survey, only 48% of respondents said their organizations have a method for approving or forbidding the use of open source dependencies. No quality control, in other words.

And things are now getting even more complex with the increasing use of a fourth type of code — the software industry has joined the artificial intelligence (AI) gold rush with code produced by Generative Artificial Intelligence (GenAI). A majority (52%) of respondents to Ponemon said their development teams are leveraging GenAI to produce code.

This has obvious benefits — AI is more efficient, much faster, and never gets tired. But it also needs vetting and intense supervision, since it has been trained on code created by imperfect humans.

And once again, a large majority of the industry seems to be viewing the GenAI landscape with rose-colored glasses, thinking they can enjoy its benefits and ignore its risks. Only 32% of respondents to Ponemon said their organizations have processes in place to evaluate AI-generated code.

That’s asking for trouble, by offering an even more extensive attack surface for hackers, who are already flocking to exploit its weaknesses. The Open Web Application Security Project (OWASP) has already issued a Top 10 list of the most critical vulnerabilities affecting applications built with large language models (LLMs). A team of more than 125 expert contributors winnowed the list down to 10 from 43 “specific challenges posed by LLMs.”

The good news buried in all this bad news is that none of these problems is unsolvable. While software will never be perfect, and there is no magic tool or button that will make organizations bulletproof, there are well-established ways to become much more difficult targets. And most cyberattackers are looking for easy targets.

Most of it comes down to doing security basics. As Jamie Boote, senior consultant with the Synopsys Software Integrity Group, has put it in the past, “The big scary-sounding zero-day attacks — vulnerabilities with no readily available hotfix or patch — get all the media attention.”

“The reality is that the boring problem of unpatched vulnerabilities and legacy software silently running in critical infrastructure represents a much larger risk,” he said.

The Ponemon report lists several recommended ways to do those basics.

SBOMs are your friends: They should be mandatory. They can give you the visibility you need into your supply chain. They amount to a list of every “ingredient” in a software product. One of the best things about President Joe Biden’s 2021 Executive Order on Improving the Nation’s Cybersecurity is that it calls for every software product sold to a federal agency to come with an SBOM. The information required in those SBOMs includes

  • Supplier name: The name of the entity that creates, defines, and identifies components.
  • Component name: The designation assigned to a unit of software defined by the original supplier.
  • Version of the component: The identifier used by the supplier to specify a change in software from a previously identified version.
  • Other unique identifiers: Any other identifiers that are used to identify a component or serve as a lookup key for relevant databases.
  • Dependency relationship: Information characterizing the relationship that an upstream component X is included in software Y.
  • Author of SBOM data: The name of the entity that creates the SBOM data for this component.
  • Timestamp: The record of the date and time of the SBOM data assembly.
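To make those fields concrete, here is a minimal, illustrative SBOM entry in the CycloneDX JSON style, one of the common SBOM formats. All values are hypothetical, and real SBOMs carry many more fields; see the CycloneDX specification for the full schema:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "metadata": {
    "timestamp": "2024-05-20T00:00:00Z",
    "authors": [{ "name": "Example SBOM Tool" }]
  },
  "components": [
    {
      "type": "library",
      "bom-ref": "pkg:npm/example-lib@2.3.1",
      "name": "example-lib",
      "version": "2.3.1",
      "supplier": { "name": "Example Supplier Inc." },
      "purl": "pkg:npm/example-lib@2.3.1"
    }
  ],
  "dependencies": [
    { "ref": "pkg:npm/example-lib@2.3.1", "dependsOn": [] }
  ]
}
```

Each of the minimum elements above maps onto a field here: supplier name, component name, version, a unique identifier (the `purl`), the dependency relationship, the SBOM author, and the timestamp.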

Of course, SBOMs aren’t magical tools either. They require inspection and maintenance. Ponemon recommends that organizations “compare supplied SBOMs to known malicious packages and malware, conduct a dynamic analysis of a running application and conduct a binary analysis of application dependencies.”

SBOMs also aren’t enough on their own. They’re essential, but not sufficient. Other security basics needed to secure the software supply chain include

Continuously monitor running applications for threats, and your supply chain for newly disclosed vulnerabilities, their risk status, and their severity. Security risks are constantly changing and evolving.

Detect, track, and manage open source dependencies in source code, files, containers, and artifacts. Managing dependencies requires understanding and adhering to the licenses associated with each component. Open source libraries can have vulnerabilities that, if not addressed promptly, may expose the entire project to potential threats. So keep them up-to-date.
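The tracking step above boils down to cross-checking your dependency inventory against advisory data. A minimal sketch in Python — both the dependency list and the advisory data are hypothetical, and a real pipeline would pull advisories from a vulnerability database rather than hardcoding them:

```python
# Illustrative sketch: flagging dependencies with known advisories.
# Package names, versions, and advisory data are all hypothetical.

dependencies = {"example-lib": "2.3.1", "old-parser": "0.9.0"}

advisories = {
    # package -> versions known to be affected (made up for illustration)
    "old-parser": {"0.9.0", "0.9.1"},
}

flagged = {
    pkg: ver
    for pkg, ver in dependencies.items()
    if ver in advisories.get(pkg, set())
}
print(flagged)  # {'old-parser': '0.9.0'}
```

Run on a schedule, a check like this is what turns an SBOM from a static inventory into an early-warning system for newly disclosed vulnerabilities.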

Verify your AI. AI-generated code has significant benefits — increased developer productivity and automated decision-making — but it also brings security risks that require evaluation and assessment. To reap the benefits and mitigate the risks, organizations need processes to evaluate IP risk, security risk, and code quality. Evaluations should be automated because manual evaluations are insufficient and too labor intensive.

Yes, doing all this takes time and money. But it’s an investment that will reduce the risks of the parade of horrors that result from breaches — brand damage, ransom payments, compliance sanctions, and more — which can reach well into the millions. And money not spent is money saved.


I’m a security advocate at the Synopsys Software Integrity Group. I write mainly about software security, data security and privacy.