Zero-day software defects are leading to many very bad days
First the bad news. An analysis from Google Mandiant documents that cybercriminals are getting better at finding so-called “zero-day” vulnerabilities in software products — those discovered by hackers or other third parties before the makers of the affected products know they exist, and for which no patches are therefore available.
Then the worse news. Hackers are also getting much faster at exploiting those vulnerabilities before a patch is available. According to Mandiant, the average time to exploit (TTE) zero-days dropped from 63 days in 2018–19 to five days in 2023 — less than a twelfth of what it was five years earlier.
And then even worse news. Mandiant reports that while its TTE data is “based on reliable observations,” its estimates are conservative because they rely on the first reported exploitation of a vulnerability, which doesn’t mean it was the first time it happened.
“Frequently, first exploitation dates are not publicly disclosed or are given vague timeframes… It is also likely that undiscovered exploitation has occurred. Therefore, actual times to exploit are almost certainly earlier than this data suggests,” according to Mandiant.
Other markers of the trend include:
- “Of 138 vulnerabilities disclosed as actively exploited in 2023, 97 (70.3%) were leveraged as zero-days,” according to Mandiant.
- From 2020 until 2022, the ratio between n-days (fixed flaws) and zero-days (no fix available) remained relatively steady at 4:6, but in 2023, the ratio shifted to 3:7. According to Mandiant, this wasn’t because the number of n-days exploited in the wild declined, but because of an increase in zero-day exploitation and the improved ability of security experts to detect it.
- The number of vendors affected by actively exploited flaws also increased in 2023 to a record 56, up from 44 in 2022 and about 17% more than the previous record of 48 in 2021.
- Even when a patch is available for a vulnerability, some organizations don’t get around to applying it — sometimes for months or even years. According to Mandiant, “patching prioritization is increasingly difficult as n-days are exploited more quickly and in a greater variety of products.”
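Mandiant’s point about patch prioritization can be made concrete: rather than patching strictly by severity score, many teams now rank findings by whether a flaw is already being exploited in the wild. Below is a minimal sketch in Python; the finding records, field names, and ordering criteria are illustrative assumptions, not drawn from the Mandiant report.

```python
# Rank patch work: known-exploited flaws first, then internet-facing
# exposure, then raw CVSS severity. All finding data here is made up.

def priority_key(finding):
    """Sort key: exploitation in the wild outranks severity alone.
    Booleans are negated so True (exploited/exposed) sorts first."""
    return (
        not finding["exploited_in_wild"],
        not finding["internet_facing"],
        -finding["cvss"],
    )

findings = [
    {"cve": "CVE-A", "cvss": 9.8, "exploited_in_wild": False, "internet_facing": False},
    {"cve": "CVE-B", "cvss": 7.5, "exploited_in_wild": True,  "internet_facing": True},
    {"cve": "CVE-C", "cvss": 8.1, "exploited_in_wild": True,  "internet_facing": False},
]

for f in sorted(findings, key=priority_key):
    print(f["cve"])  # CVE-B, then CVE-C, then CVE-A
```

Note that the highest-CVSS finding (CVE-A) lands last here, because the two lower-severity flaws are already being exploited — exactly the trade-off the quote describes.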
There are some pockets of encouraging news amid this deluge of depressing data. It’s not just the bad guys that are getting better at finding zero-days. At the Pwn2Own hacking contest in Ireland last week, ethical hackers collectively won more than $1 million for finding more than 70 zero-day vulnerabilities in what were thought to be fully patched devices.
The Mandiant report notes that the increased capability and aggressiveness of attackers in exploiting both n-day and zero-day vulnerabilities “is pushing defenders to provide efficient detection and response, as well as to adapt to events in real time.”
Mike Lyman, associate principal consultant with the software security company Black Duck (formerly the Synopsys Software Integrity Group) and a contributor to the company’s annual Build Security in Maturity Model (BSIMM) report, said that the report has documented notable improvement in building more-secure software and detection of vulnerabilities among more than 130 companies analyzed.
He said most participants in the BSIMM data pool are gathering attack intelligence more effectively, making it easier for people outside their organization to report vulnerabilities properly, and streamlining internal processes to get those reports to the right people more quickly.
Finding more, but more to find
It’s just that the so-called “attack surface” is expanding even more rapidly.
“We are getting better at finding and fixing things before the software is released, but we are just releasing so much more of it that there is more for hackers to look at,” he said. “Defect density is going down but there is so much more software out there today that the raw number of vulnerabilities being reported in the world hides that fact.”
Sammy Migues, principal at Imbricate Security and one of the primary co-authors of the BSIMM report since its beginning in 2009, agrees. “We’re still rushing headlong into everything being code,” he said. “As cars, bicycles, cameras, houses, buildings, IoT [Internet of Things], medical devices, and everything else becomes code, the attack surface grows by orders of magnitude.”
“As all these items become more interconnected and a monoculture in some ways, a bug that would’ve been fixed in due time in 2015 can now cause a cascade of failures that can affect economies, health, and safety.”
That explosion of “everything is code” points to another hurdle to better security: Modern software products are assembled largely from existing commercial and open source components rather than written from scratch, which means there is a vast software supply chain with vulnerabilities that can lead to the cascade of failures Migues describes.
Indeed, a vulnerability in a common component can affect thousands or even millions of other software projects. Lyman pointed to the disastrous late-2021 vulnerability in the Apache logging library Log4j. “Even if the component is not in such widespread use as Log4j, once the defect is known and researchers are aware of it, it’s not hard to look for the same thing in other software,” he said. “This is driving a huge amount of interest in supply chain risk management.”
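The follow-on hunting Lyman describes is one reason teams increasingly check their dependency manifests against advisory data. A minimal sketch of that check, using a hypothetical advisory format and made-up version pins (real tooling would query an advisory database such as OSV rather than a hard-coded table):

```python
# Check pinned dependencies against known-vulnerable version ranges.
# The advisory entries and pins below are illustrative, not real data.

def parse_version(v):
    """'2.14.1' -> (2, 14, 1), so versions compare as tuples."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical advisory table: package -> (first vulnerable, first fixed)
ADVISORIES = {
    "log4j-core": ("2.0.0", "2.17.1"),
}

def vulnerable(package, pinned):
    """True if the pinned version falls inside a known-vulnerable range."""
    rng = ADVISORIES.get(package)
    if rng is None:
        return False
    lo, fixed = (parse_version(x) for x in rng)
    return lo <= parse_version(pinned) < fixed

pins = {"log4j-core": "2.14.1", "commons-text": "1.10.0"}
flagged = [p for p, v in pins.items() if vulnerable(p, v)]
print(flagged)  # ['log4j-core']
```

The half-open range (vulnerable from `lo` up to but not including the first fixed release) mirrors how most advisory schemas express affected versions.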
And it’s not just that there is more code. Given the potential financial rewards of ransomware and the sale of stolen identity or intellectual property data on the dark web, the number of cybercriminals is expanding as well.
Cat and mouse
“There are thousands of top-notch exploit developers out there, some of whom are on the wrong side of the law,” Migues said. “Their exploits feed an order of magnitude more attackers who build tools, harnesses, and automation that fuel an even bigger set of attackers. I have no doubt that while there’s competition in the space, there’s also co-opetition, trading, and so on.”
Jamie Boote, senior consultant with Black Duck and another contributor to the BSIMM report, agrees that it comes down to both attackers and defenders getting better. “This has always been a cat-and-mouse situation,” he said. “As known exploits are shared, the cognitive friction associated with applying those techniques has also dropped, which will reduce time to exploit.”
Finally, while it hasn’t been measured yet in any definitive way, it is almost certain that cybercriminals are using artificial intelligence (AI) to find vulnerabilities and develop exploits.
“AI is a force enabler and a mentor,” said Boote. “It allows an expert or someone who is competent in their field to refine methods and tweak their approaches. It can also take out the drudgery of doing things like drafting the bulk of an automated exploit script or extrapolating on existing information available.”
So, with those multiple factors contributing to the zero-day problem, what can organizations do to mitigate it? Migues said in some cases it comes down to incentives — security won’t become a top priority until the cost of failing to do it becomes too high.
Security experts have advocated for more rigorous testing of software for decades, he noted, “but organizations feel it’s just not cost-effective at any scale. Doing that for one medical device where the expectation is that it will take several years to get the product to market, and the business is capitalized to live through it is one thing, but doing that for things like a thermostat, house, hotrod car, and so on and on and on is too much.”
Tolerance for shrinkage
Migues said the cost of security vulnerabilities tends to fall into the “shrinkage” category. “Every business has shrinkage, but we think of it mostly as a retail thing. People steal, stuff breaks, rodents eat, nature demolishes, and we have shrinkage. There’s a risk tolerance that every company and industry is willing to absorb. Until shrinkage of revenue from software defects broadly hits unacceptable levels, there won’t be a lot of change.”
Lyman agrees. “BSIMM is seeing software security training on the decline,” he said. “It is expensive in both time and money. We are losing the lessons learned, so common vulnerabilities are likely going to start to slip back into our software.”
Adam Brown, associate managing consultant with Black Duck, is seeing the same thing. He said organizations are using automated software testing tools that can help find vulnerabilities while code is being written (static application security testing), while it is running (dynamic application security testing), and while it is interacting with users (interactive application security testing). But they are spending less time and effort on weaknesses in code that “researchers on the dark side are looking for to build and chain exploits.”
“It’s not that organizations need to do more, better, harder testing,” he said, “it’s that they need to be smarter about how they approach the search for security issues, which ultimately lie in weaknesses upon which vulnerabilities can be built and exploited.”
Security mandates?
One possible way to reverse that trend is for government regulation to force changes to security practices — something Boote said is already under way. In a regulation that took effect in March, the federal Cybersecurity and Infrastructure Security Agency requires any software producers that want to sell to the government to submit a Secure Software Development Attestation Form, guaranteeing that their products meet specific secure software development requirements.
And the impending European Cyber Resilience Act (CRA), which won’t take full effect for three years, sets cybersecurity requirements for hardware and software products placed on the EU market. “Manufacturers are now obliged to take security seriously throughout a product’s life cycle,” according to the CRA website.
But Mandiant doesn’t foresee the cat-and-mouse game subsiding. “If zero-day exploitation continues to outnumber n-day exploitation while n-day exploitation continues to occur more quickly following disclosure, we could expect the average TTE to fall further in the future,” its report predicted. “Additionally, because zero-day discovery is more difficult, there is room for growing numbers of exploited vulnerabilities over time as detection tools continue improving and become more widespread.”
Boote said there is still reason for hope because security fundamentals haven’t changed. While there will never be a silver bullet that guarantees 100% security, there are multiple ways to become a difficult target in a world where attackers are still looking for easy targets.
It comes down to being informed and being prepared. “The best way to deal with zero-days is to try to have many layers of protection for a defense-in-depth strategy coupled with an active threat-intelligence-gathering function that gathers the latest vulnerabilities and converts them into actionable guidance for IT and development,” he said.
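Boote’s suggestion — converting raw vulnerability intelligence into guidance someone can act on — amounts to a join between a threat feed and an asset inventory. A minimal sketch follows; the feed entries, hosts, and team names are all hypothetical, and a real implementation would consume a source such as CISA’s Known Exploited Vulnerabilities catalog rather than a hard-coded list.

```python
# Turn a vulnerability feed into per-team action items by matching
# affected products against an asset inventory. All data is made up.

feed = [
    {"cve": "CVE-X", "product": "ExampleVPN", "action": "apply vendor patch"},
    {"cve": "CVE-Y", "product": "OtherDB",    "action": "upgrade to 5.2"},
]

inventory = [
    {"host": "vpn-01", "product": "ExampleVPN", "team": "netops"},
    {"host": "web-07", "product": "ExampleCMS", "team": "webteam"},
]

def actionable_guidance(feed, inventory):
    """Emit one task per (vulnerability, affected host) pair."""
    tasks = []
    for entry in feed:
        for asset in inventory:
            if asset["product"] == entry["product"]:
                tasks.append(
                    f"[{asset['team']}] {asset['host']}: "
                    f"{entry['cve']} - {entry['action']}"
                )
    return tasks

for task in actionable_guidance(feed, inventory):
    print(task)  # [netops] vpn-01: CVE-X - apply vendor patch
```

The point of the transformation is that IT and development teams receive a task naming their own system, not a raw CVE list they must triage themselves.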