Published in The Startup

Patching Critical Infrastructure Is Hard. U.S. Leaders Should Worry About That


Keep your software up to date!

As any cybersecurity expert will tell you, that’s more than just a casual reminder. It’s an urgent exhortation, frequently delivered with a true-life scare story.

If you don’t install available patches and updates to fix known vulnerabilities in your software, you could become the next Equifax, the credit reporting giant breached in 2017 because it failed to install a patch in the web application framework Apache Struts — a patch that had been available for months. The breach compromised Social Security numbers and other personal data of 147 million customers.

So with that and other similar (if less notorious) examples regularly in the headlines, how is it possible that patches available since 2019 for two collections of vulnerabilities have barely been applied?

Especially when those vulnerabilities, labeled URGENT/11 and CDPwn, could put an estimated 2 billion operational technology (OT) and Internet of Things (IoT) devices at risk.

Next to no patching

The cybersecurity firm Armis, which has been tracking those vulnerabilities since July 2019, reported last month that 97% of OT devices vulnerable to URGENT/11 and 80% of those affected by CDPwn have not been patched.

That is despite recent warnings and alerts, issued between last July and October, from federal agencies including the National Security Agency (NSA), the Cybersecurity and Infrastructure Security Agency (CISA), the FBI, and the Department of Health and Human Services (HHS).

Ben Seri, vice president of research at Armis, wrote in a company blog post that the devices at risk “are not simply used in everyday businesses but are core to our healthcare, manufacturing, and energy industries.”

The warnings said the kinds of attacks enabled by the vulnerabilities are multiple and varied, as are the targets. They could include not simply exfiltrating critical information but also eavesdropping on voice calls and video feeds, and man-in-the-middle attacks.

They could be used to infect healthcare systems with ransomware, specifically the Ryuk, TrickBot, and Conti variants.

And they could be deployed against industrial control systems (ICS) and programmable logic controllers (PLCs), which are typically used in production and manufacturing environments to monitor and control physical devices like motors, valves, and pumps.

“Using one of the critical RCE (remote-code-execution) vulnerabilities from URGENT/11, we were able to exploit two of the most common PLCs — the ControlLogix Ethernet module 1756-EN2TR from Rockwell Automation, and the Modicon M580 from Schneider Electric,” Seri wrote, adding that with the access researchers gained, “an attacker can alter code on the PLC and change incoming or outgoing messages — sending false or misleading data to the engineering workstation.”

The possible attacks could be as significant as the Stuxnet worm, Seri wrote, except they would be easier to execute since an attacker wouldn’t need physical access to a system that is connected to the internet.

Stuxnet, reportedly delivered via a USB drive, was used in 2010 to destroy nearly 1,000 uranium enrichment centrifuges in Iran’s nuclear program. It targeted PLCs that automated electromechanical processes in those centrifuges for separating nuclear materials.

Today, “a bad actor would not need a USB, but now can leverage CDPwn to infiltrate the network, then use URGENT/11 to take over a device,” Seri wrote. “Having compromised the device, an attacker can cause damage while remaining hidden from the monitoring system.”

Seri added that while Stuxnet used zero-day vulnerabilities, that wouldn’t be necessary now either. “The NSA Top 25 list of vulnerabilities consists entirely of attacks against known vulnerabilities that may not have been patched or mitigated,” he wrote.

Patching made uneasy

So, given the catastrophic potential, why (again) is almost nobody patching these vulnerabilities?

There is actually a very good reason. Patching OT systems is nothing like tapping an icon on your smartphone to update an app, which might take a minute or less. It’s not even like doing a major operating system update on your computer — the kind that comes with warnings to back everything up first in case something goes wrong. That might take a few hours, and during that time, you wouldn’t be using it. And even if your system crashed, it wouldn’t shut off the lights or make the water stop running.

With OT systems, it’s both complicated and risky. “It’s harder when you have something that you’re trying to keep running,” said Jonathan Knudsen, senior security strategist with the Synopsys Software Integrity Group.

“Things that were working just fine might unexpectedly break when you update some piece of software. If you’re going to do it right, you have to have a staging area that is pretty much a copy of your production environment, and you first apply patches there and then do as much testing as possible to make sure everything still works. Only then would you consider updating your production environment.”
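The stage-first discipline Knudsen describes can be reduced to a simple gate: a patch set is promoted to production only after staging runs the exact patched versions and the post-patch tests pass there. The sketch below is a minimal, hypothetical illustration of that gate; the package names and inventory format are invented for the example, not taken from any real OT product.

```python
# Hypothetical sketch of "patch staging first, then promote" (assumed
# inventory format: {package_name: version}). Not a real OT tool.

def ready_to_promote(patch_set, staging_inventory, tests_passed):
    """Allow promotion to production only if staging already runs every
    patched version AND the post-patch test suite passed in staging."""
    if not tests_passed:
        return False
    return all(
        staging_inventory.get(pkg) == version
        for pkg, version in patch_set.items()
    )

# Example: staging has been patched and tested, so promotion is allowed.
patch_set = {"net-stack": "6.9.1", "cdp-daemon": "2.4.0"}  # hypothetical names
staging = {"net-stack": "6.9.1", "cdp-daemon": "2.4.0"}
print(ready_to_promote(patch_set, staging, tests_passed=True))   # True
print(ready_to_promote(patch_set, staging, tests_passed=False))  # False
```

The point of the sketch is the ordering, not the code: production is never touched until a production-like copy has absorbed the patch without breaking.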

It can be even riskier with ICSs, which are crucial to the operation of critical infrastructure like water and sewer systems and the power grid. And in an ICS environment like a factory floor, “failures can have a safety aspect in addition to all the usual headaches of downtime and lost revenue,” Knudsen said.

Joe Weiss, a control systems cybersecurity expert at Applied Control Solutions, said most information technology security experts don’t understand control systems. “There are so many things that prevent you from patching in any reasonable time,” he said.

For starters, the major companies that make control systems create their own version of an operating system. “It’s not Windows, it’s ‘Honeywell Windows’ or ‘Siemens Windows’,” he said. “It’s tweaked specifically for their products. You can do that when you’re that big. But if you went to the web and tried to install an update or a patch, it would shut the system down. It’s not that it might happen. It has happened, many times,” he said.

Second, many control systems are running critical operations 24/7 and don’t shut down for months or years. Nuclear systems can have a 24- or 36-month operating cycle, Weiss said. “They’re not going to shut it down for an update unless it has to go down for some other reason,” he said. “The standard patch just doesn’t apply here.”

Third, information technology (IT) security teams and engineering teams have different priorities. “Engineering teams are measured on reliability and safety, while the security team is measured by how many vulnerabilities they find — it’s a Mars/Venus thing,” Weiss said.

“So when somebody on the IT side says there’s a vulnerability that could let an attacker take control of the network, the people in engineering want to know what that means. Does it affect a critical pump, valve, or relay? And most of the time, IT doesn’t know. And if they can’t explain it and it doesn’t affect a valve or a relay, why are you bothering me?”

No known attacks — yet

It’s also difficult to quantify the risk. The October NSA advisory warned that one of the CDPwn vulnerabilities was among those “known to be recently leveraged, or scanned-for, by Chinese state-sponsored cyber actors to enable successful hacking operations against a multitude of victim networks.”

But so far, there are no reported cases of successful attacks exploiting either group of vulnerabilities.

That doesn’t mean there is no risk, of course. Knudsen said it’s natural for the operators of systems to think “if it ain’t broke, don’t fix it.” But in software, the reality is that “if it ain’t broke, it will be soon,” he said.

And one of the trickiest things about vulnerabilities like these is that it’s not always possible to tell whether a failure or problem in the operation of a system was due to human error, an equipment failure, or a cyber attack.

“We have effectively no cyber forensics below the IP (internet protocol) level,” Weiss said. “If a motor goes awry, we can tell what happened but can’t say if [a cyber attack] played a role.”

Much of this is because, as experts have been saying for the past decade or more, these systems, designed and built to operate safely for decades, were not originally designed to be connected to the internet. So cybersecurity wasn’t even an afterthought — it wasn’t a thought at all.

Armis’s Seri notes in his post that many of the devices at risk, whether they are OT, medical or part of the “general” IoT, “lack any means of installing cybersecurity software or agents.”

That, he wrote, “means you need to have agentless protection capable of discovering every device in the environment and detecting vulnerable code on devices. You should also be able to map connections from devices throughout your network and detect anomalies in behavior.”
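Seri’s last two requirements — mapping device connections and detecting behavioral anomalies — can be illustrated with a toy baseline check: record which peers each device normally talks to, then flag any flow to a peer outside that baseline. Everything here (device names, the flow records, the data format) is hypothetical, chosen only to show the idea; real agentless monitoring works from passively captured network traffic.

```python
from collections import defaultdict

# Toy illustration of agentless anomaly detection: learn each device's
# normal set of communication peers, then flag flows to unseen peers.
# All device names and flows below are invented for the example.

def build_baseline(flows):
    """flows: iterable of (device, peer) pairs observed during normal operation."""
    baseline = defaultdict(set)
    for device, peer in flows:
        baseline[device].add(peer)
    return baseline

def find_anomalies(baseline, new_flows):
    """Return flows in which a device contacts a peer absent from its baseline."""
    return [(d, p) for d, p in new_flows if p not in baseline.get(d, set())]

normal = [("plc-01", "hmi-01"), ("plc-01", "historian"), ("camera-03", "nvr")]
baseline = build_baseline(normal)

# A PLC suddenly talking to a host it has never contacted would be flagged.
print(find_anomalies(baseline, [("plc-01", "historian"), ("plc-01", "ext-host")]))
# [('plc-01', 'ext-host')]
```

A deployed system would of course need far more — passive traffic capture, device fingerprinting, and tolerance for legitimate change — but the core of behavioral anomaly detection is this comparison of observed flows against a learned baseline.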

Knudsen said the URGENT/11 and CDPwn vulnerabilities show the need for security to become just as important as reliability and safety in OT systems, and also highlight the complexity and difficulty of doing so.

“Including security in every phase of software development is one way to make better applications, which hopefully will mean fewer patches,” he said. “But updating or patching is inevitable, so applications should include mechanisms to make updating as easy as it can be.”

“Still, that won’t solve the problem of patches that unexpectedly cause failure somewhere else in your environment,” he said.

Taylor Armerding

I’m a security advocate at the Synopsys Software Integrity Group. I write mainly about software security, data security and privacy.