Want to protect yourself from AI attacks? The feds have lots of advice on that

Taylor Armerding
Published in Nerd For Tech
May 28, 2024

All cybercrime is bad, but not all cyberattacks are equal. With apologies to George Orwell, some are more equal than others.

Yes, if a person’s identity is stolen, if a bank account is emptied, if the intellectual property of a business is stolen, those can be catastrophic events.

But not nearly as catastrophic as an attack that takes down the power grid, poisons an entire city’s water supply, makes traffic control systems go haywire, makes it impossible for healthcare providers to function, or any of dozens of other possibilities. Those are potential mass-casualty events because they damage or undermine what we all rely on: critical infrastructure (CI).

So it makes sense that federal agencies responding to President Joe Biden’s “Executive Order [EO] on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence [AI]” would focus on protecting the nation’s CI from the risks of the malicious use of AI. There are 16 CI sectors, including financial services, transportation systems, water and wastewater, dams, healthcare and public health, food and agriculture, information technology, and energy.

And a few weeks ago, marking six months since the EO was issued, the Department of Homeland Security (DHS) rolled out multiple press releases on “new resources to address threats posed by AI: (1) guidelines to mitigate AI risks to critical infrastructure and (2) a report on AI misuse in the development and production of chemical, biological, radiological, and nuclear (CBRN) threats.”

There is no doubt about the need for better security from AI-assisted attacks.

  • In a session titled “State of the Hack” at the recent RSA Conference in San Francisco, current NSA Cybersecurity Director David Luber and former director Rob Joyce warned that hostile nation-states are leveraging AI to improve the sophistication of their phishing attacks, and that their primary goal is disrupting the operation of U.S. critical infrastructure rather than stealing data. In a post on the ReversingLabs blog, Paul Roberts wrote that Joyce said those attackers gain entry to operational technology (OT) systems and then go quiet to remain undetected so they can “disrupt business processes at a time of their choosing.”
  • On May 1, the NSA and six other federal agencies, along with others in the U.K. and Canada, issued an urgent warning about threats from pro-Russian hacktivists against critical infrastructure OT systems.
  • This past week the Environmental Protection Agency (EPA) issued an enforcement alert after inspections it has conducted since September 2023 found that more than 70% of water systems don’t fully comply with the Safe Drinking Water Act. It reported that some systems have critical cyber vulnerabilities, including failures to implement security basics, such as leaving default passwords in place and relying on authentication systems that can be easily compromised. “The EPA is issuing this alert because threats to, and attacks on, the nation’s water system have increased in frequency and severity to a point where additional action is critical,” the press release stated.

There are more, but you get the idea. They all raise the obvious question: Will the current AI initiative lower risks like these to our critical infrastructure?

As the saying goes, it remains to be seen. In some ways, it would seem so. DHS Secretary Alejandro N. Mayorkas, as part of the announcement, declared that “AI can present transformative solutions for U.S. critical infrastructure, and it also carries the risk of making those systems vulnerable in new ways to critical failures, physical attacks, and cyberattacks. Our department is taking steps to identify and mitigate those threats.”

There has also been some encouraging response in the private sector. The Financial Times reported this past week that ahead of the opening of a global AI summit in Seoul, the U.K. and South Korean governments announced that top leaders in the AI industry had agreed to a new round of voluntary commitments on AI safety.

The companies, which include Amazon, Google, Meta, Microsoft, OpenAI, Elon Musk’s xAI, and Chinese developer Zhipu AI, agreed to publish frameworks outlining how they will measure the risks of their “frontier” AI models. They also committed “not to develop or deploy a model at all” if severe risks could not be mitigated.

A wave of bureaucracy

But then, there is also evidence that Steven Levy, editor at large at Wired magazine, was onto something when he criticized the EO at the time it was issued six months ago. “How does the president intend to encourage the benefits of AI while taming its dark side?” he wrote. “By unleashing a human wave of bureaucracy. The document wantonly calls for the creation of new committees, working groups, boards, and task forces. There’s also a consistent call to add AI oversight to the tasks of current civil servants and political appointees.”

Indeed, the accomplishments Mayorkas lists for his department since the issuance of the “landmark” EO include that his agency had “established a new AI Corps, developed AI pilot programs across the department, unveiled an AI roadmap detailing DHS’s current use of AI and its plans for the future, and much more.”

That “much more” includes creating an “Artificial Intelligence Safety and Security Board to advise DHS, the critical infrastructure community, private sector stakeholders, and the broader public on the safe and secure development and deployment of AI in our nation’s critical infrastructure.”

According to Mayorkas, “This diverse range of leaders on the board will provide recommendations to help critical infrastructure stakeholders more responsibly leverage AI and protect against its dangers.”

A wave of bureaucracy indeed. And it’s worth noting that the statements feature words like “advise” and “recommend,” not words like “mandate” or “require.”

That is perhaps one reason why the announcement of the various AI giants agreeing to “voluntary commitments on AI safety” was greeted with some skepticism. Among the comments on the Financial Times story was this from a reader identifying as “rcduke”: “Companies will do whatever it takes to make numbers go up, and they’ll say whatever needs to be said to pacify governments. But unless there is actual legislation signed into law, there’s nothing that will make these companies do anything that benefits the public.”

Rules with teeth

That doesn’t mean advice and/or recommendations are useless, or that a presidential EO has no teeth. While the president can’t legislate on his own, he can order federal agencies to develop rules that might as well be laws.

One example is Biden’s May 2021 EO on cybersecurity, which calls for an eventual ban on federal agencies buying any software product that doesn’t come with a Software Bill of Materials (SBOM), a list of every software component in that product. That means any company that wants to sell to the feds has to comply. It’s an indirect mandate.
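An SBOM, whatever format it takes, is just structured data that a buyer can read and act on. As a rough illustration, here is a minimal sketch in Python that lists the components a product ships with, assuming a CycloneDX-style JSON file; the file name and field layout are assumptions for the example, not anything the EO prescribes.

```python
import json

# Load a CycloneDX-style SBOM. The file name and field layout here are
# illustrative assumptions; real SBOMs follow the CycloneDX or SPDX specs.
with open("product-sbom.json") as f:
    sbom = json.load(f)

# Each entry under "components" identifies one software component in the product.
for component in sbom.get("components", []):
    print(component.get("name", "unknown"), component.get("version", "unknown"))
```

The point of even a toy like that: if a vendor can’t hand over the list, the buyer has no way to know what’s inside the product, or whether any of it is vulnerable.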

The DHS guidelines on AI, which were developed by its Cybersecurity and Infrastructure Security Agency (CISA), break down the AI risks to CI into three categories:

  • Attacks using AI to enhance, plan, or scale physical attacks on, or cyber compromises of, critical infrastructure
  • Attacks targeting AI systems supporting critical infrastructure
  • Failures in AI design and implementation, which include deficiencies or inadequacies in the planning, structure, implementation, or execution of an AI tool or system, leading to malfunctions or other unintended consequences in critical infrastructure operations

The department then recommends a four-part mitigation strategy that, according to its press release, builds on the National Institute of Standards and Technology’s AI Risk Management Framework. Those mitigations include:

  • Govern: Establish an organizational culture of AI risk management. Prioritize and take ownership of safety and security outcomes, embrace radical transparency, and build organizational structures that make security a top business priority.
  • Map: Understand your individual AI use context and risk profile, which will help evaluate and mitigate AI risks.
  • Measure: Develop systems to assess, analyze, and track AI risks, including repeatable methods and metrics for measuring and monitoring them.
  • Manage: Implement and maintain identified risk management controls to maximize the benefits of AI systems while decreasing the likelihood of harmful safety and security impacts.

Those categories are broken down in much more detail, including 20 “general mitigation strategies” covering data inventory, data backup, endpoint security, incident response, employee vetting, information sharing, internal reviews, and vulnerability management.
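To make that framework a bit more concrete, here is a minimal sketch of what tracking a single AI risk against those four functions might look like; the field names and the sample entry are assumptions for illustration, not structure that DHS or NIST prescribes.

```python
from dataclasses import dataclass, field

# Illustrative risk-register entry loosely keyed to the NIST AI RMF functions.
# Field names and sample values are assumptions, not a DHS-mandated schema.
@dataclass
class AIRisk:
    description: str
    category: str                 # one of the three DHS risk categories
    owner: str                    # Govern: who owns the safety and security outcome
    context: str                  # Map: where and how the AI system is used
    metric: str                   # Measure: a repeatable way to track the risk
    controls: list = field(default_factory=list)  # Manage: mitigations in place

register = [
    AIRisk(
        description="AI-assisted phishing against OT operators",
        category="Attacks using AI",
        owner="CISO",
        context="Remote access to industrial control systems",
        metric="Phishing simulation failure rate, reviewed quarterly",
        controls=["phishing-resistant MFA", "incident response playbook"],
    ),
]

for risk in register:
    print(f"{risk.category}: {risk.description} (owner: {risk.owner})")
```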

Do the basics

Most of them sound a lot like the security basics that experts have been preaching for decades. At the RSA session, Roberts summarized the exhortations from Joyce and Luber as “…invest in stronger identity management and authentication technologies to shore up the security of employee accounts. Prepare security teams for increasingly sophisticated phishing attacks that leverage the use of artificial intelligence. Finally, dig deep into log files to look for patterns of activity that can’t be accounted for, including access attempts from low-value edge devices like residential routers, which are being compromised by state actors and used as part of large botnets that support targeted attacks.”
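On that last point about log files, here is a rough sketch of the idea, with an assumed log format and assumed address ranges; a real detection pipeline would work from identity-provider, VPN, and edge-device telemetry rather than a hard-coded list.

```python
import ipaddress

# Address ranges the organization expects sign-ins from (assumed for the example).
expected_networks = [
    ipaddress.ip_network("203.0.113.0/24"),   # example: corporate VPN egress
    ipaddress.ip_network("198.51.100.0/24"),  # example: branch offices
]

# Simplified auth-log records as (username, source IP) pairs. A real pipeline
# would parse these from authentication or VPN logs.
auth_events = [
    ("alice", "203.0.113.25"),
    ("bob", "192.0.2.77"),  # outside the expected ranges: worth a closer look
]

for user, src in auth_events:
    addr = ipaddress.ip_address(src)
    if not any(addr in net for net in expected_networks):
        print(f"Review: sign-in for {user} from unexpected address {src}")
```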

Those guidelines are good, but they illustrate that mitigating the risks of AI doesn’t require reinventing the wheel. And their effectiveness will depend on how well they are followed. The reality, at least so far, is that guidelines are recommendations, not mandates. But if organizations — especially those operating critical infrastructure — want to avoid AI-enabled disaster, they should treat them as mandates.


I’m a security advocate at the Synopsys Software Integrity Group. I write mainly about software security, data security and privacy.