AI tools are becoming hackers’ not-so-little helpers

Taylor Armerding
Nerd For Tech
Apr 22, 2024

Global efforts to control the malicious use of artificial intelligence (AI) look like they’re late to the party.

Yes, those efforts are underway on multiple fronts.

  • Last month the U.N. General Assembly unanimously adopted a resolution encouraging countries to “safeguard human rights, protect personal data, and monitor AI for risks.”
  • The UK and the U.S. just signed a bilateral memorandum of understanding “to work seamlessly with each other, partnering on research, safety evaluations, and guidance for AI safety.”
  • A White House rule stemming from President Biden's executive order will require that AI tools used in the U.S. don't endanger the rights and safety of Americans through things like biased results.
  • SecurityWeek reported that more than 400 bills aimed at controlling different perceived threats of AI are on the table in U.S. state legislatures this year.

But the U.N. resolution isn’t a law — it’s more of a plea. The White House rule doesn’t take effect until Dec. 1. And most of the bills pending in statehouses have generated intense debate. “Every bill we run is going to end the world as we know it. That’s a common thread you hear when you run policies,” Colorado’s Democratic Senate Majority Leader Robert Rodriguez said.

Meanwhile, in the world of software, as numerous tech outlets have been reporting, cybercriminals are long out of the gate with predictable and ingenious uses of AI to assist them in everything from social engineering attacks to embedding malware into the large language models (LLMs) used to train AI tools.

Because, as has been true since the beginning of humanity, criminals pay little attention to resolutions, pleas, rules, or laws. They put their energy and skill into committing their crimes and trying to avoid getting caught.

According to a report from threat intelligence analysts and researchers at cybersecurity firm Recorded Future, criminal hackers are increasingly using AI to create deepfakes, clone websites, and alter source code to help malware evade detection by so-called YARA rules. All in blatant violation of various resolutions and rules, of course.

YARA, which stands for Yet Another Recursive Acronym, “is an open source pattern-matching Swiss army knife that helps in detecting and classifying malicious software,” according to Netenrich, which adds that “YARA rules are essentially a set of instructions that define the characteristics of a specific type of malware or threat.”
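To make that concrete, here is a minimal sketch of the idea in Python, using the open source yara-python bindings. The rule name and the string it hunts for are invented for this illustration; real detection rules are considerably more elaborate.

```python
# A toy illustration of YARA-style signature matching.
# Assumes the open source yara-python package (pip install yara-python);
# the rule name and marker string are made up for this example.
import yara

TOY_RULE = r"""
rule demo_toy_stealer
{
    strings:
        $marker = "harvest_browser_cookies"  // byte pattern the rule looks for
    condition:
        $marker
}
"""

rules = yara.compile(source=TOY_RULE)

# Pretend this is a scanned file that contains the tell-tale string.
sample = b"...\ndef harvest_browser_cookies(profile):\n    ..."
print([m.rule for m in rules.match(data=sample)])  # ['demo_toy_stealer']
```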

But according to the Recorded Future researchers, criminals are using AI to modify malware source code so that it can "fly under the radar" of those YARA rules.

“It’s been known for months now that it’s possible to zip an entire code repository, send it off to GPT, then GPT will unzip that repo and analyze the code,” a Recorded Future analyst told The Hacker News. “From there, you can prompt GPT into altering portions of that code and sending it back to you.”

Not that this is a brand-new trick. Boris Cipot, senior security engineer with the Synopsys Software Integrity Group, said it is “a well-known principle of stealth, or so-called polymorphic malware, that usually tries to avoid signature-based anti-malware systems.”

Velocity matters

But as is the case with everything it does, AI can do it much faster than people or earlier technologies. “It uses the signatures described in the YARA ruleset to build malware with a signature that does not match any of the listed signatures,” he said. “AI can do this much faster than a human.”
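The catch, in other words, is that the detection only works as long as the pattern named in the rule still appears in the code being scanned. A rough sketch, again with an invented marker string standing in for real malware:

```python
# Rough sketch of why static signatures are brittle.
# The "signature" and both samples are invented for illustration.
SIGNATURE = b"harvest_browser_cookies"

original_variant = b"def harvest_browser_cookies(profile): ..."
# A trivially rewritten variant: same implied behavior, different bytes.
renamed_variant = b"def collect_session_data(profile): ..."

def signature_scan(sample: bytes) -> bool:
    """Return True if the known byte pattern appears in the sample."""
    return SIGNATURE in sample

print(signature_scan(original_variant))  # True  -- caught by the rule
print(signature_scan(renamed_variant))   # False -- same malware, new "signature"
```

Renaming one function is trivial. Doing it consistently across an entire codebase, over and over, until nothing in a ruleset matches is the part attackers are now handing off to AI.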

None of this should be a surprise. As has been repeated endlessly in discussions of AI’s benefits and risks over the past 18 months, the technology isn’t intrinsically good or bad. It’s a tool that can be, and is being, used for both good and evil, depending on user intent.

It’s also powerful. And it’s evolving faster than those seeking to control it, or even evaluate it, can keep up.

Aidan Gomez, founder and chief executive of AI startup Cohere, told the Financial Times that traditional evaluation criteria commonly used to gauge performance, accuracy, and safety of technology tools can’t keep up with AI. “A public [evaluation] benchmark has a lifespan. It’s useful until people have optimized [their models] to it or gamed it,” he said. “That used to take a couple of years; now it’s a couple of months.”

Gomez said the billions of dollars being invested in the technology by multiple tech giants means that new AI systems routinely emerge that can “completely ace” existing benchmarks. “As models get better, the capabilities make these evaluations obsolete,” he said.

All of which is why if the good guys — governments and all the legitimate organizations threatened by the malicious use of AI — are going to have any hope of leveling the playing field, it’s going to take more than resolutions and rules. It’s going to take, yes, more AI. One of the slogans in the software security industry is to “think like an attacker.” That means using the same (or better) tools as an attacker.

AI vs. AI

AI tools are needed because humans simply aren’t good enough at spotting fraud, which is becoming far more sophisticated, far more quickly. Cipot noted that even in the recent past, most people could tell when an image had been Photoshopped. But more recently, even a trained forensic investigator had trouble spotting telltale signs in images of an AI-generated influencer model that had gained thousands of followers almost instantly.

“He had problems identifying it, but the clue was that in one image the AI-generated model had her ears pierced and in another from the same photoshoot she didn’t,” he said. “A pierced ear — tell me how many Instagram users look at the pierced ears of the models of influencers?”

The point, he said, is that “if you want to catch AI you will need AI,” especially with the improvements that come with every iteration of the technology.

The same is true in the case of criminals embedding evasive malware in LLMs.

Jamie Boote, senior consultant with the Synopsys Software Integrity Group, said the use of AI to generate malware that won’t be detected is more evolution than revolution. “Antivirus (AV) software has relied on a database of signatures to find malicious software for decades, and as soon as attackers figured out that static signatures were in use, they found ways to obfuscate their malware, or even create malware that rebuilt itself from scratch to evade signature-based detection.”

That, he said, has prompted the makers of AV software to supplement signature-based methods with behavioral ones. “While AI may help change the behavior of the malware to evade those detections, it’s likely that this is just another iterative step in the cat-and-mouse game that’s been played for decades,” he said.
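As a toy illustration of that difference, the sketch below flags a program by what it is observed doing rather than by the bytes it contains. The action names, weights, and alert threshold are invented for this example; real behavioral engines are far more sophisticated.

```python
# Toy contrast to signature matching: flag a process by what it does,
# not by the bytes it contains. Action names and weights are invented.
SUSPICIOUS_WEIGHTS = {
    "reads_browser_credential_store": 3,
    "encrypts_many_user_files": 4,
    "connects_to_unknown_host": 2,
    "disables_security_service": 4,
}
ALERT_THRESHOLD = 5  # arbitrary cutoff for this illustration

def behavioral_score(observed_actions: list[str]) -> int:
    """Sum the weights of any suspicious actions seen at run time."""
    return sum(SUSPICIOUS_WEIGHTS.get(action, 0) for action in observed_actions)

trace = ["opens_config_file", "reads_browser_credential_store",
         "connects_to_unknown_host", "disables_security_service"]
score = behavioral_score(trace)
print(score, "-> alert" if score >= ALERT_THRESHOLD else "-> no alert")
```

Renaming functions or reshuffling source code doesn’t change what a program ultimately does, which is why behavior-based checks tend to survive the kind of rewriting that defeats static signatures, though, as Boote notes, AI can be aimed at altering behavior too.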

Cipot agreed. If criminals are uploading code repositories for their AI tool to analyze, find defects, and then write malware that exploits those defects, “then AI is or should also be used by AppSec [application security] tools to provide solutions to close those exploitation risks,” he said.

In short, there is reason for hope. Boote said the history of technology is that good guys can evolve along with bad guys. “Today, mobile devices are far more secure than they were previously because the industry has had time to get a set of industry best practices in place that accounted for as many edge cases as is practical,” he said.

So, as AI and machine learning matures, “expect to see best practices emerge about how to safely use it, what risky applications are, and how to adequately secure them against negative outcomes.”


I’m a security advocate at the Synopsys Software Integrity Group. I write mainly about software security, data security and privacy.