With AI hacks looming, don’t ignore security basics

Taylor Armerding
Published in Nerd For Tech
Mar 18, 2024

It’s no surprise — or shouldn’t be — that artificial intelligence (AI) is being used for devious, misleading, and criminal purposes. Every tool ever created or discovered, from fire to the wheel, the hammer, guns, aircraft — the list could go on and on — has been used for both good and evil.

But there is a difference in the effect a tool can have depending on its reach and capabilities. With a hammer, you can pound only one nail, or person, at a time. With a machine gun, a single person can take out a herd of animals or dozens to hundreds of other people in minutes. With AI, it’s possible to attack thousands, millions, or even billions of targets worldwide in seconds.

Boris Cipot, senior security engineer with the Synopsys Software Integrity Group, noted that “the internet, with its idea of shared information, was created for good but is misused for bad. Cryptography, made for protecting private information, is misused by criminals and terrorists to send indecipherable messages. Dynamite, which should make mining easier, in the end was misused in such a way that Alfred Nobel created a peace prize to feel better about his invention.”

All of which is why both utopian and dystopian forecasts about AI should be taken seriously. It’s also why those whose hope and goal is to make AI an asset to humanity need to focus on preventing or at least inhibiting those whose goal is the opposite.

Because those bent on the malicious use of AI are off and running, fast.

It’s not just college kids using chatbots to write research papers or those who use AI to create deepfake images or videos, although those are malicious enough. It’s that criminal hackers are enlisting AI tools to increase the threat and damage from their attacks exponentially.

Late last month, a team of researchers published a paper titled “LLM Agents Can Autonomously Hack Websites” on arXiv, the preprint server operated by Cornell University.

Large language models (LLMs) are a type of AI algorithm that uses massive datasets to understand, summarize, generate, and predict new content. And according to the authors of the paper, “LLM agents can autonomously hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback. Importantly, the agent does not need to know the vulnerability [that allows an exploit] beforehand.”
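
To make that concrete, here is a minimal, hypothetical sketch of the kind of flaw such an agent hunts for. It is not taken from the paper, and the Flask route, database file, and table names are invented for illustration: the query is assembled by pasting user input into the SQL string, which opens the door to both classic and blind injection.

    # Hypothetical injectable endpoint, for illustration only (not from the paper).
    # Because user input is spliced into the SQL text, a request like
    #   /products?name=' OR '1'='1
    # returns every row, and boolean probes such as
    #   /products?name=' OR substr((SELECT name FROM sqlite_master LIMIT 1),1,1)='p' --
    # leak the schema one character at a time ("blind database schema extraction").
    import sqlite3
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/products")
    def products():
        name = request.args.get("name", "")
        conn = sqlite3.connect("shop.db")  # assumes a SQLite file with a products table
        # VULNERABLE: user input concatenated directly into the query string.
        rows = conn.execute(
            "SELECT id, name, price FROM products WHERE name = '" + name + "'"
        ).fetchall()
        conn.close()
        return jsonify(rows)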

Existential questions

The researchers’ somewhat low-key conclusion is that their findings “raise questions about the widespread deployment of LLMs.”

Indeed, these are potentially existential questions for an online world that now runs not only our communications and entertainment but also our critical infrastructure: traffic control, electricity, water distribution, and everything else.

A comment on cryptographer and self-described “public interest technologist” Bruce Schneier’s blog post about the paper was much more direct.

The commenter, identified as “Bob,” declared that “it is inevitable that AI will be used to carry out attacks that change by the nanosecond, and that’s going to be happening sooner than later.”

“We currently find ourselves in the early stages of a brand-new arms race,” he added. “The genie’s not going back in the bottle. It’s a matter of time until there’s self-replicating rogue AI distributed across various pwned [compromised] servers, PCs, routers, and refrigerators.”

The authors of the paper present some statistics on that, documenting that LLMs like OpenAI’s ChatGPT are rapidly improving in hacking capabilities. “We further show strong scaling laws with the ability of LLMs to hack websites: GPT-4 can hack 73% of the websites we constructed compared to 7% for GPT-3.5, and 0% for all open source models,” they wrote, adding that “the cost of these LLM agent hacks is also likely substantially lower than the cost of a cybersecurity analyst.”

In other words, as “Bob” predicts, LLMs are getting much better very quickly. Cipot cited the “exponential growth paradigm,” noting that the Human Genome Project had decoded only 1% of the genetic code more than seven years into what was supposed to be a 15-year project. But because the amount of sequenced data doubled every year, the project decoded the remaining 99% in the next 7.5 years.
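
The arithmetic behind that comparison is simple: starting from 1% coverage and doubling every year, the project reaches full coverage once

    1% × 2^n ≥ 100%,  i.e.  n ≥ log₂(100) ≈ 6.6

so roughly seven more doublings, which lines up with the remaining 7.5 years.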

“The same is happening with AI today,” he said. “On the maturity level it is still at the stage of a hormonal teenager — capable, with a lot of muscle but still not experienced enough to make mature decisions. But it has already developed massively. Even if many see today’s AI applications as not much more than an advanced search engine, they will further develop, and not in small steps.”

AI future is now

Cipot also said the building blocks are already in place for an AI tool to “issue attacks based on its own testing of the environment and applying the knowledge it has from the internet. It can use the normal trial and error, and in the sense of thinking of a new attack vector it will need less time to research than a human.”

To skeptics, he noted that “before 2003 it was unimaginable that a virus would be able to spread around the world and infect critical IT infrastructure. But then in that year the SQL Slammer worm came and spread around the world in minutes. It impacted datacenters and internet service providers, and due to this it impacted internet services, websites, and ATMs.”

Still, other experts don’t think the end of the world as we know it is right around the corner. They contend that AI will need human supervision for a long time.

A comment on a Boston Globe story about AI being used to write computer code noted that “so much of software engineering is ALREADY highly dependent on automation of tasks that used to be done by humans. But with all that automation comes much more complex software that is harder to understand and debug, so the experience of skilled engineers becomes even more important.”

Jamie Boote, senior consultant with the Synopsys Software Integrity Group, agrees in part, noting that “automated attacks have been around forever. The lowest form of hacker is the ‘Script Kiddie’ — kids with scanning and exploit scripts. They also don’t need to know whether a system is vulnerable before adding it to the list of sites to probe — they just need to know it exists.”

AI is changing the game, he said, but it’s “more of an iterative step than a huge revolution. In my experience with ChatGPT 4, it’s like a recent college grad — it’s read the textbooks, it’s familiar with the topics, and it’s played around with basic exercises, but it has a way to go before it can go toe-to-toe even with our less experienced security consultants who have a year or two of real-world experience.”

Blazing-fast college grads

It’s just that those digital “recent college grads” can operate faster than humans can even think. So Boote has some recommendations, starting with taking care of security basics. “If something can get probed, it will get probed with a more sophisticated automated agent,” he said. “The bar has risen, so if your software has vulnerabilities that a recent college computer science grad could ID, ChatGPT-enabled attacks will probably be able to find them. So get those handled.”
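
As a companion to the injectable endpoint sketched earlier, this is what one of those basics looks like: bind the user input as a query parameter so the database driver treats it strictly as data. Again a hypothetical illustration, not code from Boote; the route and table names are the same invented ones as before.

    # The corresponding "security basic": a parameterized query.
    # The ? placeholder binds the input as data, so it can never be interpreted
    # as SQL syntax, whether the probe comes from a script kiddie or an LLM agent.
    import sqlite3
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/products")
    def products():
        name = request.args.get("name", "")
        conn = sqlite3.connect("shop.db")  # same hypothetical SQLite file as before
        rows = conn.execute(
            "SELECT id, name, price FROM products WHERE name = ?", (name,)
        ).fetchall()
        conn.close()
        return jsonify(rows)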

Boote goes into more detail in a recent post on his Secure Humans blog, where he wrote that most organizations “don’t have a pet AI model that you’ve built from scratch and trained yourself. This means that all your AI needs will be met by ‘Somebody Else’s Software’ or hosted on ‘Somebody Else’s Computer.’”

There are efforts among the tech giants creating all that software to curb the malicious use of AI. Hacker News recently reported on Microsoft and OpenAI shutting down the assets and accounts of nation-state threat actors that were using their AI services for malicious purposes.

But given how many other ways there are to access AI tools, that sounds a bit like trying to stomp out a forest fire with your boots.

That’s Cipot’s point when he notes that “OpenAI and Microsoft are not the only ones that have this technology. Next to them there is Google with TensorFlow for example. But even if all the commercial companies close the doors to so-called threat actors — what about open source software AI projects? So no, this is not a good way to deal with this.”

But Boote says even in a world of exploding AI capability, doing security basics will help a lot more than trying to find some cutting-edge new defense.

He invokes the iceberg image to warn about focusing too much on AI risks. “It’s misdirection to focus on the 10% shiny stuff — the new AI risk — if you haven’t secured the 90% of the total risk below the surface,” he said. “That below-the-water stuff is traditional software development life cycle security that if you’re not doing already, you’d better get your ducks in a row.”

“AI presents new risk, new attack vectors, new attack surfaces, and new threats to establishing trust, but all of the old stuff is there too,” he said.

I’m a security advocate at the Synopsys Software Integrity Group. I write mainly about software security, data security and privacy.