Desperately seeking clarity in the 2024 cybersecurity crystal ball

Taylor Armerding
Nerd For Tech
Published in
8 min readDec 18, 2023

Predictions are tricky — noted futurist Yogi Berra said so. But that doesn’t mean they’re useless. You can’t really plan anything unless you make some informed guesses about what’s going to be happening weeks, months, or even years from now. One of our more lasting clichés — because it’s true — is that “failing to plan means planning to fail.”

In the world of technology — specifically cybersecurity — one enabler of success is forecasting. Those who correctly guess what’s around the next corner get a jump on the competition and on malicious hackers.

Some of it is not that tricky. For all of human history, technological advances have come with benefits and risks. So, I’ll go first: I guarantee that any significant advance will be exploited for both good and evil. You’re welcome.

Indeed, the most recent shiny new things — generative artificial intelligence (AI) and large language model (LLM) chatbots — which a year later are not so new or so shiny, are just the latest example. When the first iteration of ChatGPT went live at the end of November 2022, it generated a predictable flood of predictions, with one camp forecasting utopia and the other dystopia. Of course, they were both partially right, which means we’d better learn to use AI if only to give ourselves a chance to prevent it from using us.

Tricky or not, we should admire those willing to make forecasts. It takes courage to risk being wrong, but the good thing is that being either wrong or right can help us learn.

So, with 2024 looming, here are some courageous forecasters, making their best guesses about what the next 12 months are likely to hold. Thanks to those accepting the challenge.

AI good, AI bad…

Christopher Hadnagy, CEO and chief human hacker, Social-Engineer

We are seeing an increase in AI and LLMs being used in phishing, translating the phish to languages not previously focused on, like Japanese. The capability of AI seems to grow exponentially every couple of weeks. Although this is exciting for the world, it’s also enhancing things for threat actors. We found one site that guarantees an increase in success of your phishing campaigns, or you get your money back.

We’re also seeing AI and LLMs being used to change voices to sound more “American” and therefore more trustworthy for more effective vishing. Actors have figured out that unfamiliar accents are considered a potential threat, so an accent that sounds like the victim will make them trust more quickly.

It will get much harder to detect these, but I also predict we will see more tech coming from the protection side using AI to help detect and mitigate these attacks. We are already seeing research projects in the AI and defense space.

Jason Schmitt, general manager, Synopsys Software Integrity Group

As AI-generated coding sees rapid and widespread adoption, it will quickly be seen as a fourth source of software and become a central component of software supply chain management. The emerging broader supply chain risk management discipline with the help of software composition analysis tools will begin to focus on all four sources of software — custom code, open source, third party, and AI-generated code.

Eva Velasquez, president and CEO, Identity Theft Resource Center

The risk of AI-driven identity scams that impact large numbers of people will be overestimated, while the potential for targeted attacks on single or small groups of individuals will be underestimated. The greatest risk from generative AI will continue to be mis- and dis-information.

Thomas Richards, principal security consultant, Synopsys Software Integrity Group

With the ever-expanding availability of AI/ML LLMs, companies are under pressure to use them for both internal and external tools and products. Both scenarios will introduce new risks that didn’t exist six months ago, and there’s little guidance on how to deploy these systems securely. Based on trends we have seen with early adoption of mobile and cloud technologies, I expect there to be some major breaches and compromises during the infancy of this technology.

AI/ML model data poisoning and secret extraction will continue to rise in popularity as attack paths against these systems. A whole new class of attacks is now possible against this techno-social domain where humans can find ways to manipulate, or social engineer, a computer into performing actions it is programmed not to.

I expect this space to expand quickly as new tooling is made available to assess and provide safeguards around how the technology is used.

Theresa Payton, CEO, Fortalice Solutions

“Franken-frauds” and deepfake AI “persons” will enter the workforce The ability to create synthetic identities will become automated and run in real time. These “persons” will use AI and big data analytics to test themselves and ensure they look authentic.

Joey Stanford, vice president of data privacy & compliance, Platform.sh

AI security will not keep up with threats next year, in part because the broad use of AI is outpacing our collective ability to understand it and establish guardrails. Among many inherent problems with AI models is the ability for political agendas and other motives to influence output.

We’ll see AI becoming increasingly popular in cyberattacks next year because AI never sleeps — you just turn it on, and it runs and learns. This is one reason why AI will find vulnerabilities and new exploits very quickly. Also, the use of AI to create phishing emails, virtually indistinguishable from a real sender, will leave companies struggling to prevent breaches by spear phishing attacks in 2024. Realistic fake voicemails and videos will just add to the chaos. While governments are taking steps to regulate AI, no regulation will ever be able to contain it entirely because laws always embody cultural norms. What’s permissible in China may not be in the EU or the U.S.

Kelvin Lim, director, security engineering, Synopsys Software Integrity Group

Individuals with no prior coding experience can now use AI to generate code. There have been numerous reports that cybercriminals have already used AI to develop malware and other malicious applications. While AI code generators can help companies’ development teams improve their efficiency, the codes generated by AI may not be secure. But AppSec vendors have also used AI to detect license compliance issues, attacks, and vulnerabilities in applications with greater accuracy and speed.

Dennis Kengo Oka, applications engineer, Synopsys Software Integrity Group

There is a strong trend toward the application of generative AI in the automotive industry, starting with generating or writing code but also to other tasks such as reviewing or analyzing code for vulnerabilities, processing test results, and assisting developers on triaging and fixing vulnerabilities.

It is also imperative for the automotive industry to be aware of the cybersecurity risks in using AI solutions. For example, don’t assume that AI will generate flawless code. Traditional AppSec solutions will still be necessary to ensure the development of high-quality, safe, and secure code.

More weak links in the software supply chain

Paul Roberts, publisher, The Security Ledger

Supply chain attacks will become endemic. One clear trend in recent years is the shift by malicious actors from attacks on public-facing IT infrastructure to more subtle compromises of software supply chains.

The reason is simple: Most enterprise networks today are bristling with security monitoring and detection tools. But development environments and CI/CD pipelines? Not so much. A clever attacker who can compromise a common piece of open source or third-party commercial code — say by planting a back door in an otherwise functional, look-alike package on a major open source package manager — stands a good chance of getting access not just to one sensitive and protected IT environment, but to tens, hundreds, or even thousands of them.

The hack of SolarWinds Orion is the classic example of this, but a similar incident played out in 2023 with the hack of a desktop client application at Voice over IP vendor 3CX. That was a two-tier supply chain hack that was aimed at cryptocurrency vendors who used the 3CX software.
The federal Cybersecurity and Infrastructure Security Agency (CISA) has been promoting secure development practices and software Bills Of Materials (SBOMs). But CISA has no power to force the adoption of best practices, and “pretty please” only goes so far. With little or no change in the Wild West atmosphere that has characterized software development for the last half-century, and few consequences for firms that drop the ball in securing software supply chains, there is little to no pressure on software development organizations to change their ways and, thus little reason to expect any change in the hockey stick-like trend lines for software supply chain hacks in 2024.

Mary D’Angelo, dark web threat intelligence and threat actor advisor, Searchlight Cyber (opinions are hers and not those of her company)

One of the major supply chain attacks of 2023 was on MoveIt by Cl0p, a prolific ransomware group. The group exploited a vulnerability in MoveIt’s software that gave them access to thousands of organizations, including larger ones that generally have a pretty sophisticated security posture. It’s likely that other ransomware groups will follow in Cl0p’s steps because of its success.

Boris Cipot, senior security engineer, Synopsys Software Integrity Group

Software supply chain security will remain a priority for many organizations, which will increase the pressure on the software development industry. As a result, the focus on DevSecOps practices and QA automation will grow. We will also see an increase in software development processes being supported by AI. We have already seen a heavy sprouting of companies offering AppSec tools using generative AI and this will only continue. However, it is advisable to take such “advancements” bit by bit as it’s still very early in the evolution of generative AI. Be it in DevSecOps, QA testing, or just putting together an SBOM, the systems will be proving their worth as well as their faults in the year ahead.

Privacy primacy

Joey Stanford

Consumers are more concerned about privacy than ever before, so it’s important to remember that those who make B2B buying decisions are consumers too, and their thinking as consumers will bleed into their other role, meaning data privacy will become part of B2B criteria. There is a dawning realization that data privacy and trust are closely interlinked, and that trust helps businesses gain and retain customers.

Eva Velasquez

An unprecedented number of data breaches in 2023 by financially motivated and nation-state threat actors will drive new levels of identity crimes in 2024, especially impersonation and synthetic identity fraud. This will, in turn, drive more adoption of biometric-based identity verification (not recognition) tools to prove people are who they claim to be.

The emotional toll of identity crimes will continue to increase, and assistance providers will struggle to meet the emotional recovery needs of victims. Identity crime victimization is too often classified as not creating trauma requiring victim support, despite the fact that 16% of respondents to one of our surveys contemplated suicide as a result of their identity crime.

Ransomware rising — again

Theresa Payton

As organizations do a better job protecting against and recovering from ransomware incidents, malicious cyber actors will move to another ploy as cryptocurrency prices fall from their meteoric rise. In a disturbing twist in 2024, these cybercriminals will hack into intelligent buildings and lock them down with people inside, demanding a hostage payment to release individuals.

Mary D’Angelo

Ransomware activity has been increasing. One result is that ransomware groups have more buying power, which allows them to purchase more and better exploits, increase their recruitment strategies, and leverage AI for their attacks, ultimately making them more sophisticated and more successful. Reuters reported that a well-known ransomware group, Black Basta, raked in $100 million in Bitcoin this past year alone.

Also, the Cl0p ransomware group gained notoriety for targeting enterprise-managed file transfer solutions for extortion-only attacks. They were extremely successful in exfiltrating the data of many organizations in less than a week and chose not to deploy ransomware to their victims. Because of that, I can only imagine it will inspire other ransomware groups to choose extortion over ransomware.

--

--

Taylor Armerding
Nerd For Tech

I’m a security advocate at the Synopsys Software Integrity Group. I write mainly about software security, data security and privacy.