On-Device AI: A Cyberweapon in Every Pocket

Kevin O'Toole
AI: Purpose Driven Policy
5 min read

PC and smartphone manufacturers are racing to put AI chips into their devices. Eager to seize advantage and scared of being left behind, they bring the competitive urgency that makes the tech industry so vibrant. Some innovations will be epic fails, like Microsoft’s Orwellian “Recall” feature on Copilot+ PCs, while others will bring new creativity to applications like gaming and photography.

The on-device AI processing powering this innovation, however, should give both government and society pause.

One need only look at contemporary cybersecurity issues for a foreshadowing of what may lie ahead. Distributed Denial of Service (DDoS) attacks are the bane of companies, cloud providers and ISPs. In a DDoS attack, thousands or even millions of compromised devices begin bombarding a victim’s systems in an effort to overwhelm them through sheer volume of traffic.

The scale of DDoS attacks is shocking. Like well-organized saboteurs, infectious “bot” programs reside on compromised computers and lie in wait until they are called upon for a coordinated attack. The largest publicly known attack occurred in 2017, when attackers hurled 2.5 Terabits per second (Tbps) of traffic at Google by reflecting spoofed requests off roughly 180,000 exposed servers.
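To appreciate how diffuse that attack power is, a back-of-the-envelope calculation helps; the one-million-device botnet size below is an assumption for illustration, not a figure from the 2017 attack.

```python
# Back-of-the-envelope: how little each compromised device must
# contribute to a 2.5 Tbps flood. Device count is hypothetical.
attack_tbps = 2.5
devices = 1_000_000                                   # assumed botnet size

per_device_mbps = attack_tbps * 1_000_000 / devices   # Tbps -> Mbps
print(f"~{per_device_mbps:.1f} Mbps per device")      # ~2.5 Mbps
```

Each participant needs only a trickle of bandwidth, which is why the traffic is so hard to distinguish from normal use until it converges on the victim.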

Now consider a future where, literally, billions of devices have sophisticated AI chips embedded in them. These chips are capable of running AI-powered botnets that can coordinate their own attack planning and even rewrite their own code to perpetuate an attack. It needn’t be a DDoS attack. The attack can be much more sophisticated. It can be run to achieve nefarious human ends or — and this seems a credible outcome — it can become autonomous and simply run amok: the cyber version of a bioweapon that mutates when released into the real world.

How powerful can this be? Qualcomm’s new Snapdragon 8 Gen 3 chip purports to support a 10-billion-parameter AI model on the chip itself. This seems insignificant in a world where the leading AI models exceed 1 trillion parameters, but there is work under way to dramatically shrink model sizes. Commercial economics make the development of smaller, highly efficient models inevitable because they are much cheaper to operate.
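Why is a 10-billion-parameter model plausible on a phone at all? A rough sketch of the weights-only memory footprint at common quantization widths shows the arithmetic (real runtimes add overhead, so treat these as lower bounds):

```python
# Weights-only memory footprint of a 10B-parameter model at
# common quantization widths; runtime overhead is ignored.
params = 10e9
for bits in (16, 8, 4):
    gigabytes = params * bits / 8 / 1e9
    print(f"{bits}-bit weights: ~{gigabytes:.0f} GB")
# 16-bit: ~20 GB; 8-bit: ~10 GB; 4-bit: ~5 GB
```

At 4-bit quantization the weights fit comfortably within the 12-16 GB of RAM found on flagship phones.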

Researchers are making striking progress in this regard. Google’s Gemma 2 27B model is roughly 1.5% the size of GPT-4 (based on widely circulated estimates of GPT-4’s parameter count) yet delivers very solid results. The even smaller 9B version performs quite admirably, delivering results far faster and more economically while sacrificing only some quality.
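That 1.5% figure is easy to sanity-check against the widely circulated, never-confirmed estimate of GPT-4’s parameter count; the 1.8 trillion number below is that assumption, not a published fact.

```python
# Size ratio sanity check. GPT-4's parameter count is not public;
# ~1.8T is a widely circulated estimate, used here as an assumption.
gemma2_27b_params = 27e9
gpt4_params_estimate = 1.8e12

print(f"{gemma2_27b_params / gpt4_params_estimate:.1%}")   # 1.5%
```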

So a highly capable LLM that can run on a handheld or desktop device already exists. This LLM, however, remains a general-purpose AI that invests much of its capacity in breadth rather than depth of expertise. An LLM trained to perpetuate cyber attacks does not need to be good at writing poetry or to hit the 90th percentile on the bar exam. Much of the model can be trimmed (making it even smaller and faster) and those parameters reinvested to hone its cyber attack and code-writing capabilities.
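Trimming of this kind is standard machine-learning practice, not speculation. Here is a minimal sketch of magnitude pruning using PyTorch’s built-in pruning utilities; the layer size and 50% sparsity are arbitrary illustrations.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy layer standing in for one weight matrix of a large model.
layer = nn.Linear(4096, 4096)

# Zero out the 50% of weights with the smallest magnitudes, then
# make the pruning permanent. The sparsity level is illustrative.
prune.l1_unstructured(layer, name="weight", amount=0.5)
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"Zeroed weights: {sparsity:.0%}")   # ~50%
```

In practice pruning is combined with distillation and quantization, and fine-tuning on a narrow domain can redirect the freed capacity toward a single specialty.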

We must therefore consider a future where millions (or hundreds of millions) of AI-enabled devices have been quietly compromised and stand ready to run a GPT-4-quality model (or better) that has been optimized for cyber attacks.

The risk of an AI trojan is high. There is already Congressional angst about TikTok’s potential as a vehicle for the Chinese government to conduct various information and cyber operations. An application like TikTok — or a compromised version of another hyperscale app — could conceivably be used as a trojan horse to carry the initial payload of an AI cyber weapon. If an adversary can get the application loaded onto hundreds of millions of devices and only then trigger a stub of code that downloads an AI cyber weapon, it could take command of immense AI resources with breathtaking speed. Shutting down such an AI weapon may be virtually impossible. Compromised data centers can be powered down, but one cannot simply reboot 100 million smartphones simultaneously.

The tragic beauty of a botnet attack is that it requires few centralized resources. The whole point is to scavenge other people’s chips, software, network connectivity and electricity to conduct the attack. In an AI-fueled attack, the AI may use these scavenged resources to evolve itself “in the wild”. As in biological evolution, mutations carry a very low price of failure. The attacker does not have to spend money to try a new approach; it simply relies on the resources of the hosts it has compromised. It can evolve slowly and quietly, honing its approach while attracting little notice; or it can evolve rapidly, aggressively trying new attack vectors to overcome evolving defenses. The bots may self-organize into specialized teams conducting different parts of the attack, or determine to attack the very tools that would be used to remediate the problem. (CrowdStrike, perhaps?)

By design or accident, a self-evolving AI cyber weapon may behave much like Covid. Covid was a relatively stable virus until vaccines arrived. Subscale viral variants were evolving globally, but the mutations conferred no competitive advantage until vaccines began blocking the original strain. The other Greek-letter variants came pouring forth only after the vaccines arrived and the subscale variants fought — without central coordination — against the hardening immunity landscape.

An AI cyber weapon may behave much the same: quietly building mutations that are ready to scale as cyber defenses learn to fight back against the original version. It is conceivable that, like Covid, an AI cyber weapon perpetuating itself via on-device AI chips could become an endemic problem that never goes away but just continues to “mutate” in the wild.

As discussed in OpenAtom, AI chips are like the uranium centrifuges of the nuclear era — critical infrastructure necessary to scale capabilities. Western governments invest immense resources tracking the world’s nuclear centrifuges and work diligently to slow their availability to nefarious regimes. The US government is already at work hampering the distribution of the most highly capable AI chips. No doubt it is trying to keep count of advanced chips in adversarial hands to understand the risks posed by other nations’ AI capabilities.

Now both government and society must also consider the cumulative risk posed by on-device AI. On-device AI chips will put a small, but capable, AI centrifuge in every pocket and on every desk. If lashed together via botnets, these devices can power dangerous, and perhaps uncontrollable, AI cyberweapons.

AI cyberweapons that just sit quietly in 100M pockets.

Until they don’t.

(Image credit: Article logo created by Microsoft Copilot.)


Kevin O'Toole
AI: Purpose Driven Policy

I write about the need to develop national purpose and governance related to Artificial Intelligence.