How to weaponize AI today

Ben Mann
HackerNoon.com
5 min read · Jun 27, 2017


Toys like Prisma and FaceApp (demo photo above) show that cutting-edge AI research can be transformed into production systems in a matter of months. These apps are fun to use, so they make AI seem gimmicky. In this post I’ll give an overview of some recent AI research and describe the potential dangers when it is used in production systems. It’s not clear how we can effectively mitigate the resulting social and political chaos. I want you to be aware of what’s possible and start thinking about solutions.

Impersonate anyone doing anything

In the 2016 US election, fake news may have influenced voters, and it was only text. In a few years, people will be able to cheaply and believably synthesize fake videos of politicians saying, and possibly doing, arbitrary things.

The following video from June 2016 demos real-time manipulation of video so that any actor, politician, etc. can be made to look like they’re saying whatever an attacker wants.

While that’s scary, with a few more years of refinement, it’ll be hard or impossible to distinguish a fake from the real thing.

When combined with similar technology for transforming voices, like lyrebird.ai, we will soon have extremely convincing fake news.

Perhaps Americans will quickly learn to check their sources, or trusted organizations will be careful about what they post. What about places where education is worse? Afghanistan’s literacy rate is just 38%. Imagine the effects of a hijacked local TV broadcast showing a synthesized American politician calling for the extermination of their country. What kind of world would that create?

Manipulate financial markets

Hedge funds already use neural nets to power their trading platforms. If we could manipulate these automated systems to do whatever we want, we could get very rich by causing them to move the market in a predictable way, or we could intentionally cause a flash crash like the one in 2010.

AI image classification systems show us how this might be done. Consider this example from Explaining and Harnessing Adversarial Examples:

By adding a tiny bit of what looks like noise to the picture of the panda, an attacker can make the classifier confidently decide that it’s actually a gibbon (a kind of ape). To a human observer the image looks unchanged. The attack is invisible.
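
The paper’s recipe, the fast gradient sign method, is only a few lines of code. Below is a minimal sketch assuming a differentiable PyTorch classifier; the stand-in model, the example tensor, and the class index are illustrative, not the paper’s exact setup.

```python
# Minimal sketch of the fast gradient sign method (FGSM).
# `model` stands in for any differentiable image classifier.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 1000))  # stand-in classifier
model.eval()

def fgsm_attack(model, image, true_label, epsilon=0.007):
    """Return an adversarial copy of `image` that looks unchanged to a human."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # Step *up* the loss gradient: a tiny nudge per pixel, in the worst direction.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)   # pretend this is the panda photo
y = torch.tensor([388])          # its correct class index (illustrative)
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())   # each pixel moved by at most epsilon
```

Using the sign of the gradient, rather than the gradient itself, keeps every pixel’s change bounded by epsilon, which is why the perturbed image looks unchanged to us.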

To attack the classifier in this way, the attacker doesn’t even need access to it: he can train a model of his own to do the same task, learn how to attack his substitute, and the same perturbation will often fool the original.

To apply this to finance, an attacker could train an investor bot of his own on historical market data. To attack it, he would simulate targeted trades until his bot is forced to take the action he wants, like buying or selling a lot of a particular stock. From there, he would make the same trades on the real market and hope the attack transfers to investor bots he doesn’t control. While this may seem difficult to pull off, there’s a strong monetary incentive, and where there’s money, there are people willing to put in the work to make it happen.
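
Hypothetically, the substitute-and-transfer pattern could look something like the sketch below; the surrogate trading model, feature window, and action labels are all invented for illustration, not a real trading system.

```python
# Hypothetical sketch: attack a surrogate you train yourself, then hope the
# perturbation transfers to trading bots you don't control.
import torch
import torch.nn as nn

# Surrogate "investor bot": maps a window of market features to buy/hold/sell.
surrogate = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 3))
# ... train `surrogate` on historical market data here (omitted) ...

def craft_market_perturbation(features, target_action=0, epsilon=0.01):
    """Find a small change to observable market features (e.g. order-book
    volumes) that pushes the surrogate toward `target_action` (0 = buy)."""
    features = features.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(surrogate(features),
                                       torch.tensor([target_action]))
    loss.backward()
    # Step *down* the loss for the action we want the victim bot to take.
    return (features - epsilon * features.grad.sign()).detach()

window = torch.rand(1, 32)              # a snapshot of recent market features
nudged = craft_market_perturbation(window)
```

The attacker would then place real orders that reproduce the nudged feature pattern and hope that bots he doesn’t control respond the way his surrogate did.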

A similar exploit could manipulate the behavior of self-driving cars. For example, confusing images in the target car’s field of view could cause it to “deliberately” crash. This could be as simple as someone pasting a special decal on a stop sign. The decal might look like nothing to a human, just like the image of the panda above looks unmodified, but it could compel the car’s AI system to take some special action.

Undetectable malware

Malware detectors keep our computers from getting hijacked, except in cases like 2017’s WannaCry attack. But what if the malware could always mutate to avoid detection while still performing its function? How many more WannaCrys would there be?

Recent research showed neural nets mutating malware so that machine learning-based malware detectors consistently fail to catch it. Hand-tuned systems may still work, but as attacks grow more diverse and easier to generate, we may be unable to keep up.
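
As a toy illustration of the evasion loop (not the cited research’s exact method), here is a sketch that greedily adds benign-looking features to a malware sample’s feature vector until a surrogate machine learning detector stops flagging it; the detector, feature space, and data are all stand-ins.

```python
# Illustrative evasion loop: only *add* features (imports, strings, section
# names), never remove them, so the malware keeps working.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 64))   # toy binary feature vectors
y = rng.integers(0, 2, size=500)         # toy labels: 1 = malicious, 0 = benign
detector = LogisticRegression(max_iter=1000).fit(X, y)

def evade(sample, detector, max_edits=20):
    """Flip on benign-looking features until the detector predicts benign."""
    sample = sample.copy()
    for _ in range(max_edits):
        if detector.predict([sample])[0] == 0:
            break
        weights = detector.coef_[0]
        candidates = np.where(sample == 0)[0]
        if candidates.size == 0:
            break
        # Add the absent feature that most strongly signals "benign".
        sample[candidates[np.argmin(weights[candidates])]] = 1
    return sample

mutated = evade(X[0], detector)
```

The research goes further by using neural nets to automate this search, but the loop is the same: mutate, test against a detector, repeat.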

Leaking private information

Even without access to your computer, an attacker may be able to reach your private records stored in the cloud. Personalization systems, like Amazon predicting your next purchase, are built on modern AI consuming your cloud data. In particular, they support differential privacy, which is meant to keep individual users’ data private while still allowing the system to generalize usefully. In a paper published last year, researchers demonstrated a perfect AI attack against a standard differential privacy technique. While there are other ways to preserve privacy, it will be a constant arms race to keep private data private.
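
For context on what differential privacy promises, here is a minimal sketch of the classic Laplace mechanism for a count query. It is not the specific technique the researchers attacked, and the dataset and query below are made up.

```python
# Minimal sketch of the Laplace mechanism: answer an aggregate query with
# enough noise that any single user's presence barely changes the answer.
import numpy as np

def private_count(records, predicate, epsilon=0.5):
    """Noisy answer to "how many users match `predicate`?"."""
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1  # one user changes the count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

purchases = [{"user": i, "bought_tinfoil": i % 7 == 0} for i in range(1000)]
print(private_count(purchases, lambda r: r["bought_tinfoil"]))
```

A larger epsilon means less noise and weaker privacy; the arms race the post describes is over whether guarantees like this hold up against determined attackers.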

Many of these dangers come not from the intelligence of AI but from its vulnerabilities, and not from any innate destructive tendency but from its use by humans.

Sentient, malevolent robots are not the problem

The above examples give a taste of how AI can be used maliciously today. I’m sure people will think of many more uses as our techniques become more powerful. The machines don’t need to be sentient, they don’t have to be physical robots, and they don’t have to hate us. Think of AI like nuclear engineering: it is a powerful tool that can be used for good or bad.

Some researchers are already working on defense mechanisms, but those defenses may not be enough, and the researchers need our help. New techniques will create new vulnerabilities. Raising awareness of the vulnerabilities and the applicable defenses is also a hard and important task.

We need more people working on ways to mitigate the damage, both technical and nontechnical. Send me a message to let me know what you think, help brainstorm, or check out this article for how you can help.

