Suckering the Artificial Chump

Jim Burrows
Personified Systems
4 min read · Aug 23, 2016
Phishing for AIs. Photo credits: Nao robot by Stephen Chin (CC BY 2.0), fishing lure by Kanan Lures (CC BY-SA 3.0)

For the last year or so, I’ve been writing about many of the ethical issues that arise in the area of Artificial Intelligence, and about what I’ve been calling “personified systems”: systems that, regardless of their level of intelligence, interact with us more as if they were people than as mere tools. For even longer, people have been warning about the dangers of increasingly intelligent AIs. There is, however, a whole class of problems that arises from their lack of intelligence. Naive and unsophisticated actors make excellent “marks” for con men, targets for phishing, and unwitting tools.

Any hacker, or former hacker, can tell you that one of the most reliable hacking techniques is “social engineering”: playing on the gullibility of the weakest security link in any system, the human beings. The computer-security firm RSA, for instance, was greatly embarrassed in 2011 when its own security was breached. As it turned out, the hackers used a “spear-phishing” attack to gain access. They targeted a number of RSA employees with bogus emails containing an Excel spreadsheet claiming to be the “2011 Recruitment Plan”. One of the executive assistants fell for it and opened the malicious spreadsheet.

Prime targets for phishing and other social engineering attacks are those with substantial power or authority and a limited understanding of the nature of the threats they face. And that is just what an awful lot of AI development is producing: powerful autonomous systems with narrowly focused expertise. It is easy to envisage how, as a hacker, one might trick one of these systems with phishing tailored specifically to it.

In some ways, this is just an escalation of the sort of “gaming the system” that many search engine optimization (SEO) specialists have engaged in as part of the “arms race” with Google, striving to manipulate their position in search results using quirks and loopholes. What’s different is the goal. Rather than getting the search engines to rate one page higher than others, hackers could be playing for larger stakes.

Last month the RAND Corporation published an editorial, “The Ethics of Artificial Intelligence in Intelligence Agencies”, on The National Interest’s blog. In it, Cortney Weinbaum lays out a few examples of the kind of stakes that could come into play. To explain the risks in the intelligence and national-security arenas, she starts with an example from the realm of stock trading.

In this scheme, an autonomous trading system issues a great many apparently unrelated buy orders. Other systems see the apparent demand and start actively buying and selling the stock, which pushes up the price; the original system then sells at the higher price and cancels its buy orders, all within a second or so. She further cites SEC statistics showing that 95–97.5% of trade orders are cancelled to suggest that this has become a major mechanism.
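To make the mechanics concrete, here is a minimal sketch in Python of the spoof-and-cancel sequence she describes. Everything in it — the toy market, the price reaction, the order sizes — is invented purely for illustration; it is not how any real trading system works, only the shape of the trick.

```python
# Toy illustration of the spoof-and-cancel sequence described above.
# The price reaction, order sizes, and timing are all invented for illustration;
# real markets and real trading systems are vastly more complicated.

from dataclasses import dataclass, field

@dataclass
class ToyMarket:
    price: float = 100.00                          # current quoted price
    open_buys: list = field(default_factory=list)  # resting spoof orders

    def place_buy(self, qty: int, limit: float) -> None:
        """Post a buy order the spoofer never intends to have filled."""
        self.open_buys.append((qty, limit))
        # Other algorithms read the apparent demand and bid the price up a little.
        self.price += 0.01

    def cancel_all_buys(self) -> None:
        self.open_buys.clear()

def spoof_and_cancel(market: ToyMarket, shares_held: int = 1_000) -> float:
    start_price = market.price
    # 1. Flood the book with a great many apparently unrelated buy orders.
    for _ in range(50):
        market.place_buy(qty=100, limit=market.price - 0.05)
    # 2. Sell real shares into the artificially inflated price.
    proceeds = shares_held * market.price
    # 3. Cancel the spoof orders before any of them can be filled.
    market.cancel_all_buys()
    return proceeds - shares_held * start_price   # gain from the manipulation

if __name__ == "__main__":
    print(f"Toy gain from one spoofing cycle: ${spoof_and_cancel(ToyMarket()):,.2f}")
```

The point of the sketch is simply that the “victim” here is not a person at all: the algorithms that react to the fake demand are the ones being conned.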

From here, she goes on to ways that foreign powers could use disinformation to mislead the AIs that are beginning to play important roles in collecting and analyzing foreign intelligence. She points out that the Department of Defense has a directive requiring a “human in the loop”, so that humans can monitor and override any weapons system. The intelligence community, at present, has no equivalent directive, but she asks what will happen if and when it does:

“One day the computer warns of an imminent attack, but the human analyst disagrees with the AI intelligence assessment. Does the CIA warn the president that an attack is about to occur? How is the human analyst’s assessment valued against the AI-generated intelligence?”

Or imagine that a highly sophisticated foreign country infiltrates the most sensitive U.S. intelligence systems, gains access to the algorithms and replaces the programming code with its own. The hacked AI system is no longer capable of providing accurate intelligence on that country.

Based on such questions, she strongly recommends that the intelligence community proactively start to deal with the risks in this area, developing both policies and systems that build in the needed safeguards and oversight.
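It is worth pausing on what one such safeguard might look like in practice. Here is a minimal sketch in Python of the kind of “human in the loop” gate the DoD directive describes: the system may log and recommend on its own, but above some stakes threshold it cannot act until a human analyst concurs or overrides. The data fields, the threshold, and the confirmation callback are all assumptions of mine, not anything from the RAND piece.

```python
# Minimal sketch of a "human in the loop" gate: routine findings are handled
# automatically, but high-stakes assessments require explicit human sign-off.
# The Assessment fields, stakes threshold, and confirm callback are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Assessment:
    summary: str          # e.g. "indicators of an imminent attack"
    confidence: float     # the system's own confidence, 0.0 to 1.0
    stakes: int           # 0 = routine, 10 = warn-the-president serious

def act_on_assessment(
    assessment: Assessment,
    human_confirm: Callable[[Assessment], bool],
    stakes_threshold: int = 5,
) -> str:
    """Log routine findings automatically; never escalate high-stakes ones unaided."""
    if assessment.stakes < stakes_threshold:
        return f"Logged routine assessment: {assessment.summary}"
    # Above the threshold the system can only recommend; a human analyst decides.
    if human_confirm(assessment):
        return f"Escalated with human concurrence: {assessment.summary}"
    return f"Held for review; the analyst overrode the AI: {assessment.summary}"

if __name__ == "__main__":
    warning = Assessment("indicators of an imminent attack", confidence=0.92, stakes=9)
    # A stub analyst who disagrees with the machine, as in Weinbaum's scenario.
    print(act_on_assessment(warning, human_confirm=lambda a: False))
```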

As I read the RAND piece, I couldn’t help thinking about a couple of items that have been in the news of late. First, sparked by the coming election, there have been stories about the power and responsibility that the President, as Commander in Chief, must exercise, which result in him always being near the “nuclear football” and carrying the “biscuit” that unlocks it. The second is one of the most horrific forms of social-engineering hack that we see today: “swatting”, the placing of hoax 911 calls intended to bring police SWAT teams into action against the victim. Juxtaposing these is a rather sobering experience.

Swatting has been called a form of terrorism, but so far as we have heard publicly, it has only been used by hackers pursuing personal vendettas, harassment, or internet and social-media political agendas such as “Gamergate”, and has not been used by state actors or terrorist organizations. One can readily imagine, however, terrorist organizations such as ISIS/ISIL/Daesh or al Qaeda, which regard both the US and Russia as enemies, adopting the technique if they could find the mechanism. Convince the US that the Russians are attacking, or vice versa, and “Rome” or the “Great Shaitan” in both its forms takes care of itself.

However, “swatting” is only one of the most extreme forms of social engineering in the hacking toolkit. People can be tricked into far more mundane mistakes, resulting in a wide range of theft, fraud, and identity crime. The important thing for us to realize is that as AIs, personified systems, and other autonomous agents become embedded in our social media, our lives, and society in general, and as they are delegated more power and agency, they become potential victims for con men and social engineers. Their intelligence may be very impressive within their specialized fields, but it has very distinct limits, and as such they can easily be tricked into serving as unwitting tools.

