Will the master manipulators of the future even be human? We can certainly be devious; we lie, we manipulate, we deceive. Humans have done this to one another from the beginning, and we have become quite good at it. But as we stare down the future of artificial intelligence, one has to wonder whether computers will become just as accomplished at deception.
A large part of cybersecurity revolves around deception. A hacker may convince you to click on a link that is supposed to show you a dancing dog but actually infects your computer with a virus. Malware may deceive a virus scanner by looking like a legitimate program. A worm may disguise its activity so that security specialists don’t notice anomalies. On the other side of the coin, cybersecurity specialists may create honeypot networks designed to lure hackers away from the real targets, allowing defenders to monitor and track the would-be attacker. To make these deceptions work, hackers and specialists must understand not only the technology but also human behavior. There may come a time when our monopoly on understanding human psychology and behavior is lost to powerful AI that knows us better than we know ourselves.
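The honeypot idea above can be made concrete with a minimal sketch: a fake network service that accepts connections, presents a decoy banner, and records whatever the visitor sends, while doing no real work. The port, banner text, and logging format here are illustrative assumptions, not any standard honeypot protocol.

```python
# Minimal honeypot sketch: a fake "service" that logs connection attempts.
# Everything here (banner, log format) is a simplified illustration.
import socket
import threading

def run_honeypot(host="127.0.0.1", port=0, max_conns=1):
    """Listen on a decoy service; log each caller's address and first bytes."""
    log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port=0 lets the OS pick a free port
    srv.listen()
    bound_port = srv.getsockname()[1]

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            with conn:
                conn.sendall(b"220 FTP service ready\r\n")  # decoy banner
                data = conn.recv(1024)                      # attacker's probe
                log.append((addr[0], data))
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return bound_port, log, t

# Demo: play the "attacker" against our own honeypot.
port, log, thread = run_honeypot()
attacker = socket.create_connection(("127.0.0.1", port))
attacker.recv(64)                    # read the decoy banner
attacker.sendall(b"USER admin\r\n")  # a typical probing command
attacker.close()
thread.join(timeout=5)
print(log)
```

A real honeypot would go much further, emulating an entire vulnerable system, but even this toy version shows the core trick: the deception only works if the fake service behaves the way an attacker expects a real one to behave.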
We have already seen automated bots convince thousands of people that they were talking to other humans. The site Ashley Madison reportedly used bots, in the absence of real women, to interact with customers and convince them they were chatting with a potential love interest. As these AIs improve, they will likely proliferate and be used in nefarious ways to manipulate large numbers of people. It is easy to imagine a bot that gains your trust, collects information, gains access to your secrets, and uses them to someone else’s advantage.
Recently, AlphaGo, an AI created by Google DeepMind, beat champion Lee Sedol in a five-game match of Go. Through a combination of algorithmic techniques, including machine learning and neural networks, the AI defeated a human at a game with more possible board positions than there are atoms in the observable universe. To beat Lee Sedol, AlphaGo had to develop some understanding of deception: it had to learn that certain moves succeed precisely because they hide the intentions behind a strategy. This is an instructive example, as an AI could learn from past failures and run millions of simulations to model how people behave when they encounter its bots.
As mentioned, AI can also be used by the good guys. We are likely to see an arms race in which battles are waged between powerful digital brains. As in all battles, the defense holds the advantage of home-turf familiarity, while the attacker has the advantage of surprise. If our own computers know us down to the core, they might warn us when there is reason to suspect we are being manipulated, and advise us when it appears we are being steered into harm’s way. How strange it will be when our computers are on the front lines, both protecting and exploiting our human vulnerabilities.