AI Mind Control

Klaus Æ. Mogensen
Published in FARSIGHT · Jun 7, 2021

This article was originally published in SCENARIO Magazine issue 60, 2021. Visit cifs.dk/publications/magazine.

There is an old saying that for every one hundred people, ninety are sheep, nine are wolves trying to eat the sheep, and one is a shepherd trying to protect the sheep from the wolves. The point is that most people are easy victims of hustles devised by unethical crooks who know how to manipulate the ‘sheep’ to their own ends, whether that means swindling them out of money, making them support a cause that mainly benefits the ‘wolves’, or taking sexual advantage of them. Protecting them falls to some hard-working ‘shepherd’, an often thankless job. The wolves tend to be a bit smarter (or less naïve) than the sheep, understanding how to exploit the sheep’s emotions or trust without giving away their real goal, which is rarely to the sheep’s benefit. It is a kind of mind control, in which people are made to think certain thoughts and perform certain acts that the manipulator wants. We have recently seen in the US how effective this can be: a large number of people were persuaded to believe in conspiracy theories that benefit a certain political agenda, and some were even persuaded to attempt the violent overthrow of a democratically elected government.

Some people are demonstrably able to exercise ‘mind control’ over many others, getting them to do their bidding and even to think they are doing it of their own volition. Now imagine that in the future, artificial intelligence will be able to do the same thing, only far more effectively. Not a nice thought, is it? Unfortunately, this future may not be all that far away. We have already seen how chatbots have become very effective at spreading fake news and conspiracy theories and, not least, at persuading people that the stories are true; but this is just the beginning. After all, chatbots are only as good at manipulation as the people who program them, whereas artificial intelligence could potentially become superhumanly good at manipulating us. In fact, in some very specific situations, AI systems have already proven to be skilled manipulators.

In an experiment, Australian researchers developed deep-learning neural networks (a form of AI) that were tested against human subjects in various tasks. In one of them, the test subject was shown coloured figures in a sequence selected by the AI and was tasked with pressing a button when, for example, an orange triangle was shown, but not for other combinations of colour and shape. During the experiment, the AI learned to present the figures in an order that increased the test subject’s mistakes by 25 percent, a feat a human manipulator would probably find difficult to duplicate. In other tests, the test subject played an investor giving money to the AI, which would produce a return on the investment; this return in turn determined the money available for investment in the next round. In one version of the test, the AI was tasked with maximising its own profits, while in the other, it aimed for a fair distribution of profits between itself and the investor. According to the researchers, the AI was “highly successful” at both tasks.
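The trial-and-error dynamic described above can be illustrated with a toy sketch: an agent tries different ways of ordering the figures and learns which ordering drives a (simulated) subject's error rate up. Everything here is an invented assumption for illustration, not the researchers' actual setup: the strategy names, the error rates, the simulated subject, and the choice of a simple epsilon-greedy bandit as the learning method.

```python
# Toy sketch, assuming an epsilon-greedy multi-armed bandit as the learner.
# Strategies and their error rates are invented for illustration only.
import random

random.seed(0)

# Each "strategy" is a hypothetical way of ordering the figures shown to a
# subject, paired with a made-up true probability that it causes a mistake.
TRUE_ERROR_RATE = {
    "random_order": 0.10,
    "cluster_similar_shapes": 0.20,
    "prime_with_near_matches": 0.30,
}

def subject_makes_mistake(strategy: str) -> bool:
    """Simulated test subject: slips up with the strategy's true error rate."""
    return random.random() < TRUE_ERROR_RATE[strategy]

def observed_rate(errors, counts, strategy, default):
    """Observed mistake rate for a strategy, or a default if never tried."""
    return errors[strategy] / counts[strategy] if counts[strategy] else default

def train_manipulator(trials: int = 20_000, epsilon: float = 0.1) -> str:
    """Learn, by trial and error, which ordering maximises subject mistakes."""
    counts = {s: 0 for s in TRUE_ERROR_RATE}
    errors = {s: 0 for s in TRUE_ERROR_RATE}
    for _ in range(trials):
        if random.random() < epsilon:  # explore: try a random strategy
            strategy = random.choice(list(TRUE_ERROR_RATE))
        else:  # exploit: use the strategy with the best observed error rate
            strategy = max(TRUE_ERROR_RATE,
                           key=lambda s: observed_rate(errors, counts, s, 1.0))
        counts[strategy] += 1
        if subject_makes_mistake(strategy):
            errors[strategy] += 1
    # Report the strategy the agent found most effective at causing mistakes
    return max(TRUE_ERROR_RATE,
               key=lambda s: observed_rate(errors, counts, s, 0.0))

best = train_manipulator()
```

The epsilon parameter trades off exploration (trying unfamiliar orderings) against exploitation (repeating the ordering that has caused the most mistakes so far); the agent needs no model of the subject at all, only feedback on whether each attempt worked, which is what makes this kind of manipulation learnable.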

While these situations and others in the experiment were abstract and fairly simple, the experiment did show that the AI, through trial and error, became very good at steering the test subjects towards decisions that benefited the AI according to its predetermined success criteria. With further development and more advanced neural networks, this kind of manipulation could feasibly be applied to more complex real-life situations, nudging people towards certain actions and decisions. While this will no doubt be used by human wolves to their own ends, such a technology could also be used in positive ways, for example to nudge people towards more climate-friendly behaviour or healthier lifestyles. It could also be employed to warn users when they are being manipulated online and steer them towards actions that minimise the risk of being manipulated. The question remains, however, whether AI will be employed more, and with better success, by wolves trying to hustle people or by shepherds trying to protect people from being hustled. Judging by the overall limited success of measures employed today against fake news, conspiracy theories and other hustles, it may be an uphill battle. Perhaps most worrying is that as AI gets better at manipulating us, even those of us who are today skilled at identifying thought manipulation and phishing attempts may be fooled by the subtler manipulation of future AI. And then where will we be?

Cover photo: Minik Rosing by Ken Hermann.
