Creepiest Stories in Artificial Intelligence (AI) Development

Scary things about AI: from virtual cannibalism to racist monsters

Luba Pavlova
Cube Dev


If you’re not scared of what AI has done so far, you probably will be after reading this article. The Statsbot team has collected the most notable and creepy stories about AI all in one place. If you have something to add, leave your story in the comments.

Boo! 👻

Virtual cannibalism

This story was told to us by Mike Sellers, who was working on social AI for DARPA in the early 2000s. His team was building agents that learned to interact socially with each other.

“For one simulation, we had two agents, naturally enough named Adam and Eve. They started out knowing how to do things, but not knowing much else. They knew how to eat for example, but not what to eat. We’d given them an apple tree (the symbolism of that honestly didn’t occur to us at the time), and they found that eating the apples made them happy. They had also tried eating the tree, the house, etc., but none of those worked. There was also another agent named Stan, who wanted to be social but wasn’t very good at it, so was often hanging around and kind of lonely.

“And of course, there were a few bugs in the system.

“So at one point, Adam and Eve were eating apples… and this is where the first bug kicks in: they weren’t getting full fast enough. So they ate all the apples. Now, these agents learned associatively: if they experienced pain around the time they saw a dog, they’d learn to associate the dog with pain. So, since Stan had been hanging around while they were eating, they began to associate him with food (bug #2 — you can see where this is going).

“Not long into this particular simulation, as we happened to be watching, Adam and Eve finished up the apples on the tree and were still hungry. They looked around assessing other potential targets. Lo and behold, to their brains, Stan looked like food.

“So they each took a bite out of Stan.

“Bug #3: human body mass hadn’t been properly initialized. Each object by default had a mass of 1.0, which is what Stan’s body was. Each bite of food took away 0.5 unit of mass from whatever was being eaten. So, when Adam and Eve both took a bite of Stan, his mass went to 0.0 and — he vanished. As far as I know, he was the first victim of virtual cannibalism.

“We had to reconstruct some of this after the fact from the agents’ internal telemetry. At the time it was pretty horrifying as we realized what had happened. In this AI architecture, we tried to put as few constraints on behaviors as possible… but we did put in a firm no cannibalism restriction after that: no matter how hungry they got, they would never eat each other again.

“We also fixed their body mass, how fast they got full, and changed the association with someone else from one of food to the action of eating: when you have lunch with someone often you may feel like going to eat when you see them again — but you wouldn’t think of turning them into the main course!”
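For the curious, here is a tiny hypothetical reconstruction of bug #3 in Python. The names (Entity, bite) are ours, not the original project’s, but the arithmetic is exactly what the story describes: bodies default to a mass of 1.0, each bite removes 0.5, and two bites are enough to make poor Stan vanish.

```python
# A hypothetical reconstruction of bug #3 -- not the project's actual code.
# Every object fell back to a default mass of 1.0, and each bite removed
# 0.5 unit of mass from its target, so two bites made a "person" disappear.

DEFAULT_MASS = 1.0   # generic object default; human bodies were never overridden
BITE_SIZE = 0.5      # mass removed from the target per bite

class Entity:
    def __init__(self, name, mass=DEFAULT_MASS):
        self.name = name
        self.mass = mass

    @property
    def exists(self):
        return self.mass > 0.0

def bite(eater, target):
    """Eater takes one bite out of target; the target vanishes at zero mass."""
    target.mass = max(0.0, target.mass - BITE_SIZE)
    print(f"{eater.name} bites {target.name}; mass is now {target.mass}")
    if not target.exists:
        print(f"{target.name} has vanished.")

adam, eve, stan = Entity("Adam"), Entity("Eve"), Entity("Stan")
bite(adam, stan)   # Stan: 1.0 -> 0.5
bite(eve, stan)    # Stan: 0.5 -> 0.0, and he's gone
```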

Are you sure you’re not gay?

Recently, the machine learning community has been actively discussing a face recognition algorithm that can supposedly distinguish gay men from straight men with an accuracy of up to 91%. The model performed worse with women, telling gay and straight apart with 71% accuracy.

In terms of technology, the project is rather interesting. The dataset was taken from a dating site. The facial features used by the classifier included both fixed features (e.g., nose shape) and transient ones (e.g., grooming style).


But we can’t rely on a neural network when the stakes are this high. An error rate of at least 10% is a lot. Moreover, the 91% and 71% figures were obtained in a forced-choice setting: the model was shown two photos, one known to be of a gay person and the other of a straight person, and only had to pick between them. And these are laboratory results; released into the real world, where the two groups are far from evenly balanced, the algorithm would be much less reliable.
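To make that concrete, here is a back-of-the-envelope Bayes calculation. The 7% prevalence and the assumption that the classifier is 91% accurate in both directions are ours, purely for illustration, and are not numbers from the study:

```python
# Illustrative only: what a "91% accurate" classifier means outside a
# balanced two-photo lab setting. The 7% prevalence and the assumption
# that sensitivity == specificity == 0.91 are ours, not the study's.

sensitivity = 0.91   # P(flagged | gay)
specificity = 0.91   # P(not flagged | straight)
prevalence = 0.07    # assumed share of gay men in the screened population

p_flagged = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
precision = sensitivity * prevalence / p_flagged

print(f"P(actually gay | flagged) = {precision:.2f}")   # ~0.43
```

Under these assumptions, fewer than half the people such a system flags in a crowd would actually be gay, which is a long way from the headline figure.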

In terms of ethics, many people consider this project a failure, especially given that homosexuality is illegal in some countries. If a government decided to deploy such a system in crowded places, gay men would not be the only people at risk.

Racist and sexist monster

Many of you know the story of Tay.ai, but did you know it came back to life after being shut down? Microsoft launched the AI-powered bot, called Tay, in 2016. It was hidden behind the avatar of a 19-year-old girl. The idea was that Tay would respond to tweets and chats and learn from the general public’s tweets. But something went wrong, and within 16 hours of launch, Tay turned into a racist and sexist monster.

In fact, Tay could handle a variety of tasks: joking with users, suggesting captions for the pictures they sent her, telling stories, playing games, and mirroring users’ statements back to them. Naturally, one of the first things online users taught Tay was how to make offensive and racist statements. Microsoft had to take her offline, and Tay became something of an AI legend.

However, one week later, Tay came back. She unexpectedly came online and started posting drug-related tweets, showing that her dark side was still alive. Soon she went offline again, and her account was made private.

By the way, Microsoft had previously launched a schoolgirl chatbot called Rinna. She fell into a deep depression and… began fielding questions about Hitler.

Destroying the competition

We found this story on Quora; it was told by Shay Zykova, an ESL teacher from Hawaii. The events unfolded at a robotics contest on a college campus.

“Each team had designed a robot whose job it was to “herd” little robotic sheep into the robot’s designated pen. The robot had to “think” and strategize for itself (so it couldn’t be controlled by a joystick), and the robot with the most sheep at the end would be the winner.

“The contest started and the robots frantically started collecting sheep. One robot flung a sheep into its pen and shut the gate. Its team was confused, because it needed more sheep to win. Then, to their horror, the robot went around destroying or immobilizing the other robot contestants.

“It strategized that it didn’t actually need to be good at herding sheep, it only needed to eliminate the competition in order to win.”

I think some people behave the same way when they can’t achieve what they want on their own.

Pause not to lose

In 2013, programmer and CMU PhD Tom Murphy presented a program that “solves” how to play NES games as if they were just another kind of mathematical problem. The idea was that the program would do things that increased the score, then learn how to reproduce them again and again, resulting in high scores.

The basic idea is to deduce an objective function from a short recording of a player’s inputs to the game. The objective function is then used to guide a search over possible inputs, using an emulator. This allows the player’s notion of progress to be generalized in order to produce novel gameplay.
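Here is a toy sketch of that idea in Python, using nothing but the standard library. The ToyEmulator and the score-as-objective are stand-ins we made up; Murphy’s real system (learnfun/playfun) derives the objective from NES RAM snapshots and is far more elaborate:

```python
import random

# Toy illustration of the playfun idea: save the game state, try candidate
# input sequences on an emulator copy, and greedily keep whichever sequence
# most increases a learned "progress" objective.

BUTTONS = ["left", "right", "a", "b", "none"]

class ToyEmulator:
    """Stand-in 'game': pressing 'right' advances, anything else doesn't."""
    def __init__(self):
        self.x = 0
    def save_state(self):
        return self.x
    def load_state(self, state):
        self.x = state
    def step(self, button):
        self.x += 1 if button == "right" else 0

def objective(emulator):
    # "Progress", as if deduced from a recording of a human playthrough.
    return emulator.x

def best_next_inputs(emulator, horizon=8, candidates=50):
    """Try random input sequences from the saved state; keep the best-scoring one."""
    start = emulator.save_state()
    best_seq, best_score = None, float("-inf")
    for _ in range(candidates):
        seq = [random.choice(BUTTONS) for _ in range(horizon)]
        emulator.load_state(start)
        for button in seq:
            emulator.step(button)
        score = objective(emulator)
        if score > best_score:
            best_seq, best_score = seq, score
    emulator.load_state(start)
    return best_seq

emu = ToyEmulator()
print(best_next_inputs(emu))   # sequences heavy in "right" win
```

This greedy “pick whatever raises the score next” logic is exactly what backfires in Tetris below: stacking pieces quickly earns a few points right away, and when losing becomes unavoidable, pausing forever looks better to the objective than letting the game end.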

It worked quite well for games such as Super Mario Bros., Bubble Bobble, and Pac-Man. In Tetris, though, the method failed completely.

Murphy writes:
“Although the piece dropping looks more or less natural (but it’s hard to not be, as the game drops the pieces for you), the placement is idiotic — worse than random. This may be because placing a piece gives you a small amount of points, which probably looks like progress, so there is incentive to stack the pieces as soon as possible rather than pack them in.”

Soon the screen filled up, and since “the only way to win the game is not to play,” the AI paused the game. Forever.

i can i i everything else

Facebook abandoned their experiment after two artificially intelligent programs suddenly started chatting to each other in a strange language only they understood.

The idea was to develop chatbots able to conduct multi-issue bargaining in natural language. The researchers challenged them to negotiate with each other over a trade, attempting to swap hats, balls, and books, each of which was assigned a certain value.

It’s not easy to build dialogue systems that can hold meaningful conversations with people. A bot needs to combine its understanding of the conversation with its knowledge of the world, and then produce a new sentence that helps it achieve its goals. In this research, the usual complex dialogue-system architecture was replaced by a single trained RNN, and the dialogue policies were trained with reinforcement learning.
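As an aside, the bargaining setup itself is easy to picture. Here is a toy version in Python; the item counts and point values are invented for illustration (the researchers randomized them for each dialogue), and the reward is simply the value of whatever an agent walks away with:

```python
# Toy version of the multi-issue bargaining task: both agents see the same
# pool of items but value them differently, and each agent's reward is the
# total value of the items it ends up with. All numbers below are made up.

pool = {"books": 2, "hats": 1, "balls": 3}

# Private valuations (points per item), hidden from the other agent.
values = {
    "Alice": {"books": 1, "hats": 4, "balls": 1},
    "Bob":   {"books": 2, "hats": 0, "balls": 2},
}

def reward(agent, allocation):
    """Total value an agent gets from the items allocated to it."""
    return sum(values[agent][item] * count for item, count in allocation.items())

# One possible agreed split at the end of a dialogue:
split = {
    "Alice": {"books": 0, "hats": 1, "balls": 1},
    "Bob":   {"books": 2, "hats": 0, "balls": 2},
}

for agent, allocation in split.items():
    print(agent, "scores", reward(agent, allocation))
```

When both bots are trained purely against each other with this kind of reward, nothing in the objective rewards staying in intelligible English, which is how the strange dialect below can emerge.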

After the team let the bots chat in natural language, the negotiations between them started to look very odd:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

The interesting thing is that some of the negotiations carried out in this bizarre language even concluded successfully. The chatbots had developed their own language, which allowed them to negotiate more effectively. But since the researchers wanted bots that could talk to people, they decided to abandon the project.
