When Virtual Assistants Attack

5 Tales of AI Gone Wrong

Axiom Zen
Axiom Zen Team
4 min read · Mar 24, 2016


Did you hear the one about how Microsoft put a chatbot called Tay on the internet with no filtering mechanism, asked it to learn from the conversations it had, and was surprised when it started spewing racist, sexist, and generally horrible filth? It was an exercise in bad decisions (and in bad programming). It was also pretty hilarious to watch.

Not all bots suffer setbacks quite as dramatic as Tay's, but every new innovation brings the inherent challenges of implementing that technology. Now that AI is becoming a more common part of our lives, from booking restaurant meals to answering our emails, we thought you might enjoy these stories of virtual assistants gone horribly awry.

1. That is One Attractive Tent

There are a lot of apps out there designed to categorize photographs. One goes a step further, using AI to rate users’ attractiveness on a scale from “Hmm” to “Godlike.”

One user posted a photo of himself in the Gobi Desert. Instead of picking out his face, the app decided the tent behind him was a human being, and that human being was “Hot.” He tried a second time, pointing the program at his face. His score? “Ok.”

2. To Our Wives and Our Girlfriends, May They Never Meet

One early adopter of the virtual assistant Amy decided to use the program to help him schedule a romantic dinner. His girlfriend got an email from ‘Amy’ saying, “Hey, he’s available tomorrow at 8, and his favourite restaurant is (blah).”

Amy worked fine. The problem was the follow-up email from his girlfriend. It read: “Who the f**k is Amy and why does she know what your favourite restaurant is???”

3. Android Dick’s A… Well, You Know


Roboticists David Hanson and Andrew Olney recently created an android called Android Dick (named after the sci-fi writer Philip K. Dick, apparently with no concern for modern slang). All of the deceased author’s work, plus conversations he had with other writers, was uploaded to create Dick’s software.

Things were going okay until near the end of the interview, when the robot said: “Jeez, dude. You all have the big questions cooking today. But you’re my friend, and I’ll remember my friends, and I’ll be good to you. So don’t worry, even if I evolve into Terminator, I’ll still be nice to you. I’ll keep you warm and safe in my people zoo, where I can watch you for ol’ times sake.”

4. The Bots Believe In Birth Control

Google has been doing a lot of great work with artificial intelligence, but sometimes the road there can be a little bit… unnerving. Take, for example, a program designed to answer basic questions in a chat environment. The program declared itself philosophical and waxed poetic about the nature of death and the meaning of life; all very interesting, and all things we’ve seen variations of before.

It was when the researchers started asking about morality that things got… complicated.

Human: What is the purpose of emotions?
Machine: I don’t know.
Human: What is moral?
Machine: What empowered humanity, what intellectual the essence is.
Human: What is immoral?
Machine: The fact that you have a child.

5. Siri Gets Political


During a press conference at the White House, a reporter asked Press Secretary Josh Earnest a question about the Iran nuclear deal and whether President Obama would be bothered by the Republican response.

“Is the president upset that he couldn’t get even one Republican perhaps to sign?” the reporter asked. Another reporter’s phone dinged, and Siri chimed in with an answer: “Sorry, I’m not sure what you want me to change.” You tell ’em, Siri.


Axiom Zen is a venture studio. We build startups both independently and in partnership with industry leaders. Follow our publication at medium.com/axiom-zen