2017 in Review: 10 AI Failures

Synced | Published in SyncedReview | Dec 24, 2017

This year, the artificial intelligence programs AlphaGo and Libratus triumphed over the world’s best human players at Go and poker, respectively. While these milestones showed how far AI has come in recent years, many remain skeptical about the emerging technology’s overall maturity, especially in light of the number of AI gaffes over the last 12 months.

At Synced we are naturally fans of machine intelligence, but we also realize some new techniques struggle to perform their tasks effectively, often blundering in ways that humans would not. Here are our picks of noteworthy AI fails of 2017.

Face ID cracked by a mask

Face ID, the facial recognition system that unlocks the new iPhone X, was heralded as the most secure AI activation method ever, with Apple boasting that the chances of it being fooled were one in a million. But then Vietnamese security firm BKAV cracked it using a US$150 mask constructed of 3D-printed plastic, silicone, makeup and cutouts. BKAV simply scanned a test subject’s face, used a 3D printer to generate a face model, and affixed paper-cut eyes and mouth and a silicone nose. The crack sent shockwaves through the industry, raising the stakes on consumer device privacy and, more generally, on AI-powered security.

Neighbors call the police on Amazon Echo

The popular Amazon Echo is regarded as among the more robust smart speakers. But nothing’s perfect. A German man’s Echo was accidentally activated while he was not at home, and began blaring music after midnight, waking the neighbors. They called the police, who had to break down the front door to turn off the offending speaker. The cops also replaced the door lock, so when the man returned he discovered his key no longer worked.

Facebook chatbots shut down

This July, it was widely reported that two Facebook chatbots had been shut down after communicating with each other in an unrecognizable language. Rumors of a secret superintelligent language flooded discussion boards until Facebook explained that the cryptic exchanges were simply the result of an oversight in training: the bots were never rewarded for sticking to intelligible English, so they drifted into their own shorthand.

Las Vegas self-driving bus crashes on day one

A self-driving bus made its debut this November in Las Vegas with fanfare; resident magicians Penn & Teller were among the celebrities queued for a ride. Within just two hours, however, the bus was involved in a crash with a delivery truck. While the bus was technically not responsible for the accident, and the delivery truck driver was cited by police, passengers complained that the smart bus was not intelligent enough to move out of harm’s way as the truck slowly approached.

Google Allo responds to a gun emoji with a turban emoji

A CNN staff member messaging through Google Allo received a suggested emoji of a person wearing a turban in response to an emoji that included a pistol. An embarrassed Google issued an apology and assured the public that it had addressed the problem.

HSBC voice ID fooled by twin

HSBC’s voice recognition ID is an AI-powered security system that allows users to access their accounts with voice commands. Although the bank claims it is as secure as fingerprint ID, a BBC reporter’s twin brother was able to access the reporter’s account by mimicking his voice. The experiment took seven attempts. HSBC’s immediate fix was to establish an account-lockout threshold of three unsuccessful attempts.

Google AI looks at rifles and sees helicopters

By slightly tweaking a photo of rifles, an MIT research team fooled the Google Cloud Vision API into identifying them as helicopters. The trick, known as an adversarial example, causes computers to misclassify images by introducing modifications that are undetectable to the human eye. In the past, adversarial attacks only worked if hackers knew the underlying mechanics of the target system; the MIT team took a step forward by triggering misclassification without access to such system information.
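The MIT attack was black-box, but the core idea is easiest to see in the classic white-box setting. Below is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch; the model, image and label here are placeholders for illustration, not the team’s actual setup.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return a copy of `image` perturbed by at most `epsilon` per pixel
    in the direction that increases the model's loss on `label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge each pixel by +/- epsilon along the sign of the gradient:
    # imperceptible to a person, but often enough to flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Run on a correctly classified photo, this typically yields a visually identical image that the same network labels as something else entirely. The MIT team achieved a similar effect without any gradient access by repeatedly querying the target API, which is what made their result notable.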

Street sign hack fools self-driving cars

Researchers discovered that by discreetly applying paint or tape to stop signs, they could trick the vision systems of self-driving cars into misclassifying the signs. A stop sign modified with the words “love” and “hate” fooled a self-driving car’s machine learning system into reading it as a “Speed Limit 45” sign in 100% of test cases.

AI imagines a Bank Butt sunset

Machine learning researcher Janelle Shane trained a neural network to generate new paint colors along with names to “match” each color. The colors may have been pleasant, but the names were hilarious. Even after several rounds of training on color-name data, the model still labeled a sky blue as “Gray Pubic” and a dark green as “Stoomy Brown.”
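Shane used a character-level recurrent network for the experiment. As a rough sketch of the idea in PyTorch, with an invented toy dataset and skipping the RGB conditioning her model used, a tiny LSTM can learn to babble paint-style names one character at a time:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the real paint-name dataset.
names = ["sky blue", "stormy brown", "forest green", "dusty rose"]
chars = sorted(set("".join(names)) | {"\n"})  # "\n" marks end-of-name
idx = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    """Predicts the next character of a name from the ones so far."""
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(200):  # tiny training loop over the toy names
    for name in names:
        seq = torch.tensor([[idx[c] for c in name + "\n"]])
        logits, _ = model(seq[:, :-1])      # inputs: all but the last char
        loss = nn.functional.cross_entropy(
            logits.squeeze(0), seq[0, 1:])  # targets: all but the first
        opt.zero_grad(); loss.backward(); opt.step()

# Sample a new "paint name" one character at a time.
x, state, out = torch.tensor([[idx["s"]]]), None, "s"
for _ in range(30):
    logits, state = model(x, state)
    c = chars[torch.multinomial(logits[0, -1].softmax(-1), 1).item()]
    if c == "\n":
        break
    out += c
    x = torch.tensor([[idx[c]]])
print(out)
```

With only four training names the samples will be near-gibberish, but the same mechanism trained on thousands of real paint names is what produced gems like “Stoomy Brown.”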

Careful what you ask Alexa for; you might get it

The Amazon Alexa virtual assistant can make online shopping easier. Maybe too easy? In January, San Diego news channel CW6 reported that a six-year-old girl had purchased a US$170 dollhouse simply by asking Alexa for one. That’s not all: when the on-air anchor repeated the girl’s words, saying, “I love the little girl saying, ‘Alexa, order me a dollhouse,’” Alexa devices in some viewers’ homes were triggered to order dollhouses of their own.

Journalist: Tony Peng | Editor: Michael Sarazen


AI Technology & Industry Review — syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research http://bit.ly/2TrUPMI | Twitter: @Synced_Global