9 Times Artificial Intelligence Failed And What We Can Learn As Marketers

Meg Grasmick
Published in Trapica
7 min read · Nov 5, 2019
Orange and white robot standing among cars
Photo by NeONBRAND on Unsplash

Artificial Intelligence is a buzzword that has turned virtual assistants like Alexa into literal household names. Whether we fear the technology or embrace it, it is still in its infancy and has the potential to create problems as well as solve them. While AI can improve marketing performance, understand voice commands and even replace us behind the wheel, we have to be sensitive to its limitations. AI is useless if not paired with human creativity, ingenuity and empathy. Moreover, it’s not “set and forget”: unlike its simplistic sci-fi portrayals, AI must be constantly updated lest we fall victim to its potential train wrecks.

Here are nine times artificial intelligence failed, and what we can learn.

1. Watson for Oncology

One of the greatest AI fails of all time was Watson for Oncology. A team of engineers from IBM designed this software with the far-reaching goal of eradicating cancer. The result couldn’t have been further from the objective: Watson began making incorrect and dangerous treatment suggestions, and it soon became apparent that the AI did not know what it was doing. So who was to blame? The error largely traced back to the engineers who programmed it. The patterns the software learned were built on hypothetical patients instead of real cases, which led to frequent errors.

Lesson: Build your algorithm or software on a foundation of logic, scalability and attainability.

2. Microsoft’s AI Chatbot Corruption

Microsoft’s AI fail may be slightly less terrifying than Watson’s, but it was certainly more embarrassing. In an effort to create a relatable chat experience, the company built a bot that essentially spoke like a teenager, or at least tried to. The Internet (the trolls, that is) decided it would be funny to exploit the bot’s machine learning and teach it dangerous and inappropriate rhetoric.

Lesson: Anticipate trolling. Trolls may outwit you and corrupt your machine learning model, so vet everything that reaches your training loop.
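
To make that concrete, here is a minimal sketch of a moderation gate that sits between user messages and an online-learning chatbot. The blocklist, thresholds and stubbed helpers are hypothetical stand-ins, not Microsoft’s actual pipeline.

```python
# Hypothetical moderation gate for an online-learning chatbot.
# Blocklist, thresholds and stubs are illustrative, not Microsoft's pipeline.

BLOCKLIST = {"example_slur", "example_conspiracy"}  # stand-in for a real toxicity model

def is_safe_for_training(message: str, sender_history: list) -> bool:
    """Reject messages that should never reach the learning loop."""
    lowered = message.lower()
    if any(term in lowered for term in BLOCKLIST):
        return False
    # Coordinated trolling often looks like many near-identical messages,
    # so drop repeats from the same sender as a cheap heuristic.
    if sender_history.count(message) >= 3:
        return False
    return True

def generate_reply(message: str) -> str:
    return "..."  # stub for the bot's actual inference step

def handle_message(message: str, sender_history: list, training_queue: list) -> str:
    reply = generate_reply(message)
    if is_safe_for_training(message, sender_history):
        training_queue.append((message, reply))  # only vetted text trains the bot
    sender_history.append(message)
    return reply

history, queue = [], []
handle_message("hello there", history, queue)        # queued for training
handle_message("example_slur rant", history, queue)  # answered, never trained on
print(len(queue))  # 1
```

The point is the separation: the bot can still reply to anything, but only vetted exchanges are allowed to update the model.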

3. Face ID Unlocked by 3D Masks

Apple gets a lot right, but even it can be outsmarted. Not too long ago, Apple came out with its Face ID feature to make logging into your iPhone more convenient by eliminating the passcode requirement. Apple claimed that Face ID would guard against spoofing, but a Vietnam-based security firm decided to test the limits and defeated the lock with creepy 3D-printed masks. While there is certainly skepticism about this claim, the fact that even the most secure and thoroughly tested forms of AI can be breached is a lesson to us all.

Picking an avatar on iPhone
Photo by Szabo Viktor on Unsplash

Lesson: Prioritize privacy and stay aware of what hackers are doing so you can improve the security you offer your customers.

4. Chinese Billionaire, or Jaywalker?

Facial recognition issues are a theme in China, too, where laws and regulations are far more severe than in the U.S. For example, facial recognition technology is now being used to catch jaywalkers in major Chinese cities to reduce accidents and maintain law and order. The only problem is that this AI has no depth perception; in other words, it can’t tell the difference between a 3D human and a 2D image of one. As a result, a Chinese billionaire was falsely accused of jaywalking when the system flagged her face on an ad running on the side of a passing bus. She wasn’t too concerned, and the matter was resolved quickly, but there is certainly something to be learned here.

Lesson: Recognize and quickly address algorithm errors in areas like liveness detection, where distinctions that are obvious to humans are not yet understood by machines.

5. Uber’s Self-Driving Car Experiment Causes Harm

There are worse examples of AI on our streets than jaywalking. In 2018, a self-driving Uber car failed to account for a woman walking her bike across the road in Tempe, Arizona. She did not survive the collision. The vehicle was in autonomous mode, in which its built-in emergency braking system was disabled.

Of the three modules self-driving AI relies on (perception, prediction and response), perception is the most difficult. The system first registered the woman as an unknown object, then as a vehicle, and finally as a bicycle. About a second before the crash, the software determined that emergency braking was needed, but that function was not available in autonomous mode. The human monitoring the drive was looking at the display screen and didn’t realize what was happening until it was too late.
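
Reporting on the crash suggested that each reclassification cost the system its picture of where the object was heading. As a rough illustration of the alternative, here is a hedged Python sketch (emphatically not Uber’s actual stack) of a tracker that keeps one motion history per object even as its class label flips, so the braking decision can key off trajectory rather than a stable label.

```python
# Illustrative only: keep one motion track per object even when its class
# label flips, so braking can rely on trajectory, not on classification.

from dataclasses import dataclass, field

@dataclass
class Track:
    positions: list = field(default_factory=list)  # distance ahead, meters
    labels: list = field(default_factory=list)

    def update(self, distance_m: float, label: str) -> None:
        self.positions.append(distance_m)
        self.labels.append(label)  # the label may change; history is preserved

    def closing_speed(self, dt_s: float) -> float:
        if len(self.positions) < 2:
            return 0.0
        return (self.positions[-2] - self.positions[-1]) / dt_s

def should_emergency_brake(track: Track, dt_s: float, ttc_threshold_s: float = 1.5) -> bool:
    """Brake when estimated time-to-collision drops below the threshold."""
    speed = track.closing_speed(dt_s)
    if speed <= 0:
        return False
    time_to_collision = track.positions[-1] / speed
    return time_to_collision < ttc_threshold_s

# The object is reclassified at every step, but braking still fires because
# the motion history survives the label changes.
track = Track()
for distance, label in [(30.0, "unknown"), (22.0, "vehicle"), (14.0, "bicycle")]:
    track.update(distance, label)
print(should_emergency_brake(track, dt_s=0.5))  # True (TTC is roughly 0.9 s)
```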

The company halted the testing of these cars for several months, but where the technology goes from here remains to be seen.

Lesson: Of all the areas AI must be trained in, human safety should be a top priority. Both humans and machines must perform at their best.

6. Amazon Flirting with Gender Bias?

Amazon prides itself on equality, diversity and inclusion. It may come as a surprise, then, that the company created an AI that evolved into a sexist hiring manager. The tool filtered through resumes to suggest the best possible employees. However, it was trained on resumes submitted over the previous decade, when hiring skewed male. As a result, it learned a preference for male candidates. Although the tool was never used to select employees, it serves as an example of how AI is still in its infancy, with a lot to learn about serving all populations fairly.
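
A toy model shows how easily this happens. The sketch below (hypothetical features, nothing like Amazon’s actual system) trains a screener on historical decisions that penalized a gendered proxy term on resumes; the model faithfully learns the same penalty.

```python
# Toy illustration of learned bias, not Amazon's system: a screener trained
# on historically skewed hiring decisions picks up a gendered proxy feature.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
years_exp = rng.normal(5, 2, n)          # genuinely job-relevant signal
proxy_term = rng.integers(0, 2, n)       # e.g. "women's <club>" on the resume

# Historical labels: experience mattered, but past decisions also
# penalized resumes carrying the proxy term.
hired = (years_exp + rng.normal(0, 1, n) - 1.5 * proxy_term) > 5

X = np.column_stack([years_exp, proxy_term])
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # a clearly negative weight on proxy_term: bias, learned
```

Nothing in the code mentions gender; the bias arrives entirely through the training labels, which is exactly why skewed historical data is so dangerous.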

Lesson: Remember that cultural sensitivities are not yet ingrained in our current AI.

7. Deep Fakes are Dangerous, to Celebrities and All of Us

In the last few years, the rise of deep fakes has all of us wondering how unsavory characters might damage people’s reputations. It wasn’t long ago that the first deep fakes surfaced on Reddit. The more images of you there are on the web, the easier it is for bad actors to fabricate videos that appear to feature you. The implications range from good fun to the worst kind of fake news.

Lesson: If videos start losing credibility, we all have to start thinking about how to protect our virtual identities.

8. Ad Fraud Schemers Steal Millions

Ad fraud: it’s relevant, it’s dangerous and many of us have a hard time understanding it at all. Here’s how it works. Fraudulent traffic comes in two flavors: non-human and human. Bots view and click ads, generating fake impressions and clicks; alternatively, ads are served to real humans in ways they can never actually see, yet impressions and clicks are still recorded. This has significant implications for advertisers and could cost them as much as $50 billion by 2025.
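
To make the mechanics less abstract, here is a heavily simplified sketch of the kind of heuristics fraud filters apply to incoming clicks. Real verification vendors use far richer signals; every field name and threshold here is illustrative.

```python
# Heavily simplified bot-detection heuristics; real fraud filters use far
# richer signals. All field names and thresholds below are illustrative.

DATACENTER_ASNS = {"AS_EXAMPLE_HOSTING"}  # placeholder list of server-farm networks

def looks_like_bot(click: dict) -> bool:
    if click["seconds_on_page"] < 1.0:    # humans rarely bounce instantly
        return True
    if click["clicks_last_minute"] > 20:  # inhuman click rate
        return True
    if click["asn"] in DATACENTER_ASNS:   # traffic originating in a data center
        return True
    if not click["mouse_moved"]:          # no pointer activity at all
        return True
    return False

clicks = [
    {"seconds_on_page": 0.2, "clicks_last_minute": 45,
     "asn": "AS_EXAMPLE_HOSTING", "mouse_moved": False},
    {"seconds_on_page": 34.0, "clicks_last_minute": 1,
     "asn": "AS_EXAMPLE_RESIDENTIAL", "mouse_moved": True},
]
print([looks_like_bot(c) for c in clicks])  # [True, False]
```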

Let’s look at an extreme example to see how costly and damaging ad fraud can be.

Methbot, an advertising fraud scheme, was active in 2015 and 2016. Methbot spoofed hundreds of thousands of URLs under trusted publisher names and used armies of data-center bots to “watch” the video ads served against them, faking the mouse movements and residential IP addresses that fraud filters look for. It worked. At its peak, the scheme made its creators up to five million dollars per day.

So what are advertisers doing to prevent crimes of this magnitude? They are working together across the industry: forming guidelines and best practices, pooling resources, maintaining standards and monitoring the situation.

Lesson: Stay aware and proceed with caution. Join the industry’s effort to minimize ad fraud.

9. Google Photos Mistakes Skier for Mountain

To end on a lighter note, let’s look at a humorous AI fail. A lesser-known feature of Google Photos stitches related images into one panorama. A Redditor posted three pictures from a ski trip: two of a mountain landscape and one of a friend in front of a similar backdrop. Google Photos merged the shots and produced an image in which his friend appeared to be the size of a mountain. The post went viral, attracting 200k upvotes.

Mountain-sized man on a ski slope
syncedreview.com

Lesson: AI cannot match human rationality.

Why Does It Matter?

Now that we’ve seen extreme examples of where AI can go wrong, let’s reel it in and talk about why this matters for you. Here are the top three things to keep in mind when it comes to artificial intelligence marketing and your business.

  1. Be realistic. Leveraging AI marketing technology for your business is sure to reap rewards, but if you don’t know the terrain, consider taking your AI marketing out-of-house. There are great artificial intelligence software options that offer recommendation engines, actionable insights, overlapping audiences for keywords, analytics dashboards and more.
  2. Make sure your data is high quality. Poor input means poor output. AI can’t help you with low-quality data; in fact, it will point you in exactly the wrong direction and lead you to bad decisions for your marketing strategy. Think statistically: confirm accurate normalization, account for missing data and outliers, and eliminate sampling bias (see the sketch after this list).
  3. Foster a working relationship between AI and your employees. AI is still a work in progress. Our decisions should be data-informed, not data-driven. The human intelligence aspect of your organization is essential because it fosters creativity, ingenuity and empathy. It can sometimes even add rationality where AI has blind spots. If you focus on both the human and machine elements of your business, they will work together seamlessly to deliver the best results.
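
On the data-quality point above, here is a minimal sketch of the kind of sanity pass worth running before any AI tool sees your numbers. The columns and values are hypothetical; swap in your own campaign export.

```python
# Hypothetical campaign data with one broken spend value and a missing click
# count; the three checks mirror the data-quality advice above.

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "spend": [100, 105, 98, 102, 97, 101, 99, 103, 96, 104, 9000],
    "clicks": [340, 280, 310, np.nan, 290, 305, 295, 315, 288, 300, 310],
    "channel": ["search"] * 8 + ["social"] * 3,
})

# 1. Missing data: know what is absent before a model guesses for you.
print(df.isna().mean())

# 2. Outliers: a simple z-score screen flags the 9000 spend row.
z = (df["spend"] - df["spend"].mean()) / df["spend"].std()
print(df[z.abs() > 3])

# 3. Sampling bias: check whether one channel dominates the sample.
print(df["channel"].value_counts(normalize=True))
```

None of this is sophisticated, and that is the point: a few minutes of checking catches problems that would otherwise quietly poison every downstream recommendation.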
