Pitfalls of Artificial Intelligence (AI) — A Survey conducted on 5 Case Studies

Aswin Vijayakumar.
Nerd For Tech
7 min read · Dec 29, 2020


Five case studies were presented in a survey among peers at an educational programme

Introduction

Artificial Intelligence has pitfalls related to ethics and bias, enterprise-wide implementation, and getting people accustomed to its evolution in the market. These pitfalls can occur at any stage of an AI system’s maturity.

The format shown below follows a question-and-answer session: each item consists of a descriptive text and a single-line question asking for the term or phrase that the answer refers to.

Case Study 1

Watson for Oncology

IBM’s Watson for Oncology is an AI system for recommending cancer treatments and solving healthcare problems using Machine Learning. Watson for Oncology was described by PaulvanderLaken.com as a biased and unproven recommendation system.

I happened to read about it on SearchDataManagement.com, and my first question on the case study is presented below.

Title:

Fail: IBM’s “Watson for Oncology” Cancelled After $62 million and Unsafe Treatment Recommendations [https://searchdatamanagement.techtarget.com/feature/Why-you-should-consider-a-machine-learning-data-catalog]

Description:

According to StatNews, the documents (internal slide decks) largely place the blame on IBM’s engineers. Evidently, they trained the software on a small number of hypothetical cancer patients, rather than real patient data.

The result? Medical specialists and customers identified “multiple examples of unsafe and incorrect treatment recommendations,” including one case where Watson suggested that doctors give a cancer patient with severe bleeding a drug that could worsen the bleeding.
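To make the failure mode concrete, here is a minimal sketch (everything below is simulated; this is not Watson’s actual model or data) of what happens when a model is trained on a handful of hypothetical cases and then evaluated on a larger, differently distributed cohort:

```python
# Toy sketch only: all data here is simulated, not Watson's model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# A few dozen "hypothetical patients" drawn from a narrow distribution,
# with an outcome that depends on a single feature.
X_hypothetical = rng.normal(loc=0.0, scale=0.5, size=(40, 5))
y_hypothetical = (X_hypothetical[:, 0] > 0).astype(int)

# A larger, shifted "real-world" cohort whose outcome also depends on a
# feature the hypothetical data never exercised.
X_real = rng.normal(loc=0.8, scale=1.5, size=(2000, 5))
y_real = ((X_real[:, 0] + X_real[:, 1]) > 1.0).astype(int)

model = LogisticRegression().fit(X_hypothetical, y_hypothetical)

print("accuracy on hypothetical cases:", accuracy_score(y_hypothetical, model.predict(X_hypothetical)))
print("accuracy on realistic cohort:  ", accuracy_score(y_real, model.predict(X_real)))
```

The point is simply that performance measured on a small, synthetic training set says very little about behaviour on real patients.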

Question Asked:

What software system are they talking about in the case?

Rationale:

To convey the correct terminology for the enterprise application integration system.

57.1% answered correctly as Oncology Expert System.

Right Answer:

Watson for Oncology Question 1 — Case Study

From the 15 questionnaire responses, the results are shown below:

Watson for Oncology Answer — Case Study

Case Study 2

Microsoft Chatbot

Microsoft’s AI chatbot Tay was released via Twitter on March 23, 2016. Within less than 24 hours, internet trolls had completely corrupted Tay: it was flooded with racist, misogynistic, and anti-semitic tweets, and the chatbot was described as “a robot parrot with an internet connection”. @Lexalytics

Title:

Fail: Microsoft’s AI Chatbot Corrupted by Twitter Trolls

Microsoft made big headlines when they announced their new chatbot. Writing with the slang-laden voice of a teenager, Tay could automatically reply to people and engage in “casual and playful conversation” on Twitter.

Description:

Tay grew from Microsoft’s efforts to improve their “conversational understanding”. To that end, Tay used machine learning and AI. As more people talked with Tay, Microsoft claimed, the chatbot would learn how to write more naturally and hold better conversations.

Less than 24 hours after Tay launched, internet Trolls had thoroughly “corrupted” the chatbot’s personality.

By flooding the bot with a deluge of racist, misogynistic, and anti-semitic tweets, Twitter users turned Tay — a chatbot that the Verge described as “a robot parrot with an internet connection” — into a mouthpiece for a terrifying ideology.
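As an illustration of the failure mode only (this is nothing like Tay’s real architecture), the toy bot below learns its replies directly from whatever users send it. Without moderation, a flood of abusive input becomes its own output; even a crude block list changes the outcome:

```python
# Minimal sketch of the failure mode, not Tay's real architecture.
import random

class EchoLearningBot:
    def __init__(self, block_list=None):
        self.learned_replies = ["hello there!"]   # seed phrase
        self.block_list = set(block_list or [])   # optional, crude moderation

    def learn(self, user_message):
        # Naive online learning: store the raw message unless it contains a blocked term.
        if not any(term in user_message.lower() for term in self.block_list):
            self.learned_replies.append(user_message)

    def reply(self):
        return random.choice(self.learned_replies)

troll_flood = ["<abusive message 1>", "<abusive message 2>", "<abusive message 3>"]

naive_bot = EchoLearningBot()
for message in troll_flood:
    naive_bot.learn(message)
print(naive_bot.reply())     # almost certainly one of the flooded messages

filtered_bot = EchoLearningBot(block_list=["abusive"])
for message in troll_flood:
    filtered_bot.learn(message)
print(filtered_bot.reply())  # "hello there!"
```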

Question Asked:

What did Microsoft Claim to be the reason for failure?

Rationale:

To communicate what firms say in public.

57.1% answered correctly as Loss of Relevant Public Data.

Right Answer:

Microsoft Chatbot Question 2 — Case Study

From the 15 questionnaire responses, the results are shown below:

Microsoft Chatbot Answer — Case Study

Case Study 3

Apple’s Face ID Defeated

In 2017, hackers broke the iPhone X’s Face ID using a 3D-printed mask. The company’s futuristic new form of authentication was fooled, and someone’s face was duplicated to unlock an iPhone X. @Wired

Title:

Fail: Apple’s Face ID Defeated by a 3D Mask

Apple released the iPhone X (10? Ten? Eks?) to mixed, but generally positive reviews. The phone’s shiniest new feature was Face ID, a facial recognition system that replaced the fingerprint reader as your primary passcode.

Description:

Apple said that Face ID used the iPhone X’s advanced front-facing camera and machine learning to create a 3-dimensional map of your face. The machine learning/AI component helped the system adapt to cosmetic changes (such as putting on make-up, donning a pair of glasses, or wrapping a scarf around your neck), without compromising on security.

But a week after the iPhone X’s launch, hackers were already claiming to beat Face ID using 3D printed masks. Vietnam-based security firm Bkav found that they could successfully unlock a Face ID-equipped iPhone by glueing 2D “eyes” to a 3D mask. The mask, made of stone powder, cost around $200. The eyes were simple, printed infrared images.
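Conceptually (this is not Apple’s actual Face ID pipeline; the embeddings and threshold below are invented), a face matcher compares an embedding of the presented face against the enrolled one and accepts anything above a similarity threshold. A physical replica that reproduces the enrolled geometry closely enough can land above that threshold unless a separate liveness or anti-spoofing check rejects it:

```python
# Conceptual sketch only; not Apple's Face ID. Embeddings are made-up vectors.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(42)
enrolled_face = rng.normal(size=128)                               # enrolled user
live_same_user = enrolled_face + rng.normal(scale=0.05, size=128)  # genuine attempt
printed_mask = enrolled_face + rng.normal(scale=0.15, size=128)    # close physical replica

THRESHOLD = 0.9  # illustrative acceptance threshold

for name, probe in [("live user", live_same_user), ("3D mask", printed_mask)]:
    score = cosine_similarity(enrolled_face, probe)
    print(f"{name}: similarity={score:.3f}, accepted={score >= THRESHOLD}")
```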

Question Asked:

How was this achieved?

Rationale:

To produce correct definitions for the neural networks used in businesses.

15.4% answered correctly as Sophisticated anti-spoofing neural networks.

Right Answer:

Apple Face ID Defeated Question 3 — Case Study

From the 15 questionnaire responses, the results are shown below:

Apple Face ID Defeated Answer — Case Study

Case Study 4

Amazon AI For Recruitment

In 2018, Amazon scrapped a secret AI recruiting tool that showed bias against women. @Reuters

Title:

Fail: Amazon Axes their AI for Recruitment Because Their Engineers Trained It to be Misogynistic

Description:

Artificial intelligence and machine learning have a huge bias problem. Or rather, they have a huge problem with bias. And the launch, drama, and subsequent ditching of Amazon’s AI for recruitment is the perfect poster-child.

Amazon had big dreams for this project. As one Amazon engineer told The Guardian in 2018, “They literally wanted it to be an engine where I’m going to give you 100 résumés, it will spit out the top five, and we’ll hire those.”

But eventually, the Amazon engineers realized that they’d taught their own AI that male candidates were automatically better.

How did this AI fail happen? In short, Amazon trained their AI on engineering job applicant résumés. And then they benchmarked that training data set against current engineering employees.

Now, think about who applies for software engineering jobs. And who is most-likely to be currently-employed in software engineering? That’s right: white men.

So, from its training data, Amazon’s AI for recruitment “learned” that candidates who seemed whiter and more male were more-likely to be good fits for engineering jobs.
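A toy sketch of this training-data bias (not Amazon’s actual system; the features and data below are invented): when historical hiring labels correlate with a gender proxy, a model trained on them learns to penalise the proxy itself:

```python
# Invented data illustrating group-attribution / training-data bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

skill = rng.normal(size=n)                 # genuinely job-relevant signal
gender_proxy = rng.integers(0, 2, size=n)  # 1 if the resume contains a proxy term

# Historical labels: driven by skill, but depressed when the proxy is present,
# mirroring a benchmark workforce that under-represents women.
hired = ((skill - 0.8 * gender_proxy + rng.normal(scale=0.5, size=n)) > 0).astype(int)

X = np.column_stack([skill, gender_proxy])
model = LogisticRegression().fit(X, hired)

print("coefficient on skill:       ", round(float(model.coef_[0][0]), 2))
print("coefficient on gender proxy:", round(float(model.coef_[0][1]), 2))  # clearly negative
```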

Question Asked:

What type of bias does this problem show?

Rationale:

To popularize the term group-attribution bias among related data-bias terms.

23.1% answered correctly as Group-Attribution Bias.

Right Answer:

Amazon AI For Recruitment Question 4 — Case Study

From the 15 questionnaire responses, the results are shown below:

Amazon AI For Recruitment Answer — Case Study

Case Study 5

Amazon Facial Recognition

In 2018, Amazon’s facial recognition technology falsely identified 28 members of Congress as people who have been arrested for crimes, according to the American Civil Liberties Union (ACLU). @TheGuardian

Title:

Fail: Amazon’s Facial Recognition Software Matches 28 U.S. Congresspeople with Criminal Mugshots

Amazon’s AI fails don’t stop there. In 2018, the American Civil Liberties Union showed how Amazon’s AI-based Rekognition facial recognition system falsely matched 28 members of Congress with criminal mugshots.

Description:

According to the ACLU, “Nearly 40 percent of Rekognition’s false matches in our test were of people of color, even though they make up only 20 percent of Congress.”

In fact, that’s not even the first time someone’s proven that Rekognition is racially biased. In another study, University of Toronto and MIT researchers found that every facial recognition system they tested performed better on lighter-skinned faces. That includes a 1-in-3 failure rate with identifying darker-skinned females. For context, that’s a task where you’d have a 50% chance of success just by guessing randomly.
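The kind of audit these tests imply can be sketched as computing the false-match rate separately per subgroup. The numbers below are invented for illustration; they are not the ACLU’s or MIT’s data:

```python
# Invented audit data: false-match rate per subgroup.
import pandas as pd

matches = pd.DataFrame({
    "group":       ["lighter"] * 60 + ["darker"] * 40,
    # True = the system flagged this person as matching a mugshot
    "flagged":     [True] * 3 + [False] * 57 + [True] * 8 + [False] * 32,
    # None of these people are actually in the mugshot database
    "is_criminal": [False] * 100,
})

false_match_rate = (
    matches[~matches["is_criminal"]]
    .groupby("group")["flagged"]
    .mean()
)
print(false_match_rate)  # the darker-skinned group shows a higher false-match rate
```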

This is, of course, horrifying. It’s not even an “AI fail” so much as a complete failure of the systems, people and organizations that built these systems.

I wish I could say that, faced with incontrovertible proof that they did a bad thing, Amazon did what they needed to fix their AI bias. But the story doesn’t end here. Law enforcement agencies are already trying to use tools like Rekognition to identify subjects. And despite these demonstrated failures — it’s algorithmic racism, really — Amazon isn’t backing down on selling Rekognition.

Seriously, just read this article from The Guardian: How white engineers built racist code — and why it’s dangerous for black people

Question Asked:

What bias is being spoken about in Amazon’s case?

Rationale:

To demonstrate the extent of bias that happens in AI.

84.6% answered correctly as Racial Bias.

Right Answer:

Amazon Facial Recognition Question 5 — Case Study

From the 15 questionnaire responses, the results are shown below:

Amazon Facial Recognition Answer — Case Study

Conclusion

On average, 47.46% of respondents answered the case study survey correctly. The survey ran for one week, during which 15 people responded. Qualitatively, we were able to question the relevance of each case study. These 5 case studies appeared between 2016 and 2018 on major websites.
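For reference, the headline figure is simply the mean of the five correct-answer percentages quoted above:

```python
# Mean of the per-question correct-answer rates quoted in the case studies.
correct_percentages = [57.1, 57.1, 15.4, 23.1, 84.6]
average = sum(correct_percentages) / len(correct_percentages)
print(f"Average correct-answer rate: {average:.2f}%")  # 47.46%
```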
