How a funny subreddit helps explain machine learning

Aaron Edell
Mar 23, 2018 · 3 min read

When it comes to face detection, there is nothing more frustrating (or hilarious) than detecting a face in some inanimate object (or animal).

When this happens, it is called a false positive: the software positively identified a face, falsely. It can be tricky to keep track of the differences between false positives, true positives, false negatives, and true negatives.

This example really helps me clear it up. If you’re taking some kind of home medical test, the outcome is one combination of two facts and two results: you have the disease or you don’t, and the test says you have it or says you don’t.

In medical diagnosis, the most dangerous combination is a false negative: you have the disease, but the test says you don’t.

In face detection (which is far less life-threatening… probably) the same categories apply: there is a face, there isn’t a face, the software detected a face, and the software did not detect a face.
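Together, these four outcomes form what’s usually called a confusion matrix. A minimal sketch of tallying them for a face detector — the image labels and detection results below are made-up illustration data, not output from any real system:

```python
from collections import Counter

def confusion_counts(actual, predicted):
    """Tally the four outcomes for a binary face detector.

    actual[i]    -- True if image i really contains a face
    predicted[i] -- True if the software detected a face in image i
    """
    labels = {
        (True, True):   "true_positive",   # face present, face detected
        (False, True):  "false_positive",  # no face, but one was "detected"
        (True, False):  "false_negative",  # face present, but missed
        (False, False): "true_negative",   # no face, none detected
    }
    return Counter(labels[(a, p)] for a, p in zip(actual, predicted))

# Hypothetical results on six images:
actual    = [True, True, False, False, True, False]
predicted = [True, False, True, False, True, False]
print(confusion_counts(actual, predicted))
```

The single `false_positive` here is the r/InanimateFaceSwap case: a “face” detected where there is none.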

Anecdotally, we seem to be more forgiving when a face detection system misses a face. Usually (but certainly not always) it’s because the face is turned to the side, obscured in shadow, blocked by an object, or hidden for some other reason our human brains understand. What we have trouble understanding is why an inanimate object gets mistaken for a face.

This is perfectly summarized by one of my new favorite subreddits: r/InanimateFaceSwap

This is happening because whatever face detection technology these apps are using is returning false positives. That is, they are detecting faces where there are none. In this particular application, it’s coming back with some hilarious results.

But it is also an important lesson for the general understanding of machine learning and some of the challenges you’ll face implementing it.

We don’t know exactly why it detects a face in those inanimate objects. The features it relies on are different from the ones our human brains use. Sometimes it works really well, and other times it fails. Just like us. Although it fails at things we’d be good at (like telling the difference between a cookie and a human face), it would outperform us in speed, and maybe accuracy, at a task like counting faces in 1,000 photos of crowds.

Whenever I’m advising customers on integrating my company’s machine learning tools (like face detection and recognition), I always try to prepare them for the inevitable false negatives and false positives that come with the territory. Handling these conditions elegantly is the key to a successful implementation of AI in your business.
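One common way to handle this tradeoff (not specific to any particular product) is to filter detections by a confidence score: raising the threshold discards borderline “faces” and cuts false positives, at the cost of missing more real faces. A sketch with made-up confidence values:

```python
def detections_above(scores, threshold):
    """Keep only detections whose confidence clears the threshold.

    scores: hypothetical per-detection confidence values from a face
    detector (higher = more face-like). A higher threshold means fewer
    false positives but more false negatives.
    """
    return [s for s in scores if s >= threshold]

scores = [0.95, 0.62, 0.30, 0.88, 0.45]   # invented confidences

lenient = detections_above(scores, 0.25)  # keeps all 5: more faces, more teapots
strict  = detections_above(scores, 0.80)  # keeps 2: fewer mistakes, misses profiles
```

Where to set that threshold depends on which error is cheaper for your application — a funny face swap can tolerate false positives; a medical test cannot tolerate false negatives.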

Data Driven Investor

from confusion to clarity, not insanity

