The Simple Sniff Test for Detecting Bullshit AI for Marketing

John Davies
Published in Simon Data
Oct 2, 2019

AI is frequently presented as a near-magical solution to a host of business — and even political or world-historical — problems. However, it’s also an arcane, complex field — much of it developed by teams of PhDs armed with very powerful software. And when you combine the ambitious promise of AI with that level of complexity, buyer beware: you’re looking at a recipe for getting sold on bullshit. (Bullshit in the formal, philosophical sense, of course.)

If you’re not a data science PhD, and you need to make buying decisions about AI, it’s important to know how to spot the real thing through the smokescreen. Unfortunately, sometimes you simply can’t; that’s when it’s time to bring in your data science team to help with due diligence. Even so, a layperson can do a lot to weed out false promises and snake oil. It helps to start with a basic understanding of which questions the AI you’re evaluating is and is not built to answer. In other words, you need to know all the ways that AI is smart, and all the ways that it’s quite dumb.

AI is smart in that it can crunch a lot of data, to the tune of millions to trillions of data points, very quickly. And it can use that data to make connections that only emerge from poring over such a vast quantity of information. This ability to make connections is what enables it to answer a wide variety of questions: If people like one movie, what are other movies they are likely to enjoy? What songs belong on your ideal workout playlist? What kinds of products do pregnant customers buy?
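Take the movie question as an illustration. Below is a minimal sketch, using a tiny invented ratings matrix and item-to-item cosine similarity (one common recommendation technique, not necessarily what any given vendor uses), of how “people who liked this also liked that” falls straight out of the data:

```python
import numpy as np

# Toy user-item matrix: rows are users, columns are movies,
# 1 means the user liked the movie. (Invented data for illustration.)
ratings = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
])

def similar_movies(movie_idx, ratings):
    """Rank the other movies by cosine similarity of their liked-by vectors."""
    target = ratings[:, movie_idx]
    norms = np.linalg.norm(ratings, axis=0) * np.linalg.norm(target)
    sims = ratings.T @ target / np.where(norms == 0, 1, norms)
    sims[movie_idx] = -1  # exclude the movie itself from its own ranking
    return np.argsort(sims)[::-1]

print(similar_movies(0, ratings))  # movies most like movie 0: [1 2 3 0]
```

Scale that four-by-four toy matrix up to millions of users and you have the core of a real recommender; the logic itself stays simple.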

These questions take a lot of data to answer, but they’re also extremely straightforward. They look at a clear set of factors in search of very specific relationships, relationships that make sense upon inspection. In the world of marketing and customer relations, two examples closer to home might be: “What’s the optimal number of characters in an email subject line if you aim to maximize open rates? Too long and customers’ eyes glaze over; too short and they don’t know what the email is about.” And: “What’s the optimal coupon discount to maximize sales? Too low and our customers don’t care; too high and they perceive the product as less valuable.”
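The subject-line question is just as easy to pose as a model. Here is a hedged sketch, with invented numbers standing in for your historical send data, that encodes the “too short / too long” intuition as a simple quadratic and reads the optimum off its peak:

```python
import numpy as np

# Hypothetical historical data: subject-line length (characters)
# vs. observed open rate. Real numbers would come from your email platform.
lengths = np.array([10, 20, 30, 40, 50, 60, 70, 80])
open_rates = np.array([0.14, 0.19, 0.23, 0.25, 0.24, 0.21, 0.17, 0.12])

# Fit open_rate ~ a*length^2 + b*length + c; a < 0 gives the expected hump.
a, b, c = np.polyfit(lengths, open_rates, deg=2)

# The peak of a downward-opening parabola sits at -b / (2a).
optimal_length = -b / (2 * a)
print(f"Estimated optimal subject-line length: {optimal_length:.0f} characters")
```

A real system would cross-validate and control for confounders, but notice the shape of the exercise: a clear input, a specific relationship, and an answer that makes sense on inspection.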

These kinds of straightforward questions are what today’s AI is designed for. What AI has a lot more trouble with are complex, open-ended decisions. Often, these require models built on latent factors: webs of unobserved and deeply nested dependencies. For instance, I’m deeply skeptical of AI software that makes specific predictions about how a customer will interact with a company over an extended period, whether that’s targeting churn predictions 12 to 18 months in the future or predicting lifetime value to the dollar. These are essentially predictions of whether and how much a customer will purchase once, how much they will enjoy that purchase, whether that experience will convince them to buy again, and then that whole sequence again and again, ad infinitum. There are just too many factors and dependencies involved to build a straightforward model. And if you can’t build a straightforward model to get to the answer, you’re probably asking a question that’s too complex for AI.

There’s a simple test you can use to sniff out the proverbial bullshit that some AI vendors are shoveling. Start with the question you’re evaluating and map out how you’d go about answering it; build out a model, as the data scientists would say. Can you come up with a straightforward process for finding the answer, one that you can sketch out on the back of a napkin? The process should be simple to describe even if it would take you forever to actually carry out. All you should need is a stream of data to run through the system, and answers would come out the other end.
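For the coupon question from earlier, the napkin sketch might look like this: a toy event stream goes in one end, a couple of tallies happen in the middle, and an answer drops out the other. (The events and numbers below are invented purely for illustration.)

```python
from collections import defaultdict

# Hypothetical event stream: (discount_percent, purchased, order_value).
events = [
    (5, False, 0), (5, True, 95), (10, True, 90), (10, False, 0),
    (15, True, 85), (15, True, 85), (20, True, 80), (20, True, 80),
    (25, True, 75), (25, False, 0),
]

# Napkin step 1: tally revenue and offers made per discount level.
revenue = defaultdict(float)
offers = defaultdict(int)
for discount, purchased, order_value in events:
    offers[discount] += 1
    if purchased:
        revenue[discount] += order_value

# Napkin step 2: the answer is just expected revenue per offer.
expected = {d: revenue[d] / offers[d] for d in offers}
best = max(expected, key=expected.get)
print(f"Best discount on this toy data: {best}% "
      f"(expected revenue per offer: ${expected[best]:.2f})")
```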

If your problem fits that simple mold, then it may be a candidate for AI. If your napkin sketch ended up taking three napkins and most of the placemat, you’re likely trying to solve a problem that AI can’t crack.

That’s not to say that you’ll come up with the correct model for solving the problem; after all, you’re not an AI expert. Likewise, some of the almost-too-good-to-be-true AI solutions might actually deliver on their promise. But the simplicity test is an easy way to bring some healthy skepticism into your AI buying process, and it can go a long way toward helping you separate true artificial intelligence from artificial bullshit.

To learn more about what grounded AI solutions should look like, read part one of our blog series, or to find out more about Simon Data’s AI-powered solutions, visit our platform solutions page here.
