“grayscale photo of man using magnifying glass” by mari lezhava on Unsplash

Do AI practitioners fundamentally understand intelligence?

Alex Rallis
Sep 6, 2018 · 3 min read

As I reflected on a lecture I recently gave on behalf of insite.ai at PwC Australia's Melbourne headquarters, on the coming AI-driven revolution in retail, it occurred to me that AI is fundamentally misunderstood. Many of insite.ai's competitors in the retail AI space seem to believe that AI is a data science tool, ready to be plugged in, with ever more data sources drawn upon and ever more models pieced together in a Frankensteinian melting pot. I even came across this quote from Celect, who seem to believe that retail would get along just fine if only it ripped up everything that came before and started again with AI.

“The biggest problem with AI in retail is [humans] don’t exactly know how to leverage it or what is right… The biggest problem facing AI is humans.”

The thing is, humans aren't the problem. Humans are what train AI models; humans give those models their efficiency and accuracy. Like the revolutionary technologies before it, AI works by taking inefficient, inconsistent human processes and democratising them in a more efficient form. The printing press made the reproduction of manuscripts consistent and massively more efficient; the production line did the same for manufacturing; the internet democratised knowledge transfer and raised baseline human knowledge. AI, in turn, is set to make the transfer of human learning more efficient. Without humans, no transfer of learning is possible, never mind efficient. Capiche?

When digging into the views of other competitors in the retail AI sphere, I found this quote headlining one of their decks:

“The secret of change is to focus all of your energy not on fighting the old, but on building the new.” — Socrates

Essentially they too want to rip up the past and start again, which, aside from being mind-blowingly arrogant, is just plain wrong. (An aside: the irony of invoking a philosopher who lived 2,500 years ago to argue that the future matters more than the past was not lost on me, although about five seconds of Googling revealed that this “Socrates” is not the ancient Greek philosopher but a character in a 1970s novel.) AI is not about ripping up the past and starting again; it is fundamentally about taking what humans have learned and applying it, in a democratic and (r)evolutionary manner, to new pursuits.

While it’s been said many times before, it bears repeating: AI and data science are simply not the same thing. Having data science endeavours masquerade as intelligence muddies the waters. We need to learn from humans in an intelligent manner, and that doesn’t always reduce to ingesting data and spitting out a recommendation; real people are needed in the middle. As the great Nietzsche said (the real one this time), “just because something is unintelligible to me does not mean it’s not intelligent”. Just because we don’t have data readily available on everything doesn’t mean that everything we’re missing is unimportant. Through AI (real AI) we can properly leverage human insights drawn from highly unstructured data sources. And nobody is in a better position to provide those insights than humans.
