Artificial Intelligence: Separating Fact from Fiction

What AI is and what it is not

Alexander Katrompas, PhD
Granify
8 min read · Nov 30, 2017


[ Author’s Edit, January 2024: After this, you can also read an update on the state of AI and misperceptions in Part II of this story. ]

Since the late 1960s and the groundbreaking movie “2001: A Space Odyssey,” the idea of “learning machines” has slowly crept into our consciousness, almost always in an ominous context: machines are getting too smart for our own good.

Fictional smart machines, from HAL in the 1960s to The Gunslinger in the 1970s, The Terminator in the 1980s, and into the new century with The Matrix, all play on the same theme: one day we humans will invent an artificial intelligence that will replace us as the dominant life form on the planet.

These stories always include one crucial element: machine learning, or more to the point, machine self-learning. The idea that machines can self-learn has been fascinating and terrifying audiences for more than 50 years, and for almost as long it has been the Holy Grail of business intelligence and data science (and every other science).

Since I took my first course in AI and thinking machines became my own passion, I’ve noticed that the dreaded “Machines are getting too smart!” article appears in the popular press like clockwork every time AI makes the news. Consider the following opinion piece on Fox News, complete with the obligatory picture of the grinning Terminator.

The article makes the usual claims, such as modern machines being “not constrained by humans,” and alludes to the “beginning of the end.” Of course, it ends with the perfunctory question, “should we be worried?” The truth is these popular press articles have been circulating in basically the same form since HAL first let Dave know, “I’m sorry, Dave. I’m afraid I can’t do that.”

While these posts are entertaining, are they accurate? Hardly.

AI Reality vs Fiction

Machine self-learning is what fires up imaginations and leads to articles such as the aforementioned Fox News post. While we wait for our robot overlords to take control, let’s look at the article’s claims and sort them out according to the science of AI and its practicality for business.

Claim 1: “No human input”

This is drastically stretching the truth to the point of breaking it.

Machine learning requires a human to select the learning algorithm (there are many), select the inputs, select the outputs, and define what counts as a good outcome versus a bad one. Most of that will not be correct on the first try, and the human will keep tweaking it all until it works. The only part where the computer has some “control” is the trial-and-error process by which it observes and tries to learn. But a human had to very narrowly define the problem, the solution space, and the learning process, including what trial and error is acceptable. That’s an awful lot of human input for “no human input.”
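
To make that concrete, here is a minimal sketch of a typical supervised learning setup in Python with scikit-learn. The dataset, column names, and hyperparameters are all hypothetical; the point is how many of these lines encode human decisions rather than machine ones.

```python
# A hypothetical end-to-end setup; every commented choice is a human decision.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("customers.csv")              # human: chooses the data
X = df[["visits", "cart_value", "tenure"]]     # human: selects the inputs
y = df["converted"]                            # human: selects the output

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# human: selects the algorithm and its knobs
model = RandomForestClassifier(n_estimators=100, max_depth=8)
model.fit(X_train, y_train)                    # machine: the "learning" part

# human: defines what a good outcome is, and goes back to tweak everything
# above when the number disappoints
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```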

So if you’re looking for that magic, out-of-the-box, self-learning machine to solve your problems, we simply aren’t there yet. However, with a good data scientist and an AI engineer, you can assemble some very good off-the-shelf AI components and begin working on your science and business challenges with some very powerful tools.

Claim 2: “AlphaGo Zero not only rediscovered the common patterns and openings that humans tend to play…it ultimately discarded them in preference for its own variants which humans don’t even know about or play at the moment.”

This is very likely true, but it’s also very misleading.

The context in which this claim is made is that unsupervised learning is what allowed the machine to find things humans didn’t know. Again, technically true, but misleading, since extrapolating new information is central to all AI techniques. Extrapolation is the very point of AI: supervised, unsupervised, or otherwise. Pointing out that AlphaGo found new solutions is great, but of course it did; that’s what AI does.
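
A trivial illustration, with made-up numbers: any fitted model happily produces outputs for inputs no human ever supplied. There is nothing AlphaGo-specific about “discovering” something new.

```python
# Made-up numbers; the point is only that predict() answers for an input
# (10.0) that appears nowhere in the human-supplied examples.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])   # human-supplied examples
y = np.array([2.1, 3.9, 6.2, 8.1])

model = LinearRegression().fit(X, y)
print(model.predict([[10.0]]))               # an "answer" no human provided
```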

Claim 3: “Not constrained by humans”

This is entirely false.

As mentioned above, the entire process is constrained by a human. For example, imagine teaching a child to assemble a puzzle. As the adult, you would select the puzzle, set aside time to work on it, define the success and failure criteria, explain all that to the child, then sit the child down and present the puzzle. The child has all the information they need to begin a trial-and-error process on the puzzle, but you are still there to say “yes, that’s right” or “no, that’s wrong” at every step.

That is hardly “unconstrained.” It is the same for unsupervised learning. We call it unsupervised simply because we don’t present thousands of examples of already-solved puzzles, but the learning process is still very constrained, as the sketch below shows.
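
A minimal sketch with synthetic data: even in a textbook “unsupervised” method like k-means clustering, a human chose the features, the algorithm, the number of clusters, and (implicitly) the distance measure. The machine’s only freedom is filling in the labels.

```python
# Synthetic data; every constraint here was chosen by a human.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))   # human: chose what to measure (the features)

# human: chose the algorithm, k=3, and (implicitly) Euclidean distance;
# the machine's only freedom is assigning points to clusters
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])      # "discovered" structure, inside human fences
```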

Claim 4: “This is a trial-and-error process known as ‘reinforcement learning’”

This is not correct the way it is presented — context is everything.

Reinforcement learning is any technique by which data is presented, an output is generated, and the machine is adjusted to perform better next time.

Whether that data is hand-picked by a human (supervised learning) or discovered by the machine through trial and error (unsupervised learning), it’s all reinforcement learning in some fashion. Learning is, by definition, “reinforced,” or else you wouldn’t know whether you got it right. A toy version of that feedback loop is sketched below.
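
Here is a toy version of that loop, in the spirit of the article’s description rather than any particular system: the machine tries an action, observes a numeric reward, and adjusts its estimates so it does better next time. The payout probabilities are invented for illustration.

```python
# Invented payout probabilities; a human defined the actions, the reward,
# and the update rule. The machine only runs the loop.
import random

true_payout = [0.3, 0.5, 0.8]   # hidden from the learner
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]

for _ in range(10000):
    # explore occasionally, otherwise exploit the best estimate so far
    if random.random() < 0.1:
        arm = random.randrange(3)
    else:
        arm = max(range(3), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    # the adjustment: incremental average of observed rewards
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print([round(e, 2) for e in estimates])   # approaches the true payouts
```

Note that even here a human chose the actions, defined the reward, and wrote the update rule; the machine’s contribution is grinding through the loop, which is exactly the constraint this article keeps pointing out.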

The paragraph containing this claim opens with, “Starting with just the rules of Go and no instructions…” Again, this is very misleading; rules are instructions. If you give someone (or something) rules, you have by definition also given instructions.

Claim 5: “No longer constrained by the limits of human knowledge”

This again drastically stretches the truth.

All learning algorithms and heuristics are created by humans. As such, the computer is limited by the sophistication of the human-designed machine learning techniques. Unfortunately, for the moment, human knowledge is a very real bottleneck for all AI.

This is especially important to remember as you think about employing AI techniques for your business challenges. It’s very unlikely you will be able to use AI in a meaningful way without good data science, engineering, and a deep understanding of AI techniques and your data.

AI solutions simply aren’t packaged and commercialized to that level (not yet). We’re getting there, and there are already early examples of point-and-click AI products. But for any complex application, you are likely to be very constrained by the human knowledge of your domain and the current state of AI.

Claim 6: “Beginning of the end”

Probably not.

Even if this is the dawn of thinking machines, there is no reason to assume it means our demise.

Our Fox News post ends with the usual ominous allusion to machines learning all on their own and running amok. This would only be true if, for example, once the computer in question learned to play the game Go, it then used that information to also learn how to fly a plane, make breakfast, and mow the lawn.

Point being, extrapolating to unrelated domains is still well beyond the reach of AI technology.

The foundation of AI currently relies on highly constrained inputs, outputs, and rules guiding both the learning process and the decision-making. It is a massive leap from there to unconstrained machine learning (probably several massive leaps).

All that is not to say AI isn’t pretty amazing — it is. Understanding what it is and is not, how it can and cannot be used, and who you need on your team to make it happen is the first step to using AI practically.

AI can be used with great effectiveness in data science and business for data mining, prediction, pattern matching, and many other applications. However, you will still need human experts trained in all these areas to implement and guide your AI solutions.

Claim 7: “What DeepMind has demonstrated over the past years is that one can make software that can be turned into experts in different domains… but it does not become generally intelligent.”

To the credit of the article, it does end on a more realistic note.

That statement is accurate, but it’s unfortunate that it comes at the end of the article, because it very correctly frames a practical discussion of AI. A productive discussion of AI should begin with that statement.

AI promises very exciting insights into data that were never before possible. With the right humans guiding the process, using more and more sophisticated learning techniques, there will be great advances in data science every day, and they will continue for a long time to come… until our robot overlords come calling.

I, for one, will welcome them.

[ Author’s Edit: January 2024: You can read an update to the state of AI and misperceptions in Part II of this story. ]

Alex Katrompas comes from an extensive software engineering background heavily focused on e-commerce, business intelligence, and artificial intelligence systems. As Vice President of Technology at Granify, Alex leads a team of engineers, data scientists, and QA specialists in the United States and Canada. He is currently focused on aligning Granify’s engineering, sales, and product teams as the company works towards its ambitious mission.
