The general discussion surrounding AI is embarrassing, and we need to do something about it

Max Pagels
The Hands-on Advisors
4 min read · Feb 9, 2018

Artificial intelligence isn’t a new thing. The first organised attempt at AI research dates back to 1956, when the Dartmouth Summer Research Project on Artificial Intelligence was held. The workshop proposal, outlined by McCarthy, Minsky et al., was as follows:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

The basic idea was simple: can we make machines exhibit some form of intelligence? Can we simulate intelligence in some way?

Photo from the first Dartmouth conference. Image credit: Joseph Mehling

Back then, as today, there were many different subfields in artificial intelligence research. Some use hand-written logic to exhibit intelligence. An increasing number use machine learning, which, although largely dismissed in the 70s and 80s primarily for lack of computing power, has proven to be great at tackling many problems. Some use something else entirely. But for some reason, especially in the media, AI has come to mean machine learning and, even more worryingly, deep neural networks or deep learning. It’s time to dispel some of these notions.

AI is not just machine learning

The core idea of machine learning is to have learning algorithms sift through data and learn patterns without us meddling with the learning process itself. Give a machine a bunch of examples of cat and dog photos, and you can use standard learning algorithms to build a classifier that can tell the difference between a cat and a dog in a photo it has never seen before. Feed it English sentences and Finnish equivalents, and it will learn to translate. But it’s not some sort of magic wand. Machine learning is more popular than ever before, but AI is more than just machine learning.
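To make the “learn from examples” idea concrete, here’s a minimal sketch using scikit-learn. The two “features” (ear pointiness, snout length) and all the numbers are made up purely for illustration; a real image classifier would work on pixels or learned features, not hand-picked measurements.

```python
# A minimal sketch of learning from labelled examples with scikit-learn.
# The features and values below are illustrative assumptions, not real data.
from sklearn.neighbors import KNeighborsClassifier

# Labelled examples: each photo reduced to two made-up features.
X_train = [
    [0.9, 0.2],  # cat: pointy ears, short snout
    [0.8, 0.3],  # cat
    [0.2, 0.9],  # dog: floppy ears, long snout
    [0.3, 0.8],  # dog
]
y_train = ["cat", "cat", "dog", "dog"]

# The learning algorithm infers the pattern from the data itself;
# we never write an explicit "if ears are pointy then cat" rule.
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

# Classify an example the model has never seen before.
print(clf.predict([[0.85, 0.25]]))  # -> ['cat']
```

The point of the sketch is only that the rules come from the data, not from us.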

AlphaGo Zero, the superhuman Go program? It uses neural networks, but a bunch of other techniques, too (Monte Carlo tree search being one).

Boston Dynamics’ robots? They use largely proprietary tech, but it’s speculated that machine learning isn’t used at all.

Robotic Process Automation? Explicitly rule-based; typically no machine learning there.

Autonomous vehicles? Machine learning is only part of the equation. For a car to drive itself, you need to employ several techniques, including symbolic AI, pathfinding (there’s a small sketch of that below), and hand-written logic. The notion that machine learning is all it takes is laughable.

The list goes on, but the key takeaway is this: AI is a broad term, and we should stop treating it as a single piece of technology.
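To give a flavour of what a non-learning technique looks like, here’s a minimal pathfinding sketch: breadth-first search on a made-up occupancy grid. Real planners in autonomous vehicles are far more sophisticated; the point is simply that there is no learning involved at all, yet it is still a classic AI technique.

```python
# A minimal pathfinding sketch: breadth-first search on a toy occupancy grid.
# The grid, start and goal are made up for illustration; no learning happens here.
from collections import deque

grid = [  # 0 = free cell, 1 = obstacle
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def shortest_path(start, goal):
    """Return a shortest list of (row, col) cells from start to goal, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # goal unreachable

print(shortest_path((0, 0), (3, 3)))
```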

AI is not deep learning

Deep learning refers to neural networks with several hidden layers. A neural network itself? Well, it’s one learning algorithm amongst many. Neural nets have achieved great success, particularly in computer vision, because they are able to learn very complex relationships between input and output. But they aren’t close to being the only option out there. Other learning algorithms include linear and logistic regression, SVMs, decision trees, random forests, matrix factorisation, k-means, k-NN… the list is long, and it’s only getting longer.
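As a rough illustration (a sketch on a synthetic toy dataset; the particular models and layer sizes are arbitrary choices of mine), a “deep” network in scikit-learn is just one estimator you can drop in next to all the others:

```python
# A sketch showing a small "deep" network (several hidden layers) sitting
# alongside other learning algorithms. Toy data only; choices are illustrative.
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "logistic regression": LogisticRegression(),
    "random forest": RandomForestClassifier(random_state=0),
    # Two hidden layers of 32 units each: a (very small) "deep" network.
    "neural network": MLPClassifier(hidden_layer_sizes=(32, 32),
                                    max_iter=2000, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.2f}")
```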

Learning algorithms other than deep neural networks are still popular. So much so, in fact, that when Kaggle asked over 16,000 respondents which learning algorithms they use at work, logistic regression took the #1 spot in every industry except the military. Neural networks? #4.

Neural networks do not learn like our brains do

Neural networks, be they shallow or deep, have huge limitations. They can’t reason like humans do, they are terrible at applying their knowledge to new areas, and their inner workings are hard to interpret. And I say that as someone who knows more about neural nets than about other learning algorithms. I’m fascinated by them.

The term “neural network”, much like other terminology in artificial intelligence, is unfortunate. Neural networks aren’t neural, nor are they networks. At best, they are an absolutely horrible, ridiculously simplistic approximation of the human brain. At worst, they bear no resemblance at all. Any researcher worth their salt will tell you the same.

The truth is that we, as a species, don’t really know how the human brain works. If we did, someone would have emulated it in a computer by now. Neural networks and deep learning don’t learn like humans do. Humans don’t need thousands of examples of a cat to know it’s a cat. Humans can deduce that a cat is a feline; a neural network can’t. Neural nets, and by extension deep learning, have a long way to go.

Who cares? Why not just use “AI” to mean what everyone thinks it means?

AI is riding a massive hype wave right now, because the recent advances have been so great. There are many areas of life where the AI toolbox we have today has massive potential. But if we, as AI practitioners, fail to set the record straight, or worse yet, overhype AI ourselves, all we are doing is setting ourselves up for massive disappointment down the line. The field has seen terrible AI winters before, and we can’t afford another one.
