Once Upon a Time: Part 5.2

Objective: AI before neural networks for lay persons

Sandeep Jain
3 min read · Mar 26, 2018

Before deep learning, there were neural networks.

Before neural networks, there was another invention built in hardware to attempt artificial intelligence.

Continuing with the allegory from Part 5.1, this part explains this invention which serves as a backdrop for neural networks.

The Council of Perceptrons

In the early days, the mountain tribe called Oompa Kicchu was led by a simplistic group of tribal elders who called themselves The Council of Perceptrons. It was their joint responsibility to classify beastly threats from the surrounding forest as scouts brought in snapshots.

The Council of Perceptrons (source)

Each member of the council looked for its own distinct characteristics in the snapshots. Unfortunately, a wide range of beasts roamed the forest. The elderly council was set in its ways and could not adjust to the subtle cues that separated dangerous beasts from harmless ones.

Consider the snapshot below. Could an agent of death be any cuter? The council could not reconcile so much cuteness and such ferocity in the same beast.

Deceptively cute, killing machine (source: a trip to Lake Hudson, Canada)

Sadly, many scouts died from misclassifications caused by the council's inflexibility.

Allegory Unveiled

Perceptrons and Linearity

Before there were neural networks, there were perceptrons.

In any era, the popular media is prone to hype.

In a 1958 press conference organized by the US Navy, Rosenblatt [inventor] made statements about the perceptron that caused a heated controversy among the fledgling AI community; based on Rosenblatt’s statements, The New York Times reported the perceptron to be “the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

Source — Wikipedia

The algorithm is categorized as a “linear classifier”. A linear classifier separates data using a single straight line (a hyperplane, in higher dimensions). The line can shift as the classifier trains, but it always stays straight.

From Wikipedia: A diagram showing a perceptron updating its linear boundary as more training examples are added.(By Elizabeth Goodspeed — Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=40188333)

In the algorithm, a single layer of inputs, each with its own learned weight, is combined into a weighted sum; a threshold on that sum yields the classifying pattern — a high-dimensional line.
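The weighted sum, the threshold, and the learning rule can be sketched in a few lines of Python. This is a minimal illustration of the classic perceptron update rule, not code from any particular library; the AND example and all names here are my own.

```python
# Minimal perceptron sketch: a weighted sum of inputs, hard-thresholded.

def predict(weights, bias, x):
    # The "straight line" boundary: weighted sum plus bias, then a threshold.
    total = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > 0 else 0

def train(samples, labels, epochs=10, lr=0.1):
    # Perceptron learning rule: nudge the weights toward each
    # misclassified example; correct examples leave them untouched.
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# A linearly separable problem (logical AND) is learnable this way.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
and_w, and_b = train(X, [0, 0, 0, 1])
print([predict(and_w, and_b, x) for x in X])  # [0, 0, 0, 1]
```

Because AND can be split by one straight line, the rule converges quickly here.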

This ‘council’ of weighted inputs learns a pattern that is too rigid for many use cases, because of its linearity and small size.

In neural networks, many such ‘councils’ are stacked in layers, and their superior power comes from the non-linearity involved.
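To see what stacking buys you, here is a hand-wired two-layer network of the same thresholded units that solves XOR. The weights are chosen by hand for illustration (one hidden unit computes OR, the other NAND, and the output ANDs them); a real network would learn them.

```python
# Two layers of thresholded units: enough non-linearity to solve XOR.

def step(t):
    return 1 if t > 0 else 0

def neuron(weights, bias, x):
    return step(bias + sum(w * xi for w, xi in zip(weights, x)))

def xor_net(x):
    h1 = neuron([1, 1], -0.5, x)      # hidden unit 1: OR(x1, x2)
    h2 = neuron([-1, -1], 1.5, x)     # hidden unit 2: NAND(x1, x2)
    return neuron([1, 1], -1.5, [h1, h2])  # output: AND(h1, h2)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([xor_net(x) for x in inputs])  # [0, 1, 1, 0]
```

Each individual unit still draws a straight line, but composing them bends the overall decision boundary — the ‘councils of councils’ of the allegory.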

At first promising, perceptrons were quickly found to be far more limited than hoped, and the field of AI fell into decades of stagnation. Perhaps excessive hype was a factor too. It would take the discovery of non-linear classification using neural networks for the field to emerge from the Dark Ages of AI.

In the next part, we’ll explore the period between the advent of neural networks and deep learning.
