Brief History of Neural Networks

Kate Strachnyi
Published in Analytics Vidhya
Jan 23, 2019 · 3 min read

Although the study of the human brain is thousands of years old, the first step toward neural networks took place in 1943, when Warren McCulloch, a neurophysiologist, and Walter Pitts, a young mathematician, wrote a paper on how neurons might work. They modeled a simple neural network with electrical circuits.

In 1949, Donald Hebb reinforced the concept of neurons in his book, The Organization of Behavior. It pointed out that neural pathways are strengthened each time they are used.

In the 1950s, Nathaniel Rochester of the IBM research laboratories led the first effort to simulate a neural network.

In 1956, the Dartmouth Summer Research Project on Artificial Intelligence provided a boost to both artificial intelligence and neural networks, stimulating research in AI as well as in the much lower-level neural processing of the brain.

In 1957, John von Neumann suggested imitating simple neuron functions by using telegraph relays or vacuum tubes.

In 1958, Frank Rosenblatt, a neurobiologist at Cornell, began work on the Perceptron. He was intrigued by the operation of the eye of a fly: much of the processing that tells a fly to flee is done in its eye. The Perceptron, which resulted from this research, was built in hardware and is the oldest neural network still in use today. A single-layer perceptron was found to be useful in classifying a continuous-valued set of inputs into one of two classes. The perceptron computes a weighted sum of the inputs, subtracts a threshold, and passes one of two possible values out as the result.
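As a rough sketch of that weighted-sum-and-threshold computation (not Rosenblatt's original hardware implementation; the weights, threshold, and inputs below are made up purely for illustration), the idea can be expressed in a few lines of Python:

```python
# Minimal sketch of the perceptron computation described above:
# a weighted sum of the inputs, minus a threshold, mapped to one of two classes.
# Weights, threshold, and inputs here are illustrative, not historical values.

def perceptron(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum - threshold > 0 else 0

# Example: three continuous-valued inputs classified into class 1 or class 0.
print(perceptron([0.5, -1.2, 3.0], weights=[0.4, 0.6, 0.1], threshold=0.2))  # -> 0
```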

In 1959, Bernard Widrow and Marcian Hoff of Stanford developed models they called ADALINE and MADALINE, named for their use of Multiple ADAptive LINear Elements. MADALINE was the first neural network to be applied to a real-world problem: an adaptive filter that eliminates echoes on phone lines. This neural network is still in commercial use.

In 1969, Marvin Minsky and Seymour Papert demonstrated the limitations of the Perceptron in their book, Perceptrons.

Progress on neural network research then stalled. Fear, unfulfilled claims, and other factors led respected voices to critique the field, and much of the funding was halted. This period of stunted growth lasted through 1981.

In 1982, John Hopfield presented a paper to the National Academy of Sciences. His approach was to create useful devices, and he was likeable, articulate, and charismatic.

Also in 1982, at the US-Japan Joint Conference on Cooperative/Competitive Neural Networks, Japan announced its Fifth-Generation computing effort, leaving the US worried about being left behind. Soon funding was flowing once again.

In 1985, the American Institute of Physics began what has become an annual meeting, Neural Networks for Computing. By 1987, the first International Conference on Neural Networks held by the Institute of Electrical and Electronics Engineers (IEEE) drew more than 1,800 attendees.

In 1997, a recurrent neural network architecture, Long Short-Term Memory (LSTM), was proposed by Hochreiter and Schmidhuber.

In 1998, Yann LeCun published Gradient-Based Learning Applied to Document Recognition.

Several other steps have been taken to get us to where we are now; today, discussions of neural networks are prevalent; the future is here! Currently, much neural network development is simply proving that the principle works, and this research is producing neural networks that, due to processing limitations, can take weeks to learn.
