History of Neural Networks

Martyna Panek
4 min read · Mar 5, 2023

What is A Neural Network?

A neural network is a sequence of algorithms that attempts to recognize relationships in a data set through a process that mimics the way the human brain operates.

Like our bodies, neural networks can adapt to changing input data; thanks to this ability, they can generate better results without needing to redesign their output criteria.

An overview of the history of artificial neurons is provided below.

History of Neural Networks

1940s — In the late 1940s, Donald Olding Hebb formulated the Hebbian learning hypothesis, based on the mechanism of neural plasticity. It became well known as Hebbian theory: a neuropsychological theory contending that repeated and persistent stimulation of a postsynaptic cell by a presynaptic cell results in an increase in synaptic efficacy.

1943 — An article on how neurons might work was written in 1943 by the mathematician Walter Pitts and the neurophysiologist Warren McCulloch. Using threshold logic, they described a simple neural network model that imitated the operation of neurons in the human brain.
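
As a rough illustration of the threshold-logic idea (a minimal sketch in Python, not McCulloch and Pitts' original formalism), such a unit sums its binary inputs and fires only when the sum reaches a fixed threshold:

```python
def mcculloch_pitts_unit(inputs, threshold):
    """Fire (return 1) when enough binary inputs are active; otherwise return 0."""
    return 1 if sum(inputs) >= threshold else 0

# With a threshold of 2, the unit behaves like a logical AND of two inputs.
print(mcculloch_pitts_unit([1, 1], threshold=2))  # 1
print(mcculloch_pitts_unit([1, 0], threshold=2))  # 0
```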

1949 — In 1949, Donald Olding Hebb published a book titled “The Organization of Behavior”, in which he described his hypothesis of neural plasticity: neural pathways are reinforced each time they are used, a concept central to human learning. This idea became known as Hebbian theory and is still widely used in the development of artificial neural networks.
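
To make the “pathways are reinforced each time they are used” idea concrete, here is a minimal Python sketch of a Hebbian-style weight update; the learning rate and vector shapes are illustrative assumptions, not Hebb's original formulation:

```python
def hebbian_update(weights, pre, post, learning_rate=0.1):
    """Strengthen each weight in proportion to how strongly the presynaptic
    activity (pre) and the postsynaptic activity (post) occur together."""
    return [w + learning_rate * x * post for w, x in zip(weights, pre)]

weights = [0.0, 0.0]
# Repeated co-activation of input 0 and the output keeps reinforcing weight 0,
# while the unused pathway (input 1) stays at zero.
for _ in range(3):
    weights = hebbian_update(weights, pre=[1, 0], post=1)
print(weights)  # roughly [0.3, 0.0]
```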

1954 — In 1954, computing machines were used for the first time to simulate a Hebbian network. This work was carried out by Belmont G. Farley and Wesley A. Clark at MIT. The simulation of Hebbian networks on computing machines was a significant milestone in artificial intelligence and neural network research: it allowed researchers to experiment with and study the behavior of neural networks far more comprehensively.

1956 — In addition to the machines used in 1954 by Belmont G. Farley and Wesley A. Clark, further neural network simulations were carried out in 1956 by Nathaniel Rochester, John Henry Holland, Lois Haibt and William L. Duda at IBM. Their experiments tested a cell assembly theory of the action of the brain on a large digital computer, and were among the earliest large-scale simulations of neural networks.

1958 — The perceptron was created by Frank Rosenblatt in 1958. The perceptron is the simplest neural network: a single, slightly modified binary neuron. His algorithm learned pattern recognition in a two-layer computer network, adjusting its weights by simple addition and subtraction. Rosenblatt also described circuits not present in the basic perceptron, such as the exclusive-or (XOR) circuit, whose computation could not be handled until Paul Werbos created the back-propagation algorithm in 1975, but more on this in the next part of the article.
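
A rough Python sketch of that learning procedure, assuming illustrative data and a small epoch count rather than Rosenblatt's original hardware: the weights are nudged by simple addition and subtraction whenever the thresholded output is wrong.

```python
def perceptron_train(samples, labels, epochs=10):
    """Train a single binary threshold unit: move the weights toward the input
    when the output is too low, and away from it when the output is too high."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            output = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = target - output  # +1, 0, or -1
            weights = [w + error * xi for w, xi in zip(weights, x)]
            bias += error
    return weights, bias

# Linearly separable data such as logical OR is learned; XOR, by contrast, is not.
weights, bias = perceptron_train([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 1, 1, 1])
print(weights, bias)
```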

1959 — This year, Bernard Widrow and Marcian Hoff of Stanford University created two models called ADALINE and MADALINE. ADALINE (Adaptive Linear Neuron) is a single-layer network that uses a linear activation function; it was created to recognize binary patterns, so that while reading streaming bits from a telephone line it could predict the next bit. MADALINE (Multiple ADALINE), on the other hand, is a multilayer neural network consisting of multiple ADALINE units connected in parallel. It is used for solving classification problems and was the first neural network applied to a real-world problem: eliminating echoes on telephone lines with an adaptive filter. Interestingly, although this model is now old, it is still used commercially.
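
As a minimal sketch (the weights and inputs below are illustrative, not Widrow and Hoff's original circuit), an ADALINE unit computes a weighted sum with a linear activation and only thresholds it at the end to produce a binary prediction such as the next bit on the line:

```python
def adaline_output(weights, bias, inputs):
    """Linear activation: the unit's raw output is simply the weighted sum."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def adaline_predict(weights, bias, inputs):
    """Threshold the linear output to obtain a binary decision."""
    return 1 if adaline_output(weights, bias, inputs) >= 0 else 0

# Example: predict the next bit from the three most recent bits on the line.
print(adaline_predict([0.5, 0.3, 0.2], bias=-0.4, inputs=[1, 0, 1]))  # 1
```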

1962 — In 1962, Bernard Widrow and Marcian Hoff created a learning rule, also known as the Widrow-Hoff or delta learning rule, which examines the value before the weight adjusts it. The rule calculates the difference between the output of a neural network and the desired output, and uses that difference to adjust the network's weights so as to reduce the error. It is still widely used in modern neural network training algorithms.
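
A minimal sketch of a single delta-rule update under these assumptions (illustrative learning rate, linear ADALINE-style output): the weight change is proportional to the error, i.e. the desired output minus the actual output.

```python
def delta_rule_step(weights, bias, inputs, desired, learning_rate=0.1):
    """Widrow-Hoff update: adjust each weight by learning_rate * error * input,
    where error is the desired output minus the network's linear output."""
    output = sum(w * x for w, x in zip(weights, inputs)) + bias
    error = desired - output
    new_weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + learning_rate * error
    return new_weights, new_bias

# One update step nudges the weights toward reducing the error.
weights, bias = delta_rule_step([0.0, 0.0], 0.0, inputs=[1, 1], desired=1)
print(weights, bias)  # [0.1, 0.1] 0.1
```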

1969 — The book “Perceptrons” by Marvin Minsky and Seymour Papert was published. It argued that Rosenblatt's single-layer perceptron approach could not be extended effectively to multilayer neural networks, because determining the correct weights for the intermediate layers from the final output would require an enormous, potentially unbounded, number of iterations. The book pointed out the limitations of single-layer perceptrons in solving complex problems and concluded that multilayer perceptrons, though known to be more powerful, were difficult to train. As a result, many researchers lost interest in neural networks and turned to other machine learning approaches.

Marvin Minsky and Seymour Papert’s publication “Perceptrons: An Introduction to Computational Geometry” in 1969 is often cited as the main reason why research on neural networks stagnated for almost two decades.

However, in the 1980s, research on neural networks resumed as new training algorithms were developed and computers became more powerful. In the second part of this article, we will explore the developments that have occurred since the resurgence of neural network research in the 1980s.

References

  1. https://en.wikipedia.org/wiki/Donald_O._Hebb
  2. McCulloch, Warren; Walter Pitts (1943). “A Logical Calculus of Ideas Immanent in Nervous Activity”. Bulletin of Mathematical Biophysics. 5 (4): 115–133. doi:10.1007/BF02478259
  3. Hebb, D. O. (1949). The Organization of Behavior: A Neuropsychological Theory. New York: Wiley and Sons. ISBN 9780471367277.
  4. Farley, B.G.; W.A. Clark (1954). “Simulation of Self-Organizing Systems by Digital Computer”. IRE Transactions on Information Theory. 4 (4): 76–84. doi:10.1109/TIT.1954.1057468
  5. Rochester, N.; J.H. Holland; L.H. Haibt; W.L. Duda (1956). “Tests on a cell assembly theory of the action of the brain, using a large digital computer”. IRE Transactions on Information Theory. 2 (3): 80–93. doi:10.1109/TIT.1956.1056810.
  6. Rosenblatt, F. (1958). “The Perceptron: A Probabilistic Model For Information Storage And Organization In The Brain”. Psychological Review. 65 (6): 386–408. CiteSeerX 10.1.1.588.3775. doi:10.1037/h0042519. PMID 13602029.
  7. Minsky, Marvin; Papert, Seymour (1969). Perceptrons: An Introduction to Computational Geometry. MIT Press. ISBN 978-0-262-63022-1.
  8. https://www.nytimes.com/1958/07/08/archives/new-navy-device-learns-by-doing-psychologist-shows-embryo-of.html?url=http%3A%2F%2Ftimesmachine.nytimes.com%2Ftimesmachine%2F1958%2F07%2F08%2F83417341.html
  9. https://youtu.be/aygSMgK3BEM
  10. https://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks/History/history1.html
