History of Neural Networks

Karthikeyan A
4 min read · Sep 1, 2018


Biological Neuron - Reticular Theory (1871–1873)
Joseph von Gerlach proposed that the nervous system is a single continuous network, as opposed to a network of many discrete cells.

Joseph von Gerlach

Neuron Doctrine (1888–1891)
Santiago Ramón y Cajal used Golgi's technique to study the nervous system and proposed that it is actually made up of discrete individual cells forming a network (as opposed to a single continuous network).

Neuron Doctrine

The term neuron (spelled neurone in British English) was itself coined by Waldeyer-Hartz as a way of identifying the cells in question around 1891.

Waldeyer-Hartz

The Final Word
In the 1950s, electron microscopy finally confirmed the neuron doctrine by unambiguously demonstrating that nerve cells are individual cells interconnected through synapses (a network of many individual neurons).

Era of Artificial Neural Networks

McCulloch-Pitts Neuron (1943)
McCulloch (neuroscientist) and Pitts (logician) proposed a highly simplified model of the neuron.

McCulloch-Pitts neuron model
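
A minimal sketch of the model in Python (the threshold and inputs below are illustrative, not taken from the original paper): the unit sums its binary inputs and fires only if the sum reaches a fixed threshold.

```python
def mcculloch_pitts_neuron(inputs, threshold):
    """McCulloch-Pitts unit: sum binary inputs, fire (output 1) at or above the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# Example: a 2-input AND gate (threshold = 2)
print(mcculloch_pitts_neuron([1, 1], threshold=2))  # 1
print(mcculloch_pitts_neuron([1, 0], threshold=2))  # 0
```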

Perceptrons (1958)

The perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt.

Perceptron

The perceptron introduced weights and a bias into artificial neural networks. This discovery became a stepping stone for subsequent advances in AI research.
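
A rough sketch of that idea (the weights and bias below are hand-picked, illustrative values rather than Rosenblatt's learning procedure): each input is scaled by a weight, a bias shifts the decision boundary, and the result is thresholded.

```python
import numpy as np

def perceptron(x, w, b):
    """Perceptron unit: fire if the weighted sum of inputs plus the bias is non-negative."""
    return 1 if np.dot(w, x) + b >= 0 else 0

# Example: weights and bias chosen by hand to implement OR
w = np.array([1.0, 1.0])
b = -0.5
print(perceptron(np.array([0, 1]), w, b))  # 1
print(perceptron(np.array([0, 0]), w, b))  # 0
```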

Multilayer Perceptrons

The MLP was introduced by Ivakhnenko in 1965. It is a perceptron with one or more hidden layers.
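
A minimal sketch of a forward pass through a single hidden layer (modern notation with illustrative shapes and a tanh activation; Ivakhnenko's original networks were constructed differently):

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """One hidden layer with a nonlinearity, followed by a linear output layer."""
    h = np.tanh(W1 @ x + b1)  # hidden layer
    return W2 @ h + b2        # output layer

# Random weights just to show the shapes: 2 inputs -> 3 hidden units -> 1 output
x = np.array([0.5, -1.0])
W1, b1 = np.random.randn(3, 2), np.zeros(3)
W2, b2 = np.random.randn(1, 3), np.zeros(1)
print(mlp_forward(x, W1, b1, W2, b2))
```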

Winter of Neural Networks (1969–1986)

The famous book "Perceptrons" by Minsky and Papert outlined the limits of what perceptrons could do (for example, a single perceptron cannot represent XOR, which is not linearly separable), which led to the abandonment of connectionist AI.

Backpropagation (1986) - Rebirth of Neural Networks

The basics of continuous backpropagation were derived in the context of control theory by Henry J. Kelley in 1960.

In 1961, Arthur E. Bryson derived a similar method using principles of dynamic programming.

In 1970, Linnainmaa published the general method for automatic differentiation (AD) of discrete connected networks of nested differentiable functions (the basis for the modern backpropagation algorithm).

In 1986 Rumelhart, Hinton and Williams showed experimentally that this method can generate useful internal representations of incoming data in hidden layers of neural networks.
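
As a rough illustration (a modern toy example, not the 1986 experiments; the layer sizes, learning rate, and squared-error loss are assumptions chosen for simplicity), the sketch below trains a one-hidden-layer network on XOR by propagating the output error backwards to update both layers:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: push the output error back through the hidden layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```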

Universal Approximation Theorem (1989)

The theorem states that simple neural networks can represent a wide variety of interesting functions when given appropriate parameters; however, it does not touch upon the algorithmic learnability of those parameters.

The first version of the UAT was proved by George Cybenko in 1989 for sigmoid activation functions.
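
Stated loosely (a sketch rather than Cybenko's exact formulation): for any continuous function f on a compact domain and any ε > 0, there exist a width N and parameters v_i, w_i, b_i such that

$$\sup_{x}\;\Bigl|\, f(x) - \sum_{i=1}^{N} v_i\,\sigma\!\left(w_i^{\top} x + b_i\right) \Bigr| < \varepsilon,$$

where σ is a sigmoidal activation function.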

Convolutional Neural Network (1989)
Handwritten digit recognition using backpropagation over a Convolutional Neural Network (LeCun et al.).

CNN design follows the organization of visual processing in living organisms.

Work by Hubel and Wiesel in the 1950s and 1960s showed that cat and monkey visual cortexes contain neurons that individually respond to small regions of the visual field. Provided the eyes are not moving, the region of visual space within which visual stimuli affect the firing of a single neuron is known as its receptive field.
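
A minimal sketch of the local computation inside a convolutional layer (a plain 2-D cross-correlation, as most CNN libraries implement "convolution"; the kernel below is an illustrative edge detector, not part of LeCun's network): each output value depends only on a small patch of the input, its receptive field.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image; each output value sees only a small local patch."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(6, 6)
edge_kernel = np.array([[1.0, -1.0]])  # responds to horizontal intensity changes
print(conv2d(image, edge_kernel).shape)  # (6, 5)
```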

DEEPER AND DEEPER

New Era of Neural Networks

2006

Unsupervised Pre-Training (2006)

An autoencoder is a type of ANN used to learn efficient data codings in an unsupervised manner.

The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for the purpose of dimensionality reduction.

Hinton and Salakhutdinov described an effective way of initializing the weights that allows deep autoencoder networks to learn a low-dimensional representation of data.
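
A minimal sketch of the idea (a tiny linear autoencoder with illustrative sizes trained by plain gradient descent, not the deep, layer-wise pre-trained networks of the paper): the encoder compresses each input into a low-dimensional code, and the decoder tries to reconstruct the original from that code.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))          # 100 samples, 20 features

W_enc = rng.normal(size=(20, 3)) * 0.1  # encoder: 20 features -> 3-dim code
W_dec = rng.normal(size=(3, 20)) * 0.1  # decoder: 3-dim code -> 20 features
lr = 0.01

for _ in range(1000):
    code = X @ W_enc                    # low-dimensional representation
    recon = code @ W_dec                # reconstruction of the input
    err = recon - X                     # reconstruction error
    # gradient steps on the mean squared reconstruction loss
    W_dec -= lr * code.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

print(np.mean(err ** 2))  # reconstruction error shrinks during training
```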

New record on MNIST (2011)
Ciresan et al. set a new record on the MNIST dataset using good old backpropagation on GPUs (GPUs enter the scene).

Winning more visual recognition challenges (2012–2016)
