Parvez Kose
Published in DeepViz
Apr 12, 2022 · 2 min read


Deep Learning Timeline — Explainable AI Visualization (Part 3)

This article continues the background overview of the research ‘Explainable Deep Learning and Visual Interpretability.’

Timeline

1943 — McCulloch and Pitts proposed the McCulloch-Pitts neuron model.

1949 — Hebb postulated the first rule for self-organized learning.

1956 — Researchers convened the Dartmouth Summer Research Project on Artificial Intelligence, a seminal event that motivated a generation of scientists to explore the potential of computing to match human capabilities.

1962 — Frank Rosenblatt introduced the simple single-layer neural network, Perceptron — the precursor of today’s learning algorithms for deep neural networks.
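
To make the idea concrete, here is a minimal, purely illustrative sketch of the perceptron's learning rule in Python. The AND training data, learning rate, and epoch count are hypothetical choices for the example, not details from Rosenblatt's work:

```python
# A minimal sketch of the perceptron learning rule: on each misclassified
# example, nudge the weights toward the correct answer. Here it learns
# the logical AND of two binary inputs.

def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of (inputs, label) pairs with label in {0, 1}."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            # Step activation: fire if the weighted sum exceeds the threshold
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # Update rule: w <- w + lr * (y - pred) * x
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
print([predict(x) for x, _ in data])  # learns AND: [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on a correct separating line; XOR, famously, has no such line, which is the limitation Minsky and Papert later highlighted.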

1962 — David Hubel and Torsten Wiesel published their work “Receptive Fields and Functional Architecture in the Cat’s Visual Cortex,” which revealed how the brain cells that process vision are organized.

1969 — Marvin Minsky and Seymour Papert published Perceptrons, which demonstrated the limitations of single-layer perceptrons and marked the beginning of an AI winter.

1979 — Geoffrey Hinton and James Anderson organized the Parallel Models of Associative Memory workshop, attended by a new generation of neural network pioneers.

1986 — David Rumelhart, Geoffrey Hinton and Ronald Williams popularized backpropagation to train neural networks that pass data through successive layers, allowing them to learn more complex skills.
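
The core of backpropagation can be sketched in a few lines: run an input forward, then chain errors backward through the layers to get each weight's gradient. The tiny 2-2-1 sigmoid network and initial weights below are hypothetical values chosen only to keep the example deterministic; they are not from the article:

```python
import math

# Illustrative backpropagation on a 2-2-1 sigmoid network:
# one gradient step on one example should reduce the squared error.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fixed small starting weights (hypothetical values, for determinism)
w_hidden = [[0.5, -0.4], [0.3, 0.8]]   # input -> hidden
w_out = [0.7, -0.6]                    # hidden -> output

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    y = sigmoid(sum(w * hi for w, hi in zip(w_out, h)))
    return h, y

def step(x, target, lr=0.5):
    """One backpropagation update; returns the squared error before it."""
    h, y = forward(x)
    err = y - target
    # Output-layer delta: error scaled by the sigmoid's derivative y(1-y)
    delta_out = err * y * (1 - y)
    # Hidden-layer deltas: chain the output delta back through w_out
    delta_h = [delta_out * w * hi * (1 - hi) for w, hi in zip(w_out, h)]
    for j, hi in enumerate(h):
        w_out[j] -= lr * delta_out * hi
    for j, dh in enumerate(delta_h):
        for i, xi in enumerate(x):
            w_hidden[j][i] -= lr * dh * xi
    return err ** 2

before = step([1.0, 0.0], target=1.0)
_, y = forward([1.0, 0.0])
after = (y - 1.0) ** 2
print(after < before)  # the update reduces the error on this example
```

Repeating such updates over many examples and layers is, in essence, how deep networks are still trained today.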

1987 — NeurIPS — The First Neural Information Processing Systems (NeurIPS) Conference was held at the Denver Tech Center.

1990 — At Bell Labs, LeCun used backpropagation to train a network that recognized handwritten digits. AT&T later employed it in machines that read checks.

2012 — AlexNet, a neural network model by Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton, won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) image classification competition, marking a major breakthrough in deep learning.

2019 — Geoffrey Hinton, Yann LeCun and Yoshua Bengio, the fathers of deep learning, received the 2018 ACM Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing today.

The next article in this series covers how the modern data upsurge, advanced algorithms and computing power fueled the advent of the deep learning era:

https://medium.com/deepviz/explainable-ai-and-visual-interpretability-background-part-4-a51e12c13a2c

