Simplified AI: Timeline

Siby Charley
5 min read · Sep 12, 2020

A timeline of important events in AI history so far

Timeline Flow

There is a constant urge to understand how AI evolved into the techniques and algorithms we see around us today. This post aims to give a bird's-eye view of the important events in AI history so far, in chronological order.

1940’s and 1950's

  • 1943 -> Walter Pitts and Warren McCulloch gave a mathematical model of the biological neuron in their paper “A Logical Calculus of the Ideas Immanent in Nervous Activity”.
  • 1950 -> Alan Turing proposed the Turing test as a culmination of his ideas around the question “Can machines think?”.
  • 1952 -> Arthur Samuel at IBM developed a program for playing checkers. It was able to identify patterns and improve its game over time. He later coined the term “machine learning” for the field of study that gives computers the ability to learn without being explicitly programmed.
  • 1956 -> The Dartmouth Workshop was held, where the term “Artificial Intelligence” was coined; it is widely considered the founding event of the field.
  • 1957 -> American psychologist Frank Rosenblatt, in the report “The Perceptron: A Perceiving and Recognizing Automaton”, proposed the perceptron, a trainable model capable of binary classification (a minimal sketch follows this list).
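
As a rough illustration of the idea, here is a minimal sketch of the classic perceptron learning rule in Python/NumPy. The toy data, learning rate and epoch count are made up for illustration, not taken from Rosenblatt's report.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge the weights whenever a sample is misclassified.
    X: (n_samples, n_features), y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:   # misclassified (or on the boundary)
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy linearly separable data (logical AND): class +1 only for [1, 1]
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))   # expected: [-1 -1 -1  1]
```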

1960’s and 1970's

  • 1960 -> Backpropagation (BP) was introduced in the context of control theory by Henry J. Kelley in his paper “Gradient Theory of Optimal Flight Paths”; it was later refined by Stuart Dreyfus using the chain rule and ultimately applied to neural networks (see the sketch after this list).
  • 1967 -> Cover and Hart analysed the Nearest Neighbor algorithm for pattern classification in their paper “Nearest Neighbor Pattern Classification”.
  • 1969 -> The XOR problem associated with perceptrons was highlighted by Marvin Minsky and Seymour Papert in their book “Perceptrons”. They showed that a single-layer perceptron can only solve linearly separable problems.
  • 1973 -> The Lighthill report, commissioned by the UK research council, criticised the lack of progress in AI research, ultimately leading to funding cuts and the start of the first AI winter.
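
To make the chain-rule idea behind backpropagation concrete, here is a tiny illustrative sketch in a modern neural-network setting (not Kelley's control-theory formulation): gradients of a two-layer scalar “network” are obtained by multiplying local derivatives backwards. The numbers are arbitrary.

```python
import numpy as np

# Forward pass: x -> h = tanh(w1 * x) -> y_hat = w2 * h, loss = (y_hat - y)^2
x, y = 0.5, 1.0
w1, w2 = 0.8, -0.3

h = np.tanh(w1 * x)
y_hat = w2 * h
loss = (y_hat - y) ** 2

# Backward pass: multiply local derivatives, outermost to innermost.
dloss_dyhat = 2 * (y_hat - y)
dyhat_dw2 = h
dyhat_dh = w2
dh_dw1 = (1 - h ** 2) * x            # derivative of tanh(w1*x) w.r.t. w1

grad_w2 = dloss_dyhat * dyhat_dw2           # chain rule, one step back
grad_w1 = dloss_dyhat * dyhat_dh * dh_dw1   # chain rule, two steps back
print(grad_w1, grad_w2)
```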

1980's

1990's

  • 1991 -> Sepp Hochreiter, in his diploma thesis “Untersuchungen zu dynamischen neuronalen Netzen”, showed that deeper networks suffer from vanishing/exploding gradients, making training impractical. This, along with the slow nature of training, contributed to the loss of interest in the field often referred to as the second AI winter.
  • 1995 -> SVMs entered the mainstream, building on the earlier work of Vapnik, and were formally described in the paper “Support-Vector Networks” by Corinna Cortes and Vladimir Vapnik.
  • 1997 -> AdaBoost was proposed as an ensemble technique for combining weak learners by Yoav Freund and Robert Schapire in their paper “A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting” (a small sketch follows this list).
  • 1997 -> Deep Blue, built by IBM, managed to beat the reigning chess champion Garry Kasparov 2–1 in a six-game match, with three games drawn. The technology in Deep Blue relied less on the machine learning popular today and more on efficient search and evaluation of potential moves.
  • 1997 -> LSTM (Long Short-Term Memory) was proposed by Sepp Hochreiter and Jürgen Schmidhuber as a technique to overcome the decay of error signals over time in recurrent networks such as RNNs.
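
For readers who want to see the boosting idea in code, below is a minimal, illustrative AdaBoost with decision stumps. It is a simplified sketch of the Freund–Schapire scheme, not their original formulation; the function names and round count are my own choices.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost with decision stumps; y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                 # sample weights start uniform
    stumps = []
    for _ in range(n_rounds):
        best = None
        # pick the stump (feature, threshold, sign) with the lowest weighted error
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for sign in (+1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak learner
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)          # up-weight misclassified samples
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def predict_adaboost(stumps, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in stumps)
    return np.sign(score)
```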

2000's

  • 2001 -> Random Forests, proposed by Leo Breiman, combine ensemble bagging of decision trees with random selection of features, creating a collection of decision trees with controlled variation (a short sketch follows this list).
  • 2006 -> Geoffrey Hinton, Simon Osindero and Yee-Whye Teh proposed Deep Belief Networks (DBNs) in their paper “A fast learning algorithm for deep belief nets”; a DBN is essentially a stack of Restricted Boltzmann Machines (RBMs) trained with a greedy layer-wise algorithm.
  • 2008 -> To speed up training, GPUs started being adopted in place of CPUs. This proved to be a crucial step in the wider adoption of machine learning in the following years.
  • 2009 -> The ImageNet dataset was released by Stanford professor Fei-Fei Li and her team as a means to accelerate the application of AI to image analysis; it grew to roughly 14 million annotated images and spawned the ImageNet challenge (ILSVRC).
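
As a quick illustration of the bagging-plus-random-features idea behind Random Forests, here is a short scikit-learn sketch; the synthetic data and hyperparameters are arbitrary choices for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each tree sees a bootstrap sample of rows and a random subset of features
# at every split ("sqrt" of the feature count here), which decorrelates the trees.
clf = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                             bootstrap=True, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```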

2010's

  • 2011 -> The vanishing gradient problem had long limited the AI community, acting as a blocker to training complex networks. Glorot, Bordes and Bengio proposed the use of Rectified Linear Units (ReLU) as activation functions to mitigate vanishing gradients when training deeper architectures in their work “Deep Sparse Rectifier Neural Networks” (a small illustration follows this list).
  • 2012 -> AlexNet was proposed as part of the ImageNet classification challenge and was in many ways a pivotal network in terms of its deeper architecture, use of ReLU and dropout, and faster GPU-based training, recording an accuracy gain of roughly 9 percentage points over the previous best, for a top-5 accuracy of about 84%.
  • 2014 -> Generative Adversarial Networks (GANs), introduced by Ian Goodfellow, were a fresh take on generative techniques: a generator is trained against a discriminator, and the discriminator can also be used to build a semi-supervised classifier alongside the generator.
  • 2016 -> Go is a board game that originated in China and is known for its complexity. AlphaGo, a computer program built by DeepMind, managed to beat the human champion Lee Sedol; it combines advanced tree search with policy and value networks trained via reinforcement learning.
  • 2017 -> Transformers, proposed by Vaswani et al. in the paper “Attention Is All You Need”, proved to be one of the major breakthroughs in Natural Language Processing (NLP). The proposed multi-headed self-attention lets every position in a sequence attend to every other position, producing aligned, context-aware representations of the sequence (a single-head sketch follows below).
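
To see why ReLU helps with vanishing gradients, note that its derivative is exactly 1 for positive inputs, so repeated multiplication through many layers does not shrink the error signal the way saturating activations like the sigmoid do. A toy comparison (numbers chosen only for illustration):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def relu_grad(x):
    return (x > 0).astype(float)       # 1 for positive inputs, 0 otherwise

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1 - s)                 # at most 0.25, so it shrinks the gradient

# Gradient factor contributed by 10 stacked activations at a positive pre-activation:
x = 2.0
print("ReLU:   ", relu_grad(x) ** 10)      # 1.0   -> signal preserved
print("Sigmoid:", sigmoid_grad(x) ** 10)   # ~1e-10 -> signal vanishes
```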
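
And since self-attention is the core of the transformer, here is a single-head, scaled dot-product attention sketch in NumPy. The dimensions are arbitrary, and this deliberately omits masking, multiple heads, positional encodings and the feed-forward layers described in the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # how much each position attends to every other
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V                       # weighted mix of value vectors

# Toy sequence: 4 tokens with 8-dim embeddings, projected to 8-dim Q/K/V.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (4, 8)
```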
