A Brief History of Significant AI Achievements in the Last Century

Aneeqa Mobashir
Nov 24, 2023

Frank Rosenblatt of Cornell University pictured with the “perceptron” — what he described as the first machine “capable of having an original idea.”

This timeline provides a glimpse into the major milestones that have shaped the field of artificial intelligence over the past century.

1936 — Turing Machine:

  • Alan Turing introduces the concept of a theoretical computing machine, laying the foundation for the theory of computation.

1950 — Turing Test:

  • Alan Turing proposes the Turing Test as a measure of a machine’s ability to exhibit human-like intelligence.

1956 — Dartmouth Conference:

  • The term “Artificial Intelligence” is coined at the Dartmouth Conference, marking the birth of AI as a field of study.

1958 — The Perceptron:

  • Frank Rosenblatt of Cornell University develops the perceptron, an early form of a neural network (a minimal sketch of its learning rule follows below).
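
To make the idea concrete, here is a minimal sketch of a perceptron learning the logical AND function with Rosenblatt-style error-driven updates. The dataset, epoch count, and step activation are illustrative choices for this sketch, not a description of Rosenblatt’s original hardware.

```python
import numpy as np

# Toy data: learn the logical AND of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])  # AND labels

w = np.zeros(2)  # weights
b = 0.0          # bias

# Rosenblatt's rule: adjust weights only when the prediction is wrong.
for _ in range(10):  # a few passes over the data suffice here
    for xi, target in zip(X, y):
        pred = int(np.dot(w, xi) + b > 0)  # step activation
        w += (target - pred) * xi
        b += (target - pred)

print([int(np.dot(w, xi) + b > 0) for xi in X])  # -> [0, 0, 0, 1]
```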

1966 — ELIZA:

  • Joseph Weizenbaum of MIT creates ELIZA, an early natural language processing program that simulates conversation. ELIZA was one of the first chatbots; its best-known script played the role of a Rogerian psychotherapist, and Weizenbaum originally built it to demonstrate how superficial communication between humans and machines could be.

1974 to 1980 — First AI Winter:

  • Funding and interest in AI research decline due to unmet expectations and technical challenges, marking the start of the first “AI winter.”

1980s — Expert Systems:

  • AI research focuses on expert systems, rule-based programs designed to emulate human expertise in specific domains. Edward Feigenbaum, founder of the Knowledge Systems Laboratory at Stanford University, is known as “the father of expert systems.”

1986 — Backpropagation Algorithm:

  • The backpropagation algorithm, originally described by Paul Werbos in his 1974 thesis, is rediscovered and popularized by David Rumelhart, Geoffrey Hinton, and Ronald Williams. Backpropagation becomes a key development in training artificial neural networks (a toy implementation is sketched below).
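
For intuition, here is a toy NumPy implementation of backpropagation: a small two-layer network learns XOR by pushing the output error backward through the chain rule. The hidden-layer size, learning rate, and iteration count are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic problem a single perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units, randomly initialized.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # network output

    # Backward pass: chain rule, layer by layer (squared-error loss).
    d_out = (out - y) * out * (1 - out)  # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # error propagated to hidden layer

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```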

Late 1980s and early 1990s — Second AI Winter:

  • Similar to the first AI winter in the 1970s, the second AI winter was marked by unmet expectations and challenges in developing practical applications.

1997 — Deep Blue vs. Kasparov:

  • IBM’s Deep Blue defeats world chess champion Garry Kasparov, showcasing the potential of AI in strategic decision-making.

2000s — Rise of Machine Learning:

  • Machine learning gains prominence, with advancements in algorithms and the availability of large datasets.

2011 — IBM’s Watson Wins Jeopardy!:

  • IBM’s Watson demonstrates its natural language processing capabilities by winning the game show Jeopardy!.

2012 — ImageNet Competition:

  • AlexNet, developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, wins the ImageNet Large Scale Visual Recognition Challenge (established by Fei-Fei Li), demonstrating the power of deep learning in image recognition.

2014 — Google acquires DeepMind:

  • DeepMind, founded in 2010 by Demis Hassabis, Shane Legg, and Mustafa Suleyman with an interdisciplinary approach to building general AI systems, is acquired by Google.

2016 — AlphaGo vs. Lee Sedol:

  • DeepMind’s AlphaGo defeats Lee Sedol, a world champion Go player, earning the company widespread recognition.

2017 — GANs and Deepfake Technology:

  • Generative Adversarial Networks (GANs), introduced by Ian Goodfellow and colleagues in 2014, gain wider recognition, leading to advancements in realistic image, speech, and video generation and to the first widely circulated “deepfake” videos.

2018 — BERT and Transformer Models:

  • BERT (Bidirectional Encoder Representations from Transformers) revolutionizes natural language processing, and transformer models become widely adopted. BERT reads text bidirectionally, conditioning on both the left and right context of each word, and learns by predicting masked-out words in a sentence. This makes it effective for downstream tasks such as question answering (see the fill-mask sketch below).
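
For a feel of the masked-word objective, the sketch below uses the Hugging Face transformers library (assuming the package and a backend such as PyTorch are installed; the bert-base-uncased weights download on first use) to ask a pretrained BERT to fill in a masked word from its two-sided context.

```python
# Masked-word prediction with a pretrained BERT model.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# BERT sees the words on both sides of [MASK] before guessing.
for pred in fill("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))  # e.g. "paris" ranked first
```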

2020 — OpenAI’s GPT-3:

  • OpenAI releases GPT-3, a highly advanced language model with 175 billion parameters, showcasing the capabilities of large-scale deep learning.

2021 — Advances in Quantum Machine Learning:

  • Quantum machine learning gains attention as researchers explore whether quantum computers, which exploit effects such as superposition and entanglement, could speed up or improve certain AI algorithms. The field remains largely exploratory, with practical advantages still being mapped out.

2022 — Advances in AI Hardware Accelerate:

  • Progress continues in specialized hardware for AI, such as graphics processing units (GPUs) and tensor processing units (TPUs), improving the efficiency of AI model training and deployment.

2023 — AI and GenAI Gain Popularity:

  • AI and generative AI (GenAI) gain popularity, attracting investments from venture capital firms and established companies. The growing interest reflects AI’s transformative potential across various industries.
