AI: Is it really something new? A brief chronology
The short answer is no.
Although we were dazzled by OpenAI’s public launch of ChatGPT on November 30, 2022, it’s worth noting that the first related idea was proposed more than 70 years earlier.
On October 1, 1950, the journal Mind (published by Oxford University Press) carried an article called “Computing Machinery and Intelligence” in its Volume LIX, Issue 236. The author was Alan Mathison Turing (1912–1954).
Yes! The same mathematician, computer scientist, and codebreaker played by Benedict Cumberbatch in the movie The Imitation Game.
His article begins with the words:
I PROPOSE to consider the question, “Can machines think?”
And ends with the conclusion:
We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again I do not know what the right answer is, but I think both approaches should be tried.
As it turns out, Turing was right.
I will not discuss whether machines can think, but both approaches Turing proposed have indeed been tried, and both have succeeded.
AI not only understands and speaks English; ChatGPT, for instance, is a multilingual chatbot that currently supports more than 50 languages. And if you think these models are limited to chat, note that the ChatGPT and Whisper models are now available on the OpenAI API, giving developers access to cutting-edge language (not just chat!) and speech-to-text capabilities.
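As a hedged illustration, this is how a developer could call both models with the 2023-era openai Python package (the pre-1.0 interface; model names and methods may have changed since):

```python
# Sketch of 2023-era OpenAI API usage (pre-1.0 `openai` package).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Language: chat completion with the ChatGPT model.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Can machines think?"}],
)
print(chat.choices[0].message.content)

# Speech-to-text: transcription with the Whisper model.
with open("speech.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)
print(transcript.text)
```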
In addition, machines are capable of more than playing chess:
AlphaGo is the first computer program to defeat a professional human Go player, the first to defeat a Go world champion, and is arguably the strongest Go player in history.
And if you believe that Go is easier than chess, consider that the first player in chess has 20 possible opening moves, while in Go the first player has 361 possible moves.
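A quick back-of-the-envelope comparison makes the difference concrete, assuming (as a simplification) a constant branching factor for each game:

```python
# Simplified game-tree growth: about 20 legal moves per turn in chess
# vs. up to 361 in Go. A constant branching factor is an approximation.
for plies in (1, 2, 3, 4):
    print(f"{plies} plies: chess ~{20 ** plies:,} lines, Go ~{361 ** plies:,} lines")
```

After only four plies, chess has about 160,000 lines of play, while Go already has about 17 billion.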
When did this come true? In 2022, with OpenAI?
Artificial Intelligence (AI) has been evolving gradually, though not always with the same level of public access. The journey has certainly been a long one.
Let’s explore some milestones to see the evolution:
1943
- Walter Pitts and Warren McCulloch created a computer model based on the neural networks of the human brain. This is considered the first theoretical step towards Deep Learning (DL).
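To see how simple that first model was, here is a minimal sketch of a McCulloch-Pitts-style threshold unit (my own illustration, not their 1943 notation):

```python
# A McCulloch-Pitts-style neuron: binary inputs, fixed weights, and a
# threshold. Illustrative sketch, not the original 1943 formalism.
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

# With equal weights and a threshold of 2, the unit computes logical AND:
for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, "->", mcculloch_pitts([x, y], [1, 1], threshold=2))
```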
1949
- Donald Hebb, in his book titled “The Organization of Behavior”, defined theories on neuron excitement and communication between neurons. This model of brain cell interaction forms the basis for the first theoretical steps of Machine Learning (ML).
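Hebb’s principle is often summarized as “cells that fire together wire together”. Here is a minimal sketch of the weight-update rule later derived from it (the code is an illustration, not Hebb’s own formalism):

```python
# Hebbian learning in one line: dw = eta * x * y, so a connection
# strengthens whenever the two neurons it joins are active together.
def hebbian_update(w, x, y, eta=0.1):
    """Strengthen weight w when pre-synaptic x and post-synaptic y co-fire."""
    return w + eta * x * y

w = 0.0
for _ in range(5):   # both neurons active on five occasions
    w = hebbian_update(w, x=1, y=1)
print(w)             # the connection has strengthened to roughly 0.5
```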
1950s
- Arthur Samuel of IBM developed a computer program for playing checkers. Because the program had very limited computer memory, Samuel initiated what is known as alpha-beta pruning. His design included a scoring function that used the positions of the pieces on the board to estimate each side’s chances of winning. The program selected its next move using a minimax strategy, an early practical use of what became the standard minimax algorithm (a sketch follows this list).
- Frank Rosenblatt, at the Cornell Aeronautical Laboratory, combined Donald Hebb’s model of brain cell interaction with Arthur Samuel’s machine learning efforts to create the Perceptron (1957).
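Here is the sketch promised above: a compact minimax search with alpha-beta pruning, in the spirit of Samuel’s checkers player. The game interface (the moves, apply, and score functions) is hypothetical, and score stands in for Samuel’s board-position scoring function:

```python
# Minimax with alpha-beta pruning. The caller supplies the game rules:
# moves(state) lists legal moves, apply(state, m) plays one, and
# score(state) is a heuristic value of the position (Samuel's idea).
def minimax(state, depth, alpha, beta, maximizing, moves, apply, score):
    if depth == 0 or not moves(state):
        return score(state)
    if maximizing:
        best = float("-inf")
        for m in moves(state):
            best = max(best, minimax(apply(state, m), depth - 1,
                                     alpha, beta, False, moves, apply, score))
            alpha = max(alpha, best)
            if beta <= alpha:   # alpha-beta cutoff: this branch cannot matter
                break
        return best
    best = float("inf")
    for m in moves(state):
        best = min(best, minimax(apply(state, m), depth - 1,
                                 alpha, beta, True, moves, apply, score))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best
```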
1960s
- Henry J. Kelley, Stuart Dreyfus, Alexey Grigoryevich Ivakhnenko, and Valentin Grigoryevich Lapa created mathematical models that are still used in Deep Learning.
- Natural Language Processing (NLP) started as a way to use computers as translators between Russian and English. Cold War tools? Maybe!
- Feed-forward Neural Networks, Backpropagation, Deep Neural Networks, and Artificial Neural Networks (ANN) all evolved during this decade.
- In 1967, the Nearest Neighbor Algorithm was conceived, marking the beginning of basic pattern recognition. Originally used for mapping routes, it was one of the earliest algorithms applied to finding a reasonably efficient route for the traveling salesperson problem.
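As a hedged illustration of that route-finding heuristic, here is a greedy nearest-neighbor tour builder; it is fast, but not guaranteed to find the optimal route:

```python
# Nearest-neighbor heuristic for the traveling salesperson problem:
# from each city, always visit the closest unvisited one.
import math

def nearest_neighbor_route(cities):
    """cities: dict of name -> (x, y) coordinates. Returns a visiting order."""
    remaining = dict(cities)
    name, pos = remaining.popitem()   # arbitrary starting city
    route = [name]
    while remaining:
        name, pos = min(remaining.items(),
                        key=lambda kv: math.dist(pos, kv[1]))
        del remaining[name]
        route.append(name)
    return route

print(nearest_neighbor_route({"A": (0, 0), "B": (5, 1), "C": (1, 1), "D": (6, 0)}))
```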
1970s ❄️
- The first “AI winter” arrived, with funding cuts limiting both AI and DL research.
1979
- Kunihiko Fukushima developed an artificial neural network, called Neocognitron, that used a hierarchical, multilayered design. This enabled the computer to “learn” to recognize visual patterns.
1980s ❄️
- The second “AI winter” (1985–1993) arrived, which also affected research for Neural Networks and Deep Learning.
Various overly-optimistic individuals had exaggerated the “immediate” potential of Artificial Intelligence, breaking expectations and angering investors. The anger was so intense, the phrase Artificial Intelligence reached pseudoscience status. Fortunately, some people continued to work on AI and DL, and some significant advances were made. (A Brief History of Deep Learning — DATAVERSITY, 2022)
- Yann LeCun provided the first practical demonstration of backpropagation at Bell Labs, combining convolutional neural networks with backpropagation to recognize handwritten digits. The system was eventually used to read the numbers on handwritten checks (a modern sketch of the idea follows this list).
- Natural Language Processing (NLP) experienced a leap in evolution thanks to both a steady increase in computational power and the use of new machine learning algorithms.
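Here is the modern sketch promised above: a small convolutional network trained by backpropagation to classify 28x28 digit images. The architecture, sizes, and dummy batch are illustrative assumptions, not LeCun’s original LeNet:

```python
# A tiny convolutional network trained by backpropagation on digit-sized
# images. Illustrative PyTorch sketch, not the original LeNet.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),   # 10 classes: the digits 0 through 9
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch (a stand-in for real digit data):
images = torch.randn(32, 1, 28, 28)   # batch of 32 grayscale 28x28 images
labels = torch.randint(0, 10, (32,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()    # backpropagation computes the gradients
optimizer.step()   # gradient descent updates the weights
```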
Early 1990s
- Artificial intelligence research shifted its focus to something called Intelligent Agents. These agents can be used for news retrieval services, online shopping, and web browsing. Sometimes, they are referred to as agents or bots. With the use of Big Data programs, they have gradually evolved into digital virtual assistants and chatbots.
- In a 1990 paper titled “The Strength of Weak Learnability”, Robert Schapire introduced the concept of boosting: a family of algorithms whose primary function is to convert weak learners into strong learners.
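As a hedged illustration, here is AdaBoost, a later descendant of Schapire’s construction, via scikit-learn: many shallow “weak” decision trees are combined into a stronger ensemble (in scikit-learn versions before 1.2, the estimator parameter is named base_estimator):

```python
# Boosting in practice: AdaBoost combines many weak learners, here
# depth-1 decision trees ("stumps"), into a stronger classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
stump = DecisionTreeClassifier(max_depth=1)   # a weak learner on its own
model = AdaBoostClassifier(estimator=stump, n_estimators=100, random_state=0)
print(model.fit(X, y).score(X, y))            # accuracy of the ensemble
```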
1995
- Corinna Cortes and Vladimir Vapnik developed the support vector machine (SVM), a supervised model that separates classes of data with the widest possible margin.
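A minimal scikit-learn illustration of the idea, on synthetic two-class data:

```python
# SVM sketch: fit a maximum-margin linear boundary between two
# synthetic clusters of points.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=200, centers=2, random_state=0)
clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))   # training accuracy on the two clusters
```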
1997
- Sepp Hochreiter and Jürgen Schmidhuber developed long short-term memory (LSTM) for recurrent neural networks. The LSTM technique supports learning tasks that require remembering events thousands of steps back, which is important for learning speech (a minimal sketch follows this list).
- Deep Blue defeated Garry Kasparov in a chess match played under tournament regulations, becoming the first computer program to defeat a reigning world champion.
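The LSTM sketch promised above, using PyTorch’s nn.LSTM. The sizes are arbitrary illustrations; the 1,000-step input hints at the long sequences the technique can handle:

```python
# An LSTM processing a long sequence: its gated cell state lets useful
# information survive across thousands of time steps, which is why the
# architecture suits speech. All sizes here are illustrative.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=13, hidden_size=64, batch_first=True)
frames = torch.randn(1, 1000, 13)   # 1,000 steps of 13-dim audio features
outputs, (hidden, cell) = lstm(frames)
print(outputs.shape)                # torch.Size([1, 1000, 64])
```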
2006
- The National Institute of Standards and Technology (NIST) sponsored the “Face Recognition Grand Challenge” and tested popular facial recognition algorithms.
2007
- Long Short-Term Memory (LSTM) Networks began surpassing more established speech recognition programs.
2009
- Fei-Fei Li, an AI professor at Stanford, launched ImageNet, assembling a free database of more than 14 million labeled images. Labeled images were needed to “train” neural nets.
2011
- Apple’s Siri developed a reputation as one of the most popular and successful digital virtual assistants built on natural language processing.
2012
- A machine learning algorithm developed by Google’s X Lab could sort through and find videos containing cats.
The Cat Experiment used a neural net spread over 1,000 computers. Ten million “unlabeled” images were taken randomly from YouTube, shown to the system, and then the training software was allowed to run. At the end of the training, one neuron in the highest layer was found to respond strongly to the images of cats. (A Brief History of Deep Learning — DATAVERSITY, 2022)
2014
- The DeepFace algorithm was developed by Facebook. It recognized people in photographs with the same accuracy as humans.
- The Generative Adversarial Network (GAN) was introduced. In a GAN, two neural networks play a game against each other: a generator tries to produce a photo realistic enough to trick its opponent into believing it is real, while the opponent, a discriminator, looks for flaws. The game is played until the near-perfect photo fools the discriminator.
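A skeleton of that game in PyTorch; the network sizes and the stand-in “real” data are illustrative assumptions:

```python
# The GAN game in skeleton form: generator G tries to fool
# discriminator D, while D learns to tell real images from fakes.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(32, 784)   # stand-in for a batch of real photos
for step in range(100):
    # 1) Train D: label real images 1 and generated images 0.
    fake = G(torch.randn(32, 16)).detach()   # don't backprop into G here
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # 2) Train G: try to make D label its fakes as real (1).
    fake = G(torch.randn(32, 16))
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```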
2015
- Google’s speech recognition program reported a 49 percent increase in performance by using an LSTM trained with Connectionist Temporal Classification (CTC).
2015 … 2023
- Generative models, which produce new data such as images or text, began to gain ground over discriminative models, which classify existing data, and their use became common.
- Many other significant events have occurred, bringing key players in the industry to the forefront.
There are several organizations that focus on AI research, such as:
- IBM AI (prior to 2010)
- Google DeepMind (founded in 2010 and acquired by Google in 2014)
- OpenAI (founded in 2015)
- Meta AI (founded in 2015)
- Google AI (founded in 2017)
- DeepLearning.AI (founded in 2017)
- Azure AI (Azure OpenAI Service launched in 2023)
As we can see, AI is not a recent invention. Its development took many years of study and experimentation, with many brilliant minds contributing to its progress. Today, technology has made AI widely available. It is no longer a military secret or inaccessible science; it is common knowledge and a daily assistant.
However, the question that was asked over 70 years ago still remains unanswered: Can machines think?
References
Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, Volume LIX, Issue 236, pp. 433–460. https://doi.org/10.1093/mind/LIX.236.433
Introducing ChatGPT and Whisper APIs. (2023, March 1). OpenAI. https://openai.com/blog/introducing-chatgpt-and-whisper-apis
AlphaGo. (n.d.). Google DeepMind. https://www.deepmind.com/research/highlighted-research/alphago
A Brief History of Machine Learning — DATAVERSITY. (2021, December 3). DATAVERSITY. https://www.dataversity.net/a-brief-history-of-machine-learning/
A Brief History of Deep Learning — DATAVERSITY. (2022, February 4). DATAVERSITY. https://www.dataversity.net/brief-history-deep-learning/