A perspective on the history of Artificial Intelligence (AI)

Amritendu Roy
4 min read · Jul 11, 2020

Though we primarily concentrate on computer science and mathematics when we try to understand AI, the contributions from philosophy, economics, neuroscience, and psychology have been remarkable throughout the history of AI.

A high-level timeline of AI

An introduction to the timeline:

The history of Artificial Intelligence (AI) consists of original work and research not only by mathematicians and computer scientists; studies by psychologists, physicists, and economists have also been widely drawn upon. The timeline spans from the pre-1950 era of statistical methods to AlphaZero in 2017 and beyond.

The most significant push in the development of technology came during the Second World War, when both the Allied forces and their enemies raced to develop technology that could give them superiority over the other. This resulted in significant funding for research and development at various institutions.

Development and progress:

The timeline starts in 1943, when the work by McCulloch and Pitts on the artificial neuron earned recognition as the first work on AI. Building on McCulloch and Pitts, Donald Hebb demonstrated a simple rule for modifying the connection strengths between neurons, now called Hebbian learning. It is still in use in modified forms and has influenced present-day neural network models. In 1950, Minsky and Edmonds developed the first neural network computer; they used vacuum tubes and an automatic pilot mechanism from a B-24 bomber to simulate a network of 40 neurons. It is remarkable that this work initially faced skepticism over whether it counted as mathematics at all; however, a legend like von Neumann supported it.
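
As a side note on what that rule looks like: in its simplest modern form, the Hebbian update strengthens a connection in proportion to the correlated activity of the two neurons it joins. Below is a minimal sketch in Python; the learning rate, variable names, and toy data are illustrative, not taken from Hebb's original formulation.

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    """One step of the basic Hebbian rule: delta_w = lr * y * x.

    w  : current weight vector (one weight per input)
    x  : pre-synaptic activity (input vector)
    y  : post-synaptic activity (scalar output)
    lr : learning rate (illustrative value)
    """
    return w + lr * y * x

# Toy usage: a single linear neuron driven by a repeated input pattern.
w = np.array([0.05, 0.05, 0.05])    # small initial weights
x = np.array([1.0, 0.0, 1.0])       # pre-synaptic input pattern
for _ in range(10):
    y = w @ x                       # post-synaptic activity
    w = hebbian_update(w, x, y)     # "cells that fire together wire together"
print(w)                            # weights on the active inputs have grown
```

Note that this basic form grows the weights without bound, which is one reason the rule survives today only in modified variants (such as Oja's normalized rule).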

In 1950, Alan Turing introduced the Turing test. His paper "Computing Machinery and Intelligence" was among the most influential works in these early steps. In it, he also proposed developing a "child program": simulating a child's mind and educating it, instead of trying to build an adult mind directly.

John McCarthy first proposed the term "artificial intelligence"; he was one of the most influential figures in the history of AI. The period between 1952 and 1969 saw great enthusiasm in the development and application of AI. One of the main reasons for this enthusiasm was the low baseline of expectations for computers: given the tools of the time, when computers could only do arithmetic operations, any new capability looked like science fiction to many.

The development of the General Problem Solver, designed to imitate the human problem-solving approach, was a great success in this area. Another milestone came in 1959 at IBM, where Nathaniel Rochester and his colleagues created a program that could prove geometric theorems; it generated a lot of buzz, as it outperformed many mathematics students. To overcome the limitations of AI researchers' tools, John McCarthy defined the high-level language Lisp in 1958, and in 1963 he started the AI lab at Stanford. Around 1962, Bernard Widrow enhanced Hebb's learning methods and called his networks adalines, while Frank Rosenblatt called his perceptrons (a comparison with the Hebbian rule is sketched below).
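
To make that enhancement concrete: unlike the purely correlational Hebbian update sketched earlier, the adaline rule (also known as the Widrow-Hoff or least-mean-squares rule) is error-driven, so the weights stop changing once the prediction matches the target. The sketch below is again illustrative Python with made-up toy data; Rosenblatt's perceptron used the same error-driven idea but with a thresholded, binary output.

```python
import numpy as np

def lms_update(w, x, target, lr=0.1):
    """Widrow-Hoff (adaline) rule: delta_w = lr * (target - prediction) * x.

    The update is proportional to the prediction error, so the weights
    settle once the linear output matches the target.
    """
    y = w @ x                          # linear prediction
    return w + lr * (target - y) * x

# Toy usage: learn weights that map each input pattern to its target.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
targets = np.array([1.0, -1.0, 0.0])   # consistent with w = [1, -1]
w = np.zeros(2)
for _ in range(50):                    # a few passes over the toy data
    for x, t in zip(X, targets):
        w = lms_update(w, x, t)
print(w)                               # converges near [1.0, -1.0]
```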

From 1966 to 1973, a reduction in the funds available to researchers impacted the growth of AI. The primary reasons were the lack of results and the limited applicability of many solutions. It became evident that scaling up was not merely a matter of hardware; the computational complexity of the algorithms mattered just as much. This restricted the development of AI in this period.

From 1969 to 1979, researchers changed their strategy and focused on developing domain-specific AI applications rather than general-purpose ones. In 1969, researchers at Stanford developed DENDRAL to infer molecular structure from the information provided by a mass spectrometer. It was the first successful knowledge-intensive system, and it provoked new thinking about a methodology for expert systems that could be applied to other areas of human expertise.

The development of the Prolog language and logic-based reasoning was another path that scientists followed. By 1986, R1, the first successful commercial expert system, was saving its company an estimated $40 million a year. This started a new industry around artificial intelligence; by 1988, the sector was valued at billions of dollars. In the mid-1980s, scientists reinvented backpropagation, inspired by work done by Bryson and Ho in 1969, and the developments around parallel distributed processing provided a much-needed technological push. Many other algorithmic advances moved the research forward: the adoption of hidden Markov models and probabilistic reasoning in intelligent systems provided a different perspective, as did work in the area of reinforcement learning.

Conclusion:

Though we primarily concentrate on computer science and mathematics when we try to understand AI, contributions from philosophy, economics, neuroscience, and psychology have helped develop these systems throughout the field's history. Operations research, Markov decision processes, and game theory influenced many advanced technologies. Cognitive psychology, which models the brain as an information-processing device, is the primary tool for all the development areas related to cognitive science today. Even if we can develop a highly intelligent machine, it is essential to understand how that machine acts in all possible scenarios. Acting rationally is still a subjective notion that depends on the data and the simulations.

Whether a machine can reason beyond its training data, and without any algorithmic bias, is still a big question.

Can we develop a framework that considers humanity's betterment, rather than its own, in an uncertain situation? The concept of beneficial machines, which allow us to take control in unpredictable conditions and whose rationality is defined as maximizing humans' expected utility, may be the solution for a world where humans and machines exist together.

References:

  1. Russell, S. and Norvig, P., 2019. Artificial Intelligence: A Modern Approach. 4th ed.
  2. Goodfellow, I., Bengio, Y. and Courville, A., 2017. Deep Learning.
  3. Science in the News. 2020. The History Of Artificial Intelligence — Science In The News. [online] Available at: <http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/> [Accessed 11 July 2020].
  4. Writer, T., 2020. A Brief History Of Artificial Intelligence. [online] livescience.com. Available at: <https://www.livescience.com/49007-history-of-artificial-intelligence.html> [Accessed 11 July 2020].
  5. Press, G., 2020. A Very Short History Of Artificial Intelligence (AI). [online] Forbes. Available at: <https://www.forbes.com/sites/gilpress/2016/12/30/a-very-short-history-of-artificial-intelligence-ai/#3c2b3ee6fba2> [Accessed 11 July 2020].
  6. Plato.stanford.edu. 2020. Artificial Intelligence (Stanford Encyclopedia Of Philosophy). [online] Available at: <https://plato.stanford.edu/entries/artificial-intelligence/> [Accessed 11 July 2020].

Amritendu Roy

More than 13 years of machine learning experience, with more than ten years leading data science teams across the financial services and technology domains.