History of Artificial Intelligence

Clara Bartels
Sep 12, 2019

Think about image recognition in iPhones, smart speakers powered by Alexa, and self-driving Teslas: these are all forms of artificial intelligence. Artificial intelligence is defined as “a system’s ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation” (Haenlein & Kaplan, 2019). The ideas behind artificial intelligence first emerged in the 1940s; today the field has grown into an academic discipline, entered the business world, and woven itself into our daily lives more than we know.

In 1942, the American science fiction writer Isaac Asimov published his short story Runaround, about a robot developed by the engineers Gregory Powell and Mike Donovan. The story revolves around the Three Laws of Robotics: first, a robot may not injure a human being or, through inaction, allow a human being to come to harm; second, a robot must obey the orders given to it by human beings except where such orders would conflict with the first law; and third, a robot must protect its own existence as long as such protection does not conflict with the first or second law. Asimov’s work inspired scientists to dive into the fields of robotics, artificial intelligence, and computer science.

Over 3,000 miles away, the English mathematician Alan Turing was working on much less fictional problems: he developed a code-breaking machine, the Bombe, for the British government. The Bombe’s purpose was to decipher the Enigma code used by the German army in World War II. The power with which it broke the Enigma code made Turing wonder about the intelligence of such machines. In 1950, he published an article describing how to create intelligent machines and, in particular, how to test their intelligence. The resulting Turing Test is still considered a benchmark for identifying the intelligence of an artificial system: if a human cannot tell the difference between the machine and another human, the machine is classified as intelligent.

The term “artificial intelligence” became official six years later, in 1956, when Marvin Minsky and John McCarthy hosted the approximately eight-week-long Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI). The workshop brought together researchers from various fields, among them those who would later become the “founding fathers” of artificial intelligence, to create a new research area with the goal of building machines able to simulate human intelligence.

The ELIZA computer program, created between 1964 and 1966 by Joseph Weizenbaum at MIT, was a natural language processing tool able to simulate a conversation with a human and one of the first programs to attempt to pass the Turing Test. The optimism did not last, however. In 1973, the U.S. Congress began to strongly criticize the high spending on artificial intelligence research. In the same year, the British mathematician James Lighthill published a report questioning the optimistic outlook given by some researchers: he argued that machines would only ever reach the level of an “experienced amateur” in games and that common-sense reasoning would always be beyond their abilities. After this, the British government (with the exception of three universities: Edinburgh, Sussex, and Essex) and the U.S. government ended nearly all support for artificial intelligence research. Although the Japanese government later began to heavily fund artificial intelligence research, and the U.S. DARPA responded with a funding increase of its own, no major advances were made in the following years.

One of the main reasons for the lack of support and progress in the field was that early programs such as ELIZA and the General Problem Solver tried to replicate human intelligence as Expert Systems: collections of rules built on the assumption that human intelligence can be formalized as “if-then” statements. Such systems perform poorly in areas that require more than basic “if-then” reasoning, such as facial recognition. An Expert System has no mechanism for interpreting external data, learning from that data, and using those learnings to achieve specific goals through flexible adaptation; lacking these basic mechanics, it cannot be classified as true artificial intelligence under the definition above.
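To make this concrete, here is a minimal sketch of an Expert System as a handful of hand-written “if-then” rules (the facts and rules are hypothetical, chosen purely for illustration):

```python
# A toy Expert System: all "knowledge" is hard-coded as if-then rules.
# The facts and rules below are hypothetical, for illustration only.

facts = {"has_fever": True, "has_cough": True, "has_rash": False}

# Each rule pairs a set of required conditions with a conclusion.
rules = [
    ({"has_fever", "has_cough"}, "patient may have the flu"),
    ({"has_rash"}, "patient may have an allergy"),
]

def run_expert_system(facts, rules):
    conclusions = []
    for conditions, conclusion in rules:
        # A rule fires only if every one of its conditions is a known true fact.
        if all(facts.get(condition, False) for condition in conditions):
            conclusions.append(conclusion)
    return conclusions

print(run_expert_system(facts, rules))  # ['patient may have the flu']
```

The brittleness is immediate: any situation the rule authors did not anticipate simply does not exist for the system, and nothing in it learns from new data.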

Artificial neural networks, a learning approach that loosely replicates how neurons in the human brain process signals, take the opposite approach: instead of following hand-written rules, they learn from data (a minimal sketch of a single neuron appears below). They made a comeback in the form of Deep Learning, most visibly in 2017, when AlphaGo beat the world champion Ke Jie at Go, a board game so complex that it is highly difficult even for experienced human players. The history of artificial intelligence has shaped us as a society and the workplace as a whole.
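As a closing illustration of the contrast with “if-then” rules, here is a minimal sketch of a single artificial neuron: a weighted sum of inputs passed through an activation function, where the weights are obtained from data rather than written by hand (all numbers here are arbitrary and purely illustrative):

```python
import math

# A single artificial neuron: output = activation(weights . inputs + bias).
# The weights of a real network are learned from data; these are arbitrary.

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Sigmoid activation squashes the sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-total))

inputs = [0.5, 0.8]      # e.g. two pixel intensities
weights = [0.4, -0.2]    # in practice, tuned automatically during training
bias = 0.1

print(neuron(inputs, weights, bias))  # ~0.53
```

Deep Learning stacks many layers of such neurons and adjusts their weights automatically from examples, which is how systems like AlphaGo acquire abilities no one could write down as rules.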

https://gibsic.wordpress.com/2018/07/01/7-decades-of-artificial-intelligence-history-2morrowknight/
https://www.youtube.com/watch?v=056v4OxKwlI

Haenlein, M., & Kaplan, A. (2019). A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence. California Management Review, 61(4), 5–14. doi: 10.1177/0008125619864925
