Vera Jane Seegers
4 min read · Jan 23, 2018

Artificial Intelligence is a hot topic at the moment, because over the years we have gained the knowledge to turn AI into real products that are changing our world. But how did it actually start? Since Elektro, the robot presented at the 1939 World's Fair, AI has grown into a diverse range of indispensable everyday technologies. We could give you a list of the inventions that form the core of AI today (read here), but how these inventions came about is even more interesting.

It started in 1943 with Warren S. McCulloch and Walter Pitts, who formulated a logical calculus of the ideas immanent in nervous activity. This became the basic principle of today's neural network architecture (Computer History Museum, 2015). The field got its name in 1956, when the term AI was coined for a summer conference at Dartmouth College organized by computer scientist John McCarthy (Harvard University, 2017). The question was how to tackle AI. Some preferred a bottom-up approach with neural networks, others a top-down approach in which computers would be pre-programmed with the rules that govern human behaviour. The latter won, and from then on the top-down model was developed further by researchers such as Marvin Minsky, who founded the first AI laboratory at the Massachusetts Institute of Technology (MIT) (BBC, 2014).
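The McCulloch-Pitts unit itself is simple enough to sketch in a few lines of code: it fires only when the weighted sum of its binary inputs reaches a threshold, which already suffices to compute logical functions such as AND and OR. The sketch below is a minimal illustration; the weights and thresholds are hypothetical choices, not values from the 1943 paper.

```python
# A minimal sketch of a McCulloch-Pitts neuron: it fires (outputs 1)
# only when the weighted sum of its binary inputs reaches a threshold.
# Weights and thresholds here are illustrative, not from the 1943 paper.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With unit weights and a threshold of 2, the neuron computes logical AND.
print(mcculloch_pitts_neuron([1, 1], [1, 1], threshold=2))  # -> 1
print(mcculloch_pitts_neuron([1, 0], [1, 1], threshold=2))  # -> 0

# Lowering the threshold to 1 turns the very same unit into logical OR.
print(mcculloch_pitts_neuron([1, 0], [1, 1], threshold=1))  # -> 1
```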

In the early 1970s, people were losing faith in AI. Millions had been spent on various robots, but they had not made much progress. The US Congress was very critical, and professor James Lighthill came to a damning conclusion: in his view, machines would never play chess beyond the level of an "experienced amateur", and supposedly simple tasks such as face recognition would remain out of reach. Funding for AI was cut drastically during this period, which became known as the AI winter (BBC, 2014). Today the contrary has been proven: Apple's Face ID scan even became a personal password last year, chess programs surpassed human champions when IBM's Deep Blue defeated world champion Garry Kasparov in 1997 (Ensmenger, 2012), and AI is now used to carry out medical analyses (WildML, 2017). Read here more about 2017's AI highlights. So where did the new faith in AI come from after the AI winter? Commercial interest in AI grew because commercial systems became less ambitious: instead of aiming for general intelligence, they focused on narrower tasks. One of the most successful was R1, operated by Digital Equipment Corporation to help configure orders for new computer systems. By 1986 the system was earning the company an estimated $40 million a year (Kepos, 453–569).

But there was still one problem: existing expert systems could not crack the problem of imitating biology. In 1990 AI scientist Rodney Brooks published his paper "Elephants Don't Play Chess", inspired by advances in neuroscience that had begun to explain the mysteries of human cognition. In it, Brooks argued that the top-down pre-programming approach was wrong, and he pushed the field back toward bottom-up approaches such as neural networks. Rather than following hand-written rules, neural networks learn from accumulated experience: networks evolved to contend with natural light spectra, for example, learned to relate the objective world to subjective colour experience, and to make sense of visual perspectives, without any explicit feature detection or image representation, and even with randomly varying neural circuitry (Morgenstern, 2014).

The "real" problems of AI were slowly being solved. In 2008 a new feature came out on the Apple iPhone: Google's speech recognition app. This was a major breakthrough, because in the early days accuracy had never risen above 80%. Thousands of powerful computers, running parallel neural networks, learned to spot patterns in the vast volumes of data streaming from Google's many users (BBC, 2014).
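To make the bottom-up idea concrete, here is a minimal sketch (assuming Python with NumPy) of a tiny two-layer network that learns the XOR pattern from examples instead of from pre-programmed rules. The network size, learning rate, and iteration count are illustrative choices, not taken from any of the cited systems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four example inputs and the XOR pattern we want the network to discover.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights and biases for a 2-4-1 network.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute the network's current guesses.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: propagate the error through both layers.
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

    # Gradient descent: nudge weights and biases to reduce the error.
    W2 -= 0.5 * hidden.T @ d_output
    b2 -= 0.5 * d_output.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0)

print(output.round(2))  # close to [[0], [1], [1], [0]]: the pattern was learned
```

No rule in this sketch says what XOR is; the network discovers the pattern purely from the examples, which is the bottom-up principle the paragraph above describes.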

From that moment in 2008, man versus machine became the so-called "fight" of the 21st century. Intelligent machines have become an everyday reality that is changing our human lives, and we believe in AI today more than ever. Over the last year the focus has shifted to deep learning frameworks that optimize machine learning, so that we can develop applications that address major human problems, such as medical ones. Google has invested more than a billion dollars in self-driving car software, and datasets will have to become more open in order to give us access to a precise reflection of our "offline" world (WildML, 2017).

References:

Anyoha, Rockwell. "The History of Artificial Intelligence." Harvard University, 2017. Accessed 18-01-2018.
http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/

Britz, Denny. "AI and Deep Learning in 2017." WildML, 2017. Accessed 6-01-2018. http://www.wildml.com/2017/12/ai-and-deep-learning-in-2017-a-year-in-review/

BBC iWonder. "AI: 15 Key Moments in the Story of Artificial Intelligence." BBC, 2014. Accessed 18-01-2018.
http://www.bbc.co.uk/timelines/zq376fr

Computer History Museum. "Timeline of Computer History." Computer History Museum, 2015. Accessed 18-01-2018.
http://www.computerhistory.org/timeline/ai-robotics/

Ensmenger, Nathan. "Is Chess the Drosophila of Artificial Intelligence? A Social History of an Algorithm." Social Studies of Science Vol. 42, Issue 1 (2012): 5–30.

Kepos, Paula, ed. International Directory of Company Histories. Vol. 6. St. James Press, 1992.

Morgenstern, Yaniv, Mohammad Rostami, and Dale Purves. "Properties of Artificial Networks Evolved to Contend with Natural Spectra." PNAS Vol. 111, Suppl. 3 (2014): 10868–10872.
