“You are a programmer, Harry”

Pedro Meira
Published in Time to Work
4 min read · Aug 15, 2019

Machine Learning is equal to Magic Learning

Chapter 1 — The History of Magic Learning

The first neural network model appeared in 1943, when neurophysiologist Warren McCulloch and mathematician Walter Pitts presented a paper showing how networks of simplified neurons could carry out logical calculus. They modeled these neurons using electrical circuits, and the neural network was born.

McCulloch and Pitts' 1943 paper.
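To make the idea concrete, here is a minimal sketch of a McCulloch and Pitts style unit (my illustration in modern Python, not code from the 1943 paper): binary inputs, fixed weights, and a hard threshold are already enough to implement simple logic gates.

```python
# A minimal McCulloch-Pitts unit: binary inputs, fixed weights,
# and a hard threshold. The weights and threshold below are
# illustrative, chosen so the unit computes logical AND.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted input sum reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_neuron((a, b), (1, 1), threshold=2))
```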

1950: Alan Turing published Computing Machinery and Intelligence, in which he asked "Can machines think?" (a question we still wrestle with) and proposed the Turing Test, or "Imitation Game": a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.

1951: Marvin Minsky and Dean Edmonds built the first randomly wired neural network learning machine, the Stochastic Neural Analog Reinforcement Computer (SNARC), an analog machine that mimicked the way organic brains learn. It learned from experience and was used to search a maze, like a rat in a psychology experiment.

1958: Frank Rosenblatt designed the perceptron, the first trainable artificial neural network (ANN). Its main goal was pattern and shape recognition.
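Here is a rough sketch of Rosenblatt's learning rule in modern Python rather than his original hardware (the AND-gate data, learning rate, and epoch count are my illustrative choices): the weights are nudged toward the target whenever the model misclassifies an example, and on linearly separable data like AND the rule is guaranteed to converge.

```python
import numpy as np

# Perceptron learning rule on a tiny linearly separable problem
# (logical AND). Data, learning rate, and epochs are illustrative.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(10):                          # a few passes over the data
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += lr * (target - pred) * xi       # nudge weights on mistakes
        b += lr * (target - pred)

print([1 if xi @ w + b > 0 else 0 for xi in X])  # [0, 0, 0, 1]
```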

1959: Arthur Samuel created the first computer program that could learn as it ran: a program that played checkers.

1959: Another extremely early pair of neural networks came from Bernard Widrow and Marcian Hoff at Stanford University. The first, called ADALINE, could detect binary patterns: reading a stream of bits, it could predict what the next one would be. The next generation, called MADALINE, could eliminate echo on phone lines, so it had a useful real-world application, and it was reportedly still in use decades later.
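ADALINE trained its weights on the raw linear output using what became known as the LMS (least mean squares) rule. Here is a toy version of that bit-prediction idea (the repeating stream and window size are my illustrative choices, not Widrow and Hoff's actual setup):

```python
import numpy as np

# LMS/ADALINE sketch: predict the next bit of a repeating stream
# from a window of the previous bits.
stream = [0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
window = 3

X = np.array([stream[i:i + window] for i in range(len(stream) - window)])
y = np.array(stream[window:], dtype=float)

w, lr = np.zeros(window), 0.1
for _ in range(50):
    for xi, target in zip(X, y):
        out = xi @ w                       # linear output, no threshold
        w += lr * (target - out) * xi      # LMS: descend the squared error

preds = [1 if xi @ w > 0.5 else 0 for xi in X]
print(preds, list(y.astype(int)))          # predictions match the stream
```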

1969: Neural network research stagnated after Marvin Minsky and Seymour Papert published Perceptrons, which highlighted two key issues with the computational machines that processed neural networks:

I) Basic perceptrons were incapable of processing the exclusive-or (XOR) circuit, as the sketch after this list demonstrates.

II) The computers didn’t have enough processing power to effectively handle the work required by large neural networks.
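The first limitation is easy to reproduce: XOR is not linearly separable, so the perceptron rule from the earlier sketch never finds weights that classify all four cases, no matter how long it trains.

```python
import numpy as np

# A single-layer perceptron cannot fit XOR: no line separates
# the two classes, so training never converges.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])                 # XOR targets

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(1000):                      # far more passes than AND needed
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds, "vs targets", list(y))        # still wrong on some inputs
```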

1970s: The first AI "winter". The failure of machine translation and the overselling of AI's capabilities led to reduced funding. The winter lasted for several years, but in the early 1980s the field of AI experienced another high.

1981: Voldemort's first downfall. Harry Potter became "The Boy Who Lived".

Early 1980s: After the effects of the first AI winter had faded, a new era of AI began, this time with far more effort focused on creating commercial products. As the hype around AI grew, researchers feared that the field might not deliver the expected results.

In the following years, claims of what AI systems were capable of slowly had to face reality. The expert systems at the center of the revolution faced many issues; in 1984, John McCarthy criticized them because they lacked common sense and knowledge about their own limitations.

Early 1990s: The second AI winter. General interest in AI declined as expectations could not be met, funding for AI research decreased, and many AI companies closed their doors. The AAAI (Association for the Advancement of Artificial Intelligence) conference, which attracted over 6,000 visitors in 1986, quickly shrank to just 2,000 by 1991. Similarly, the number of AI-related articles began to fall in 1987, reaching its lowest point in 1995.

1997: Deep Blue, an IBM computer, defeats world chess champion Garry Kasparov in a six-game match, the first time a computer beat a reigning world champion under standard tournament conditions. Deep Blue worked by searching 6 to 20 moves ahead at each position, using an evaluation function tuned with thousands of grandmaster games to find paths to checkmate. AI's popularity increased greatly as a result.
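The look-ahead at the heart of such engines is minimax search. Below is a toy sketch of the idea on a simple subtraction game (my illustration of the general technique, not Deep Blue's engine, which ran on specialized chess hardware with a handcrafted evaluation function and a depth cutoff):

```python
# Minimax on a subtraction game: players alternately remove 1 or
# 2 sticks, and whoever takes the last stick wins. The game is
# small enough to search to the end; chess engines instead stop
# at a depth limit and apply a static evaluation there.

def minimax(sticks, maximizing):
    """Score a position: +1 if the maximizing player wins, -1 otherwise."""
    if sticks == 0:
        # The player who just moved took the last stick and won.
        return -1 if maximizing else 1
    scores = [minimax(sticks - take, not maximizing)
              for take in (1, 2) if take <= sticks]
    return max(scores) if maximizing else min(scores)

# From 4 sticks, the side to move wins by taking 1 (leaving 3).
best = max((1, 2), key=lambda take: minimax(4 - take, False))
print(best, minimax(4, True))  # 1 1
```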

1998: Battle of Hogwarts — The final conflict of the Second Wizarding War.

2002: iRobot launches Roomba, one of the first autonomous robotic vacuum cleaners.

2005: Boston Dynamics presents the BigDog robot.

2008: Google launches Voice Search based on Natural Language Processing (NLP).

2011: Apple launches Siri.

2011: IBM developed Watson, a computer system capable of answering questions posed in natural language; that year it defeated two all-time champions on the quiz show Jeopardy!.

2014: At a contest marking the 60th anniversary of Turing’s death, 33% of the event’s judges thought that the chatbot Eugene Goostman was human.

2016: Neural networks are used by the likes of Netflix to recommend what you should binge-watch and by smartphones for voice assistant tools. Google DeepMind's AlphaGo used them to defeat a human professional at the ancient game of Go.

2017: Pedro Meira presents a paper using Neural Networks to forecast power values in electrical system data.

2019: Pedro Meira researches ML-based classification techniques to identify fraudulent consumers of power utilities.



Lesson 2: “There and back again”
