Machine Learning… Artificial Intelligence… Data Mining… Data Science… Data Analytics…
Well, if you are even remotely connected to the technological field, chances are you have heard of the world going gaga over these buzzwords. Though they are used across a multitude of diverse application areas, at the core they all mean the same thing: making sense of vast amounts of data in a way that yields intelligence to act upon.
Although Machine Learning has gained prominence only recently, owing to the exponential rate of data generation and the technological advances that support it, its roots lie as far back as the 17th century. People have been attempting to make sense of data and process it for quick insights for ages.
Let me take you on an interesting journey down the history of Machine Learning: how it all began and how it came to be what it is today.
1642 - Mechanical Adder
One of the first mechanical adding machines was designed by Blaise Pascal. It used a system of gears and wheels such as those found in odometers and other counting devices. One might wonder what a mechanical adder is doing in the history of Machine Learning, but look closely and you will realise that it was one of the first human efforts to automate data processing.
Pascal was led to develop a calculator to ease the laborious arithmetical calculations his father had to perform as the supervisor of taxes in Rouen. He designed the machine to add and subtract two numbers directly and to perform multiplication and division through repeated addition or subtraction.
It had a very interesting design. The calculator had spoked metal wheel dials, with the digits 0 through 9 displayed around the circumference of each wheel. To input a digit, the user placed a stylus in the corresponding space between the spokes and turned the dial until a metal stop at the bottom was reached, similar to the way the rotary dial of a telephone is used. This displayed the number in the windows at the top of the calculator. Then, one simply redialed the second number to be added, causing the sum of both numbers to appear in the accumulator.
One of its most prominent features was the carry mechanism: when a dial moved past 9 and rolled over to 0, it automatically carried 1 to the next dial.
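The carry mechanism is easy to appreciate as a minimal sketch in modern code. The dial representation and function below are illustrative inventions, not a faithful model of Pascal's gears: each dial holds one digit, stored from least to most significant, and turning a dial past 9 resets it to 0 and carries 1 to the next dial.

```python
def add_to_dials(dials, position, amount):
    """Add `amount` to the dial at `position`, propagating carries."""
    dials = list(dials)  # leave the caller's dials untouched
    dials[position] += amount
    while position < len(dials) and dials[position] > 9:
        dials[position] -= 10      # the dial rolls over past 9 to 0
        position += 1
        if position < len(dials):
            dials[position] += 1   # carry 1 to the next dial
    return dials

# The machine shows 095; dialing 7 on the units wheel yields 102.
print(add_to_dials([5, 9, 0], 0, 7))  # [2, 0, 1], read as 102
```

Note how the carry ripples: adding 7 to the units dial pushes the tens dial from 9 past 0, which in turn advances the hundreds dial.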
1801 - First Data Storage through the Weaving Loom
Storing data was the next challenge to be met. One of the first uses of stored data was in a weaving loom invented by Joseph Marie Jacquard, which used metal cards punched with holes to position threads. A collection of these cards encoded a program that directed the loom, allowing a process to be repeated with a consistent result every time.
Jacquard’s loom utilized interchangeable punched cards that controlled the weaving of the cloth so that any desired pattern could be obtained automatically. These punched cards were adopted by the noted English inventor Charles Babbage as an input-output medium for his proposed analytical engine and were used by the American statistician Herman Hollerith to feed data to his census machine. They were also used as a means of inputting data into digital computers but were eventually replaced by electronic devices.
1847 - Boolean Logic
Logic is a method of constructing arguments or reasoning toward true or false conclusions. George Boole created a way of representing this using Boolean operators (AND, OR, NOT), with responses represented as true or false, yes or no, or in binary as 1 or 0. Web searches still use these operators today.
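Boole's idea maps directly onto modern code. The sketch below illustrates the operators acting on binary truth values, with 1 standing for true and 0 for false; the function names are just for readability.

```python
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a

# Print the full truth table for AND and OR.
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  a AND b={AND(a, b)}  a OR b={OR(a, b)}")
```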
1890 - Mechanical System for Statistical Calculations
Herman Hollerith created the first combined system of mechanical calculation and punched cards to rapidly tabulate statistics gathered from millions of people. Known as the tabulating machine, it was an electromechanical device designed to assist in summarizing information stored on punched cards. The 1880 U.S. census had taken eight years to process, and since the U.S. Constitution mandates a census every ten years, something faster than an ever-larger staff was needed. The tabulating machine was developed to help process data for the 1890 U.S. Census. Later models were widely used for business applications such as accounting and inventory control. It spawned a class of machines known as unit record equipment, and with it the data processing industry.
1950 - The Turing Test
Alan Turing, an English mathematician who pioneered artificial intelligence during the 1940s and 1950s, created the “Turing Test” to determine if a computer has real intelligence. To pass the test, a computer must be able to fool a human into believing it is also human.
According to this kind of test, a computer is deemed to have artificial intelligence if it can mimic human responses under specific conditions.
In the basic Turing Test, there are three points. Two of the points are operated by humans, and the third point is operated by a computer. Each point is physically separated from the other two. One human is designated as the questioner. The other human and the computer are designated the respondents. The questioner interrogates both the human respondent and the computer according to a specified format, within a certain subject area and context, and for a preset length of time (such as 10 minutes). After the specified time, the questioner tries to decide which point is operated by the human respondent, and which one is operated by the computer. The test is repeated many times. If the questioner makes the correct determination in half of the test runs or less, the computer is considered to have artificial intelligence, because the questioner regards it as “just as human” as the human respondent.
1952 - First Computer Learning Program
In 1952, Arthur Samuel of IBM wrote the first computer learning program. The program played the game of checkers, and the computer improved the more it played, studying which moves made up winning strategies in a 'supervised learning mode' and incorporating those moves into its program.
1957 - The Perceptron
Frank Rosenblatt designed the perceptron, an early type of neural network. A neural network acts like your brain: the brain contains billions of cells called neurons that are connected together in a network. The perceptron mimics a single neuron, weighing up its inputs to make a simple yes-or-no decision; webs of such units, combined in a larger program, can solve more complex problems.
1967 - Pattern Recognition
The “nearest neighbor” algorithm was written, allowing computers to begin using very basic pattern recognition. When the program was given a new object, it compared it with the existing data and classified it according to its nearest neighbour, meaning the most similar object in memory. The same idea could be used to map a route for a travelling salesman, starting at a random city but ensuring all cities are visited during a short tour.
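The classification idea fits in a few lines. The sketch below uses hypothetical 2-D points with made-up labels: a new point simply takes the label of the closest stored example.

```python
def nearest_neighbour(memory, query):
    """Return the label of the stored point most similar to `query`."""
    def dist_sq(p, q):
        # Squared Euclidean distance; the square root is not needed
        # for comparing which point is closest.
        return sum((a - b) ** 2 for a, b in zip(p, q))
    point, label = min(memory, key=lambda item: dist_sq(item[0], query))
    return label

# Hypothetical labelled examples already "in memory".
memory = [((0, 0), "red"), ((1, 1), "red"), ((5, 5), "blue"), ((6, 5), "blue")]
print(nearest_neighbour(memory, (0.5, 0.2)))  # red
print(nearest_neighbour(memory, (5.5, 5.5)))  # blue
```

There is no training step at all: the "model" is just the stored data, which is why this family of methods is sometimes called memory-based or lazy learning.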
1979 - The Stanford Cart
Students at Stanford University invented the “Stanford Cart”, which could navigate obstacles in a room on its own. The Stanford Cart was a remotely controlled, TV-equipped mobile robot.
A computer program was written which drove the Cart through cluttered spaces, gaining its knowledge of the world entirely from images broadcast by an on-board TV system. The Cart used several kinds of stereopsis to locate objects around it in three dimensions and to deduce its own motion. It planned an obstacle-avoiding path to a desired destination on the basis of a model built with this information. The plan changed as the Cart perceived new obstacles on its journey.
1981 - Explanation-Based Learning
Gerald DeJong introduced explanation-based learning (EBL) in a journal article published in 1981. In EBL, prior knowledge of the world is provided through training examples, which makes this a type of supervised learning. Given an instruction as to what goal needs to be achieved, the program analyzes the training data and discards irrelevant information to form a general rule to follow.
For example, in chess if the program is told that it needs to focus on the queen, it will discard all pieces that don’t have immediate effect upon her.
1990s - Machine Learning Applications
In the 1990s we began to apply machine learning in data mining, adaptive software and web applications, text learning, and language learning. Scientists began creating programs for computers to analyze large amounts of data and draw conclusions, or “learn”, from the results.
Machine Learning came to be so called because advancing technology made it possible to write programs in such a way that, once written, they can keep on learning on their own and evolve as new data gets introduced, with no human intervention required.
2000s - Adaptive Programming
The new millennium brought an explosion of adaptive programming. Anywhere adaptive programs are needed, machine learning is there. These programs are capable of recognizing patterns, learning from experience, and constantly improving themselves based on the feedback they receive from the world. One example of adaptive programming is deep learning, where algorithms can “see” and distinguish objects in images and videos. This is the core technology behind Amazon Go stores, where people are automatically billed as they walk out without having to stand in checkout queues.
This more or less sums up the history of how Machine Learning came to be what it is today. Hope you had a great time travelling down the Machine Learning memory lane!