The Humans behind the Evolution of Artificial Intelligence

Beena Ammanath
4 min read · Nov 5, 2017


A Brief History of Artificial Intelligence

Alan Turing, the British mathematician, is widely recognized as one of the first people to formally propose the idea of artificial intelligence, in 1950. However, the idea of a thinking machine existed as early as 2500 B.C., when the Egyptians sought mystical advice from talking statues. In the Cairo Museum, there is a bust of Re-Harmakis, an Egyptian god, whose neck reveals the secret of his genius: an opening at the nape just big enough to hold a priest. Automata, the predecessors of today’s robots, date back to ancient Egyptian figurines with movable limbs, like those found in Tutankhamen’s tomb.


It took Charles Babbage’s design of the Analytical Engine in 1833 to make artificial intelligence a real possibility. However, Ada Lovelace, who wrote the first ever computer program for the Analytical Engine, did not believe that a computer could ever be as intelligent as a human. In 1843, she argued that until a machine could originate an idea it was not designed to produce, it could not be considered as intelligent as a human. Lovelace’s ideas found their way into modern computing via Alan Turing.

During World War II, while working on decoding German communications, Turing discovered Lovelace’s notes on her translation of Menabrea’s paper on the Analytical Engine. These were critical documents that helped shape his thinking. In 1943, Turing theorized that machines could one day imitate human intelligence, “contrary to Lady Lovelace’s objections”. In his 1950 essay, Computing Machinery and Intelligence, he proposed a test to determine a machine’s ability to “think” like a human. The Turing Test calls for a panel of judges to review typed answers to questions that have been put to both a computer and a human. If the judges can make no distinction between the two sets of answers, the machine may be considered intelligent.
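To make the setup concrete, here is a minimal Python sketch of the imitation game. The respondent functions and the judge are hypothetical stand-ins invented for illustration, not anything from Turing’s paper; the point is the protocol: if the judge’s accuracy at spotting the machine stays near a coin flip, the machine passes.

```python
import random

# Hypothetical stand-ins: in a real test these would be a person at a
# teletype and the machine under evaluation.
def human_respondent(question: str) -> str:
    return "Let me think... probably yes."

def machine_respondent(question: str) -> str:
    return "Let me think... probably yes."

def naive_judge(question: str, answer_a: str, answer_b: str) -> str:
    """Guess which answer ('a' or 'b') came from the machine."""
    # Identical answers leave the judge nothing to go on but a coin flip.
    return random.choice(["a", "b"])

def run_imitation_game(questions, judge, trials=1000):
    """Return the judge's accuracy at spotting the machine.

    Accuracy near 0.5 means the judge cannot tell human from machine,
    which is the passing condition Turing proposed.
    """
    correct = 0
    for _ in range(trials):
        q = random.choice(questions)
        pair = [("human", human_respondent(q)),
                ("machine", machine_respondent(q))]
        random.shuffle(pair)  # the judge never sees the labels
        guess = judge(q, pair[0][1], pair[1][1])
        machine_slot = "a" if pair[0][0] == "machine" else "b"
        correct += guess == machine_slot
    return correct / trials

if __name__ == "__main__":
    accuracy = run_imitation_game(["Can machines think?"], naive_judge)
    print(f"judge accuracy: {accuracy:.2f}  (0.50 = indistinguishable)")
```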

Though his ideas were ridiculed at the time, they provoked serious thought about intelligent machines, and the term “artificial intelligence” entered popular awareness after Turing’s death.

In 1956, Allen Newell, J. C. Shaw and Herbert Simon introduced the first AI program, the Logic Theorist, built to prove the basic theorems of logic set out in Principia Mathematica by Bertrand Russell and Alfred North Whitehead. For one of them, Theorem 2.85, the Logic Theorist found a new and more elegant proof than the one its human authors had given.
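In essence, the Logic Theorist searched for chains of inference rules connecting axioms to a target theorem. As a rough illustration only, here is a toy forward-chaining prover over made-up implication rules; the axioms, rules and goal below are invented for this sketch, and the real program used backward, heuristic search over Principia Mathematica’s actual axioms.

```python
# Toy prover: repeatedly applies modus ponens (from P and P -> Q,
# conclude Q) until the goal appears or nothing new can be derived.
def prove(axioms, rules, goal):
    known = set(axioms)
    proof = []
    changed = True
    while changed and goal not in known:
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)
                proof.append(f"{premise} -> {conclusion}  (modus ponens)")
                changed = True
    return proof if goal in known else None

# Invented example: from axiom p and rules p -> q, q -> r, derive r.
steps = prove(axioms={"p"}, rules=[("p", "q"), ("q", "r")], goal="r")
print("\n".join(steps))
```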

It was also in 1956 that artificial intelligence (AI), as both a term and a science, was born at the Dartmouth conference. The conference brought together many luminaries of the field at the time: John McCarthy, creator of the popular AI programming language LISP and later director of Stanford University’s Artificial Intelligence Laboratory; Marvin Minsky, leading AI researcher and Donner Professor of Science at MIT; Claude Shannon of Bell Laboratories, the pioneering architect of information theory; Nathaniel Rochester, designer of the IBM 701 and creator of the first assembler; and many more. McCarthy is credited with having proposed the term “artificial intelligence”.

The attendees of the Dartmouth conference saw the Logic Theorist demonstrated and concluded that the first true “thinking machine” had arrived: a machine that appeared to know more than its human programmers. By the end of the two-month conference, artificial intelligence was surrounded by grandiose expectations and predictions.

In 1957, Isaac Asimov, author of the Three Laws of Robotics, predicted that AI (for which he used the term “cybernetics”) would spark an intellectual revolution. In his foreword to Pierre de Latil’s Thinking by Machine, he wrote: “Cybernetics is not merely another branch of science. It is an intellectual revolution that rivals in importance the earlier Industrial Revolution. Is it possible that just as a machine can take over the routine functions of human muscle, another can take over the routine uses of human mind? Cybernetics answers, yes.”

In 1978, McCarthy wrote, “human-level AI might require 1.7 Einsteins, 2 Maxwells, 5 Faradays and .3 Manhattan Projects”. He suggested that the critical aspects of human-level intelligence could be expressed in a compact intelligible theory, analogous to a physical theory like special relativity, which could then be applied to artificial intelligence.

Several AI thought leaders predicted that, before the dawn of the 21st century, computers would dominate our lives: a world with household robots, machines that taught, and computers conversing in multiple languages, sorting all our books and music, and even composing music of their own. AI has progressed considerably since the Dartmouth conference, and some of these predictions have come true, but the ultimate AI system is yet to be invented.

The ideal AI system would be able to simulate every aspect of the human brain, including creativity, empathy, social intelligence, consciousness and real-world reasoning, not just the parts that can be mathematically formalized, so that it would be indistinguishable whether a response came from a human or a machine. We have made tremendous progress, especially in the past few years, but we still have a long way to go. It’s a truly exciting time for the humans continuing to influence the development of artificial intelligence!

(Source links embedded in the article)
