Artificial Intelligence over a cup of coffee — The Dawn

From inception to reality and beyond

Nilesh Barla
11 min read · Oct 4, 2019

Every morning, as soon as I get up, I make sure to make myself a cup of coffee. How soothing it is, especially when it is still dark and the fresh air of the dawn invades your innocent mornings. I love it.


Along with the invasion of the fresh air and coffee, the morning also brings the shadow of a new beginning. It is quite fascinating to grasp knowledge and learn something valuable, maybe read a good book, meditate, or even go for a run. These fresh moments of life are very fragile indeed, and if we don't handle them with care we might lose some of them before even realising what we have lost.

I always make sure that I make the best use of my mornings. And studying is one of the things that I like doing the most.

This brings me to an opportunity to put forward a non-technical narrative of the subject that is trending everywhere, Artificial Intelligence, which I am pursuing, researching and teaching. I want to put forward the ideas and notions people have had about AI, how it is evolving, where it is, and where it is heading. I will also try to cover the effects of AI in the near future, let's say CE 2050.

This article will contain three parts: the past, the present and the future. In this part, I will try to portray the struggles AI had to overcome to be where it is today: the past.

But to begin with, we need to ask ourselves what is Artificial Intelligence?

Artificial Intelligence

The term is derived from two words, Artificial and Intelligence. Let's break them down.

Artificial describes something that is not real, a kind of fake, because it is simulated. The simplest examples I can think of are artificial grass and artificial light. Artificial grass is not real grass; it substitutes for the real thing for various reasons. It is often used for sports because it is more resistant and lasts longer than real grass, and it is also easier to care for. Artificial light, like a bulb or an LED, can be used whenever we want, unlike sunlight, which is bound to the time of day and can only be used when time allows. But that is not the point I want to make. The point is that there are reasons why some things are artificial and substitute for real things.

“Anything that is inspired by nature and is reproduced by human beings through some fashion or process is known as Artificial.”


Intelligence is a very complex term. It can be defined in many different ways: logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity and problem-solving.

We call ourselves, humans, intelligent because we do all of these things. We perceive our environment, learn from it and take action based on what we discover.

The same applies to animals. The interesting point about intelligence in animals is that there are many different species, and because of that we can compare intelligence between species.

In both cases — human intelligence and animal intelligence — we talk about Natural intelligence.

Beyond humans and animals, there has also been debate about plant intelligence. Intelligence in plants looks quite different from that of humans or animals. The main reason is that plants do not have a brain or nervous system, yet they still react to their environment. Plant intelligence is a very interesting topic on its own, because it is not instantly visible through reactions such as movement or sound.

But one question arises. If both animals and human beings have brains, then why do human beings stand at the top of the evolutionary ladder?

I found the answer while reading a book called ‘Homo Deus’ by Yuval Noah Harari. He mentions that

“Humans nowadays completely dominate the planet not because the individual human is far smarter and more nimble-fingered than the individual chimp or wolf, but because Homo sapiens is the only species on earth capable of cooperating flexibly in large numbers. Intelligence and toolmaking were very important as well. But if humans had not learned to cooperate flexibly in large numbers, our crafty brains and deft hands would still be splitting flint stones rather than uranium atoms”.

Coming back to our topic: we should keep in mind that when we talk about Artificial Intelligence (AI), we refer to a subfield of Computer Science.

AI consists of several subfields, which are shown in the image below.

Source: Deep Learning — Ian Goodfellow

To understand more about Artificial Intelligence, we need to look at its history to see what it is capable of and how its past relates to the present.

The dawn of Artificial Intelligence

Artificial consciousness dates back to ancient Greece, where philosophers and mythical stories paved the way to what we now call Artificial Intelligence.

“I believe that whatever the universe is about to see in the future already exists in our past and present. Nothing is new. It's all hidden in the Womb of the Universe.”

Many authors still believe the same, and one amongst them is Pamela McCorduck, author of Machines Who Think. She describes in her book how old fictional stories like Frankenstein, published in the early 1800s, involved a hideous sapient creature made in an unorthodox scientific experiment: an artificial human. The fiction authors had already prophesied, in dreams and visions, the world we are living in today.

One of the interesting things to note is that most of these scientific ideas and notions come from fiction and imagination. And that is what has driven the human race to evolve and survive.


The term ROBOT was also derived from a work of fiction, a 1920 play named Rossumovi Univerzální Roboti (Rossum's Universal Robots), better known as R.U.R., written by the Czech writer Karel Čapek. The play is quite interesting for several reasons. Apart from introducing the term robot, it also tells the story of the creation of robots, a kind of artificial intelligence, which at first seems to have a positive effect on humans.

Fiction portrays one side of the coin. What is the other side? Well, these crazy ideas would not have come into being if scientific innovation and experiments were not involved, which brings us to the other side of the coin: science. Hence, we will look at some of the factual and conceptual growth and struggles of AI.

Even before the title of the new branch of study was decided, scientists were trying to build algorithms that would mimic the human brain, including its functionality.

One such model was invented by Alan Turing.

Source: https://www.britannica.com/biography/Alan-Turing

Alan Turing was born on 23rd June 1912 in London. He is widely known for breaking the Enigma code that Nazi Germany used to communicate during the Second World War.

Turing's studies also led to his theory of computation, which deals with how efficiently problems can be solved. He presented his idea in the model of the Turing machine, which is still a popular term in Computer Science today. The Turing machine is an abstract machine which, despite the model's simplicity, can implement any algorithm's logic.
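To make the idea concrete, here is a tiny sketch of a Turing machine in Python (my own illustration, not from Turing's paper): a tape, a read/write head, and a transition table. This toy machine simply flips every bit on its tape and halts at the first blank cell.

```python
# A minimal sketch of a Turing machine: a tape, a head, and a
# transition table. This toy machine flips every bit on the tape
# (a binary inverter), then halts when it reads a blank cell.

def run_turing_machine(tape):
    table = {
        # (state, symbol) -> (symbol to write, head move, next state)
        ("flip", "0"): ("1", +1, "flip"),
        ("flip", "1"): ("0", +1, "flip"),
        ("flip", "_"): ("_", 0, "halt"),  # blank cell: stop
    }
    tape = list(tape) + ["_"]  # append a blank so the machine can halt
    head, state = 0, "flip"
    while state != "halt":
        write, move, state = table[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run_turing_machine("10110"))  # -> 01001
```

Even a machine this simple illustrates the core of the model: all computation reduces to reading a symbol, writing a symbol, moving the head, and changing state.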

Because of simultaneous discoveries in neurology, information theory and cybernetics, researchers, along with Alan Turing, came up with the idea that:

“it is possible to build an electronic brain which can think and develop a human intuition”.

Some years after the end of World War 2, Turing introduced his widely known Turing Test, an attempt to define machine intelligence. The idea behind the test is that a machine or system (a computer) is called intelligent if, while a machine A and a person B communicate through natural language, a second person C, the so-called interrogator, cannot detect which of the communicators A or B is the machine.

Inspired by theories such as the Turing Test, many different approaches took shape in the following years.

Artificial Intelligence was initially inspired by the desire to understand the working of the brain. The modern idea of Artificial Intelligence goes beyond the neuroscientific perspective, though the initial idea was still inspired by the brain.

Warren S. McCulloch (left), Walter Pitts (right). Source: Google

The McCulloch-Pitts neuron was an early model of the brain based on a linear function. It could only recognise two categories of input, by testing whether f(x, w) was positive or negative. Setting the weights in this model was a manual operation, a tedious job; any wrongly chosen weight and the experiment would go wrong. This turned out to be its biggest flaw.
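Here is a minimal sketch in Python (my own illustration) of such a linear threshold unit. Note that the weights and threshold are picked by hand, exactly the tedious manual step described above; there is no learning involved.

```python
# A minimal sketch of a McCulloch-Pitts-style linear threshold unit.
# The weights and threshold are set by hand; the unit only learns
# nothing on its own, which was the model's biggest flaw.

def linear_unit(x, w, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    s = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if s >= threshold else 0

# Hand-picked weights that make the unit compute logical AND:
w = [1, 1]
threshold = 2

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", linear_unit(x, w, threshold))
```

With these hand-set values the unit computes AND; choosing different weights by hand would give OR or NOT, but a single wrong value breaks the behaviour entirely.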

A new idea of intelligence was required. Hence we come to the Dartmouth Conference.

The founding fathers of AI. Source: Google

The Dartmouth conference was the dawn of Artificial Intelligence. Just as the shadow of a new beginning was approaching the human race, teaching it which questions to ask, the summer of 1956 was going to revolutionise the world in a profound way. A handful of scientists met to talk about the work they were doing toward making machines behave intelligently. In the popular conception, a computer was a high-speed numerical calculator, a view that was only partly correct. These scientists, who were mathematicians, psychologists and electrical engineers, wanted to extend the application of the computer and make it behave intelligently. Most of the ideas relied on mathematical equations. The four scientists, John McCarthy, a young assistant professor of mathematics at Dartmouth who organised the conference; Marvin Minsky, a Harvard Junior Fellow in mathematics and neurology; Nathaniel Rochester, manager of information research at IBM's research center in Poughkeepsie, New York; and Claude Shannon, a mathematician at Bell Telephone Laboratories who was already well known for his statistical theory of information, agreed on a proposal which said:

“We propose that a two-month, ten-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” — Pamela McCorduck, Machines Who Think

The conference was not restricted to mathematicians and neurologists; a wide range of participants took part, from mathematics, biology, engineering, statistics and linguistics, as well as emerging fields of science.

John McCarthy coined the term ‘Artificial Intelligence’, which was not appreciated by many. Some said it sounded unreal and bogus. Even Newell and Simon, who would become founding fathers of AI, did not like the term. Shannon preferred ‘Automata Studies’, the title he and McCarthy had used for a book of papers they edited together, and argued that McCarthy's term was not appropriate for the scientific community. But McCarthy did not like that name, insisted on ‘Artificial Intelligence’ for the Dartmouth conference, and hence the term was born in spite of the disagreement.


The Dartmouth conference opened new doors in the field of AI. In 1958, Frank Rosenblatt invented the perceptron, which became the first model able to learn its own weights. The perceptron was a single-layer neural network with its own limitations. Since perceptrons belong to the class of linear models, they share the same limitations; most notably, they cannot learn the XOR function. This let the critics barge in on the scientists and their inventions, and it put the research into a deep freeze, leaving almost no one working in this area.
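The XOR limitation is easy to see for yourself. Below is a small sketch in Python (my own illustration) of Rosenblatt's perceptron learning rule: it learns the linearly separable OR function perfectly, but no choice of weights and bias in a single linear unit can ever separate XOR.

```python
# Rosenblatt's perceptron learning rule on a single linear unit.
# It converges on OR (linearly separable) but can never reach
# perfect accuracy on XOR (not linearly separable).

def train_perceptron(samples, epochs=100, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = target - pred
            # Update weights only when the prediction is wrong.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def accuracy(samples, w, b):
    correct = sum(
        (1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0) == t
        for x, t in samples
    )
    return correct / len(samples)

OR  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train_perceptron(OR)
print("OR accuracy: ", accuracy(OR, w, b))    # reaches 1.0
w, b = train_perceptron(XOR)
print("XOR accuracy:", accuracy(XOR, w, b))   # never reaches 1.0
```

No matter how long the XOR training runs, a single linear decision boundary cannot put (0,1) and (1,0) on one side and (0,0) and (1,1) on the other; fixing this required the hidden layers that came much later.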

Other notable inventions during the first era of AI were:

  • Logic Theorist (Newell, Shaw and Simon, 1955)
  • Chess Machine (Alex Bernstein, 1956)

Even though there was dispute and disarray in the AI community, there were remnants who still believed they could find the answers that were unknown. One of them was the British-Canadian scientist Geoffrey Hinton.

Yann LeCun (left), Geoffrey Hinton (centre), Yoshua Bengio (right). Source: The Verge
Dean Pomerleau with ALVINN. Source: The Verge

Hinton's AI journey took off in the mid-1980s when he joined the University of Toronto, where he started working on multilayer perceptrons, networks with a larger number of hidden layers. Using the idea of a multilayer perceptron, Dean Pomerleau built a self-driving car in the late 1980s. He called it ALVINN, the Autonomous Land Vehicle In a Neural Network.

Other lines of research, like kernel machines and graphical models, displayed good results on many important tasks.

In the 1990s, Yann LeCun built a system that could recognise handwritten digits, using an approach that is still in use today: Convolutional Neural Networks.

The Canadian Institute for Advanced Research (CIFAR) played a major role in the advancement of AI research. Neural Computation and Adaptive Perception (NCAP) was a CIFAR initiative led by Geoffrey Hinton, Yoshua Bengio and Yann LeCun. This program involved neuroscientists and experts in human and computer vision. Since then there has been no stopping.

As computer science and the IT industry grew, new microprocessors and improved hardware began to hit the market, which ultimately led to faster computation and quicker results. Another driving force for AI research was data. Since the internet was blooming and every industry and entrepreneur wanted to try their hand at it, digital footprints started to increase and data mining started to flourish, which in turn led to the growth of a new era of Artificial Intelligence.

The dawn of AI covers the following artificial neural networks, in chronological order:

  • Perceptron (Rosenblatt, 1958, 1962)
  • Adaptive linear element (Widrow and Hoff, 1960)
  • Neocognitron (Fukushima, 1980)
  • Early back-propagation network (Rumelhart et al., 1986b)
  • Recurrent neural network for speech recognition (Robinson and Fallside, 1991)
  • Multilayer perceptron for speech recognition (Bengio et al., 1991)
  • Mean field sigmoid belief network (Saul et al., 1996)
  • LeNet-5 (LeCun et al., 1998b)
  • Echo state network (Jaeger and Haas, 2004)
  • Deep belief network (Hinton et al., 2006)
  • GPU-accelerated deep neural network (Raina et al., 2009)
  • Unsupervised convolutional networks (Jarrett et al., 2009)
  • GPU-accelerated multilayer perceptron (Ciresan et al., 2010)
  • OMP-1 network (Coates and Ng, 2011)
  • Distributed autoencoder (Le et al., 2012)
  • Multi-GPU convolutional network (Coates et al., 2013)
  • COTS HPC unsupervised convolutional network (Coates et al., 2013)
  • GoogLeNet (Szegedy et al., 2014a)

AI, like any other technology, has been through a lot of difficulties. But the best part is that it is still new and emerging. I recall the era of quantum physics, the period between the late 1800s and the mid-1900s. Quantum physics, with all its might, was growing and blooming, and the pioneers of physics were the heroes of that time; now it is artificial intelligence. And the best part is that it is all happening in front of our eyes, and we are witnessing the unfathomable.

That's all for this part of the article. We will find out more about modern approaches to AI in the next part.

Faith + Hope + Love


Nilesh Barla

Founder @PerceptronAI who loves to research, build and teach. At times I paint, play guitar and run.