Part 1 of 3 | Chronicle of Artificial Intelligence

Humans vs. AI: The Rise

Uncovering the topsy-turvy journey of artificial intelligence, from its creation to predictions about its future

Md Islam · Published in ILLUMINATION · 10 min read · Feb 24, 2023

Created with DALL·E by the author

Generative AI is taking the world by storm. We are witnessing an unprecedented surge in the use of artificial intelligence worldwide. Some are even predicting an inevitable doomsday, fueled by sci-fi fantasies.

But don’t you wonder how it all began?
Did such magical advancement happen overnight?
Where does it leave us, the citizens of this newfound digital world order?

I decided to dig deeper into AI and uncover these mysteries. And in the end, I will shed some light on the future of this battle of minds.

I will focus on three major aspects of this revolutionary technology, one in each Part of the story.

· Part 1: The groundbreaking developments that led to the current state of AI

· Part 2: Human Intelligence and Artificial Intelligence come head to head

· Part 3: Dilemmas and predictions on the future of AI

Before we move any further, let’s explore some basic terminologies:

What is Artificial Intelligence (AI)?

AI is the ability of machines to perform tasks that usually need human intelligence. It is based on algorithms and computer programs that simulate human thought processes. Artificial intelligence can recognize patterns, make predictions, and even learn without supervision.

What is a Neural Network?

An artificial neural network is a series of algorithms that recognizes patterns in data, loosely inspired by the way the human brain works through its network of neurons. Neural networks power applications like speech recognition, natural language processing, and predictive analytics.

What is Machine Learning?

Machine learning is a subfield of artificial intelligence. It involves developing algorithms that enable computers to learn from data. These algorithms pick up patterns and relationships through experience, then use what they have learned to make predictions or decisions on new, unseen data.
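
To make that concrete, here is a tiny supervised-learning sketch in Python using scikit-learn (my choice of library; the article itself names no tools). The "hours studied, hours slept, did they pass" data is made up purely for illustration.

```python
# Fit a simple model on labelled examples, then predict on unseen ones.
from sklearn.tree import DecisionTreeClassifier

X_train = [[1, 4], [2, 8], [6, 7], [8, 5], [9, 8]]  # features: hours studied, hours slept
y_train = [0, 0, 1, 1, 1]                            # labels: 1 = passed, 0 = failed

model = DecisionTreeClassifier()
model.fit(X_train, y_train)             # "learning from experience"
print(model.predict([[7, 6], [1, 9]]))  # predictions for new, unseen students
```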

Timeline of the Invention and Development of AI

This section will uncover the landmarks that brought us to the current state of the tech. I tried to highlight breakthroughs without going into minute details. The main purpose of this piece is to provide a chronological understanding of the development of AI.

Created by Author with Canva

820 — Invention of the Algorithm

It all began with the Persian polymath Al-Khwarizmi. Around 820, he described systematic, step-by-step procedures for solving mathematical problems, the essence of what we now call an algorithm. In fact, the word "algorithm" comes from the Latinized form of his name.

1642 — Invention of the Mechanical Calculator

Blaise Pascal, a French mathematician and physicist, developed a machine known as Pascal's Calculator. It could perform addition and subtraction, making it one of the first devices to carry out arithmetic mechanically by following fixed rules.

1830s — Invention of the Computer

English mathematician Charles Babbage designed the first mechanical general-purpose computer, the Analytical Engine, in the 1830s. It used punched cards to feed in instructions and data for its calculations.

1936 — Invention of the Turing Machine

Alan Turing was an English mathematician and computer scientist, often called the father of computer science. In 1936, he described the Turing Machine, a theoretical device that could carry out any computation that can be written down as a step-by-step procedure. This idea laid the foundation for modern computing.

1943 — Foundation of Neural Network

Neurophysiologist Warren McCulloch and logician Walter Pitts founded the concept of the neural network. They proposed it in their 1943 paper "A Logical Calculus of the Ideas Immanent in Nervous Activity," which presented a mathematical model of how neurons in the human brain work. The paper was significant because it provided a theoretical foundation for neural networks and sparked interest in using them for artificial intelligence.

1950 — Turing test

The Turing test is a test of a machine's ability to exhibit intelligent behavior. Alan Turing proposed it in 1950 in his paper "Computing Machinery and Intelligence." In the test, a human evaluator holds a natural language conversation with a machine and another human without knowing which is which. The machine passes if the evaluator cannot reliably tell them apart. The test became a landmark benchmark for assessing progress in AI.

1956 — Birth of Artificial Intelligence

The term "Artificial Intelligence" was first used by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon in a conference proposal. The 1956 Dartmouth Conference that followed marked the beginning of AI as a formal academic field. There, researchers discussed the possibility of creating machines that could think like humans.

1958 — Invention of the Perceptron

Frank Rosenblatt, an American psychologist, invented the Perceptron, an artificial neural network that could recognize simple patterns. He is sometimes credited as a forefather of deep learning.

The Perceptron received input signals, weighted them, and combined them to produce an output. Perceptrons are particularly well suited to binary classification tasks. For example, a perceptron could classify images as either "cats" or "not cats" based on the presence or absence of certain features. The perceptron is not a very complex neural network, but it paved the way for more advanced techniques like deep learning.
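
To give a feel for how simple the original idea is, here is a minimal perceptron sketch in Python with NumPy. The AND-gate data, learning rate, and epoch count are my own illustrative choices, not details from Rosenblatt's work.

```python
# A tiny perceptron: weighted inputs plus a bias, thresholded to 0 or 1,
# trained with Rosenblatt-style updates. Toy data: the logical AND function.
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    w = np.zeros(X.shape[1])   # one weight per input feature
    b = 0.0                    # bias term
    for _ in range(epochs):
        for xi, target in zip(X, y):
            prediction = 1 if np.dot(w, xi) + b > 0 else 0
            update = lr * (target - prediction)   # nudge toward the correct label
            w += update * xi
            b += update
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # "both inputs on" is the only positive class
w, b = train_perceptron(X, y)
print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])  # expected: [0, 0, 0, 1]
```

Swap the hard threshold for a smooth activation and stack many of these units, and you are most of the way to the networks behind modern deep learning.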

1960s — Development of Expert Systems

During the 1960s, researchers began developing expert systems: computer programs able to perform tasks that usually require human expertise. Edward Feigenbaum and his colleagues at Stanford University created the first expert system, called "Dendral." It solved problems in organic chemistry by analyzing mass spectrometry data and suggesting chemical structures that could explain the data. It was one of the first successful applications of knowledge-based systems.
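
For a flavor of how a knowledge-based system reasons, here is a toy rule-based sketch in Python. The if-then rules are invented for illustration and are nowhere near Dendral's chemistry knowledge; the point is simply that facts plus rules produce new conclusions.

```python
# A toy "expert system": keep applying if-then rules to the known facts
# until nothing new can be concluded (forward chaining).
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_a_doctor"),
]

def infer(initial_facts):
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            # fire a rule only when all its conditions are known facts
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "short_of_breath"}))
# includes 'flu_suspected' and, via chaining, 'see_a_doctor'
```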

1965 — Creation of ELIZA, the first chatbot

Joseph Weizenbaum at MIT developed ELIZA, one of the earliest natural language processing programs. ELIZA simulated a conversation with a psychotherapist, using simple pattern matching and scripted responses.

ELIZA worked by scanning the input text for specific keywords and patterns, then generating responses from pre-written scripts. For example, if a user typed "I am sad," ELIZA might respond with "Why do you feel sad?" or "Tell me more about your feelings." It created an illusion of conversation without any real understanding of the context. ELIZA is often cited as the earliest example of a "chatbot," or conversational agent, and it inspired many similar programs in the following decades.
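
Here is a toy responder in the spirit of ELIZA, written in Python. The keyword rules and canned replies are my own stand-ins, not Weizenbaum's original DOCTOR script, but they show the basic pattern-matching trick.

```python
# ELIZA-style responses: scan the input for a known pattern and slot the
# captured words into a canned reply; fall back to a generic prompt otherwise.
import re

RULES = [
    (r"\bi am (.+)", "Why do you feel {0}?"),
    (r"\bi feel (.+)", "Tell me more about feeling {0}."),
    (r"\bmy (.+)", "Why do you mention your {0}?"),
]

def respond(text):
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I am sad"))            # -> Why do you feel sad?
print(respond("The weather is odd"))  # -> Please go on.
```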

Created by Author with Canva

1970s — AI WINTER

An AI winter is a period of reduced interest and funding in artificial intelligence R&D.

The term is most often used to describe two periods in which AI investment declined sharply.

The first AI winter occurred in the 1970s, when the potential of AI was not matched by significant progress in practical applications. Government agencies and private organizations cut funding for AI research, and many researchers left the field.

The second AI winter set in by the late 1980s and early 1990s, after the expert-systems boom fell short of expectations and several high-profile AI projects failed.

Today, AI research and development is once again thriving, with significant progress in machine learning, natural language processing, and robotics. This period is sometimes called the AI SPRING.

1980s — Second Wave of Expert Systems

After a decade-long lull, a second wave of expert systems arrived in the 1980s. It was a period of significant growth and development in artificial intelligence (AI). Several key developments characterized this second wave:

  • The use of knowledge engineering,
  • The development of shell systems, which provide a ready-made framework for building expert systems,
  • The expansion of applications, and
  • The emergence of commercial products.

Nevertheless, the limitations of expert systems also became clear during this time, leading to a decline in interest and funding by the end of the decade and sparking the second AI WINTER.

1989 — Invention of the World Wide Web (WWW)

British computer scientist Sir Tim Berners-Lee invented the World Wide Web, a.k.a. the web, in 1989 while working at CERN (the European Organization for Nuclear Research) in Switzerland. His vision was to create a system for sharing and accessing information regardless of location or computer type.

The first web page was published in 1991. It consisted of a description of the World Wide Web project and instructions on how to use it. Over the next few years, the web soared in popularity, fueled by graphical web browsers such as Mosaic and Netscape Navigator, which made viewing and navigating web pages easier. Today, the web is an integral part of modern life.

1995 — Launch of AltaVista

AltaVista was one of the earliest and most popular search engines on the web, launched in 1995 by Digital Equipment Corporation (DEC). It was significant for artificial intelligence because it was among the first search engines to use advanced algorithms and natural language processing (NLP) techniques to improve search accuracy.

This allowed users to enter search terms in plain language instead of complex Boolean operators. AltaVista also used sophisticated algorithms to analyze the content and structure of web pages and to rank results by relevance to the user's query. Such use of AI was still rare on the web at the time.

2006 — Launch of AWS (Amazon Web Services): Enter the Era of Cloud Computing

Cloud computing refers to delivering on-demand services, such as computing power, storage, and applications, over the internet, without the need for local infrastructure or hardware. It enables businesses and individuals to access powerful computing resources from anywhere and to scale those resources up or down as demand changes.

Amazon Web Services (AWS) is one of the world's largest and most popular cloud computing platforms. It was launched on March 14, 2006, as a subsidiary of Amazon.com. In the beginning, AWS offered only a few services, including Simple Storage Service (S3) and Elastic Compute Cloud (EC2). Over time, it expanded into many more areas, such as databases, analytics, machine learning, and the Internet of Things (IoT). AWS has been instrumental in the development and adoption of AI, providing the infrastructure and tools needed to develop, train, and deploy AI models at scale.

AWS offers a variety of pre-built algorithms and supports frameworks such as TensorFlow and MXNet, which help developers build custom models for use cases like natural language processing, image and video recognition, and predictive analytics.
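
As a small illustration, here is how storing a training dataset in S3 might look from Python using boto3, AWS's official SDK. The bucket and file names are hypothetical, and running this for real requires an AWS account with configured credentials.

```python
# Upload a local dataset to S3 (AWS's object storage), then list the bucket
# contents to confirm it arrived. Bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")

s3.upload_file(
    Filename="training_data.csv",      # local file (hypothetical)
    Bucket="my-example-bucket",        # must already exist in your account
    Key="datasets/training_data.csv",  # object name inside the bucket
)

response = s3.list_objects_v2(Bucket="my-example-bucket", Prefix="datasets/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```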

Mid-2000s — Rise of the IoT (Internet of Things)

The Internet of Things (IoT) concept has been around since the late 1990s. But the term “Internet of Things” was first coined in 1999 by Kevin Ashton. He was a British technology pioneer working at Procter & Gamble at the time.

Ashton used the term to describe connecting everyday objects to the internet so that companies could track and manage inventory and supply chains. However, the concept did not gain widespread attention until the mid-2000s, when wireless networking, sensor technology, and cloud computing made it possible to connect a wide range of devices and objects to the internet.

The IoT is a growing field, with billions of connected devices and objects worldwide. Combined with AI, it has the potential to reshape the world in more ways than one.

2011 — Birth of Siri

Siri is a virtual assistant introduced by Apple in 2011 as part of the iPhone 4S release. The underlying technology was developed by Siri Inc., a spin-off of SRI International, a research institute based in California; Apple acquired the company in 2010.

Siri uses NLP and machine learning algorithms to understand and respond to user requests. It performs tasks such as setting reminders, sending messages, and making phone calls. It was one of the first mainstream applications of AI in consumer devices.

One could say Siri is ELIZA's more sophisticated and successful offspring. After all, ELIZA was one of the first programs to hold a conversation using natural language processing. ELIZA paved the way, but Siri stole the spotlight!

2012 — Development of AlexNet

AlexNet is a convolutional neural network (CNN). Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton developed it in 2012. It was a significant breakthrough in computer vision and had a major impact on the development of AI.

AlexNet achieved state-of-the-art results on the ImageNet Large Scale Visual Recognition Challenge, a benchmark competition for image classification. It outperformed previous approaches by a wide margin and demonstrated the power of deep learning and convolutional neural networks.

It also demonstrated the importance of using a large dataset for training. In the case of AlexNet, it was the ImageNet dataset, consisting of over one million images.
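
For the curious, a pretrained AlexNet is still one import away today. Below is a minimal sketch using PyTorch's torchvision; the image path is a placeholder and the preprocessing values are the standard ImageNet ones.

```python
# Classify a single image with AlexNet pretrained on ImageNet.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()

# Standard ImageNet preprocessing: resize, crop, convert, normalize
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("cat.jpg")           # placeholder image path
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    logits = model(batch)               # scores for the 1,000 ImageNet classes
print(logits.argmax(dim=1))             # index of the predicted class
```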

It helped spark a renewed interest in deep learning and fueled the growth of the AI industry.

2018 — Development of Google BERT

Google BERT (Bidirectional Encoder Representations from Transformers) is an NLP model. A team of researchers led by Jacob Devlin developed it in 2018. BERT changed the field of NLP and AI.

BERT is a pre-trained language model. It uses a deep neural network architecture called Transformers. It is designed to understand the context of words in a sentence by analyzing the words that come before and after each word. This allows it to understand the meaning of complex sentences and phrases better.

What sets BERT apart from previous NLP models is its ability to handle bidirectional language processing. It can consider the entire context of a sentence rather than only the preceding or following words. BERT was trained on a large corpus of text data, which allowed it to learn the patterns and relationships between words in natural language.
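
A quick way to see that bidirectional context in action is the fill-mask task. The sketch below uses the Hugging Face transformers library (my choice of tooling, not something from the original paper): BERT scores candidate words for the blank using the words on both sides of it.

```python
# Ask BERT to fill in a masked word; it uses context from both directions.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# "Paris is the ..." and "... of France" together point strongly to "capital"
for candidate in fill_mask("Paris is the [MASK] of France."):
    print(candidate["token_str"], round(candidate["score"], 3))
```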

The development of BERT was significant in AI because it demonstrated the power of pre-training models on large datasets. Its success has led to the development of other pre-trained language models such as GPT-2 and GPT-3. Those are the precursors of ChatGPT, which is a tale for another time.

Since the launch of ChatGPT in November 2022, the world of AI has moved to a whole other level. The field has advanced so fast and spawned so many generative tools that the last three months feel like three years.

Finally, it feels like AI has hit the right spot and is heading toward its much-anticipated apex.

Or is it?

Find out in Part 2 how, for all its glory, AI has raised eyebrows time and again when humans and AI come head to head.

Till then, Happy Reading!


Entrepreneur, Writer, and former executive at a Fortune 500. Lover of Poetry and a Dreamer in Disguise. Feel free to contact for an exciting collab. Cheers!