The AI Odyssey: Exploring the Past, Present, and Future of Artificial Intelligence

Viktoria Aghabekyan
Published in AI Odyssey · 13 min read · Oct 16, 2023


“Come, my friends. ’Tis not too late to seek a newer world.” Ulysses, Alfred Tennyson¹

What do you know about Artificial Intelligence, and where do you know it from? When you hear ‘Artificial Intelligence’, do you think of a robot revolution, or of something you read in a buzzword-saturated article (or perhaps just its headline, if you are impatient)?

We live in revolutionary times: researchers use AI to extract words from ancient scrolls burned by Vesuvius, generative AI is used to make real news anchors appear to report fake stories, and models improve image quality in video streaming or help autonomous vehicles identify road hazards in real time. It is both invigorating and intimidating to live in a reality where we understand so little about something that is rapidly evolving and integrating itself into our day-to-day lives.

This is the reason why the Bocconi Statistics and Data Science Student Association is presenting the AI Odyssey newsletter, where AI-relevant concepts will be explained to you in a digestible and structured manner. This newsletter is organized like a mind map: to understand the more complex topics, you should be acquainted with the more basic building blocks.

We shall start from the very beginning. What is AI and what lies at the core of one of the hottest buzzwords today?

AI’s Journey Through Time: From Fiction to Reality

The concept of an artificially intelligent robot became widely known through science fiction in the 20th century. However, the idea of the robot appeared a bit earlier. The first robot, a self-acting spinning mule, appeared in British factories in 1835 and was a “machine apparently instinct with the thought, feeling, and tact of the experienced workman”², an invention intended to end the strikes by workers in the cotton-spinning industry.

The word robot itself comes from the old Church Slavonic word robota, meaning “servitude” or “forced labour”; it has its roots in the central European system in which a tenant’s rent was paid in labour.³ As you might expect, news of the self-acting spinning mule, nicknamed “Iron Man”, was not taken lightly by the unions.

A noteworthy example hinting at the potential threat of technological progress to humanity is found in Samuel Butler’s satirical novel “Erewhon; or, Over the Range” (1872). Like many utopian tales, this novel is set in a seemingly ideal world — Erewhon, a fictional land cleverly named as an anagram of ‘nowhere’.

Here is a fun, but unrelated, fact: the word utopia was a pun by Sir Thomas More (1478–1535), coined from the Greek ou-topos, meaning ‘nowhere’, which sounds almost identical to the Greek eu-topos, meaning ‘a good place’.⁴ Something to reflect on.

Erewhon is one of the celebrated works of literature addressing the rapid evolution of machinery. One of its most prominent chapters, ‘The Book of the Machines’, explains that machinery has been banned in Erewhon because it had become evident that machines were evolving too quickly. A work this widely read surely reflected the anxieties of its readers.

Think about it: it took organic life millions of years to reach its current level of development, while the Rise of the Machines, which began in the mid-18th century, has been progressing and accelerating exponentially, irreversibly transforming humanity.

Figure 1. The exponential development of technology

In high school, I read Aldous Huxley’s Brave New World, a satirical novel depicting an authoritarian, technologically advanced society that presents itself as a utopia. One of its main objectives was to warn of the perils of rapid technological progress. The title comes from Shakespeare’s The Tempest, where the character Miranda exclaims, ‘O wonder! How many goodly creatures are there here! How beauteous mankind is! O brave new world, That has such people in ’t!’⁵ In the context of Huxley’s novel, the phrase “brave new world” is used sarcastically: a warning that a superficially ideal, machine-assisted world would come at the cost of the true spectrum of human life.

To prevent myself from delving further into the Rise of the Machines or dystopian novels addressing it, I will add links at the end of the article for some further reading on the subject.

All the warnings of a robot revolution that we have been giving and receiving, in every form of human expression from films such as I, Robot and The Terminator to the works of literature mentioned above, create a certain perception of AI. However, I believe we would fear it less if we understood it better.

As I already mentioned, in the first half of the 20th century the world had only a faint idea of what Artificial Intelligence was, drawn from the science fiction genre. By the 1950s, a generation of scientists and mathematicians had embraced the idea of AI as part of their cultural awareness. Among these forward-thinkers was Alan Turing, who is often referred to as ‘the father of modern computing’.⁶ Turing raised a fundamental question: if humans can use information and reasoning to solve problems and make decisions, why couldn’t machines do the same? This question formed the logical basis for his 1950 paper, ‘Computing Machinery and Intelligence’, in which he explored the idea of building intelligent machines.

“We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again I do not know what the right answer is, but I think both approaches should be tried. We can only see a short distance ahead, but we can see plenty there that needs to be done.”⁷

However, realising this idea’s potential was both costly (in the early 1950s, leasing a computer could cost up to $200,000 a month) and beyond the technology of the day (computers before 1949 could only execute commands, not store them).

Let us skip ahead. From 1957 to 1974, following the 1956 Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), a landmark event at which top researchers from various fields agreed that AI was feasible, AI began to thrive: computers became faster, cheaper and able to store more, and machine learning algorithms improved. In 1958, Frank Rosenblatt developed the perceptron, which became the foundation for the Neural Networks we use today.⁸

“An IBM 704 — a 5-ton computer the size of a room — was fed a series of punch cards. After 50 trials, the computer taught itself to distinguish cards marked on the left from cards marked on the right.”⁹

In 1966, Joseph Weizenbaum created ELIZA, a program capable of engaging in conversation with humans and an early step towards machines interpreting natural language.

In 1979, Kunihiko Fukushima proposed the neocognitron, a hierarchical, multilayered network used for pattern recognition.

In 1998, LeCun et al. published ‘Gradient-Based Learning Applied to Document Recognition’, which reviewed various methods applied to handwritten character recognition.¹⁰

I am sure you can observe the trend.

While these historical threads provide intriguing glimpses into the evolution of AI, I’ll now pivot to the more technical aspects of this fascinating field. However, if you’re hungry for more insights into AI’s past, stay tuned for an upcoming section in the AI Odyssey dedicated to its rich history.

Now, having sailed through the history behind AI, let’s dive into the technicalities.

AI Unboxed

What is artificial intelligence? A number of definitions have surfaced over the time that AI has been around but, as defined by John McCarthy in his paper ‘What Is Artificial Intelligence?’, “It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”¹¹

What is Machine Learning (ML)? ML is a subset of AI that “focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy.”¹² ML is the bedrock of numerous innovative products we use today, such as the Netflix algorithm (a mix of content-based and collaborative filtering-based systems), self-driving cars, virtual personal assistants like Siri and Alexa, and medical diagnosis and treatment recommendations.

ML algorithms are classified into three main categories: Supervised Learning, Unsupervised Learning, and Reinforcement Learning.

Supervised learning learns the relationship between input (X variables) and output (Y variable). It is “defined by its use of labeled datasets to train algorithms that classify data or predict outcomes accurately”. Supervised learning is categorised into 1) Classification and 2) Regression algorithms.¹³

To help you visualise this more clearly, think of all the spam emails you receive. A supervised learning algorithm such as a decision tree, a random forest, or a support vector machine can learn to detect spam e-mails and filter them out, as in the sketch below. If you are unfamiliar with these terms, they might sound like a foreign language, but do not worry about what they all mean yet; we will elaborate on them further along the journey.
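
To make this concrete, here is a minimal sketch of the idea, assuming scikit-learn and a tiny, made-up set of e-mails. A real spam filter would need far more data and careful feature engineering, and the choice of a decision tree here is purely illustrative.

```python
# Minimal sketch: spam detection as supervised learning (illustrative data only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

emails = [
    "Win a free prize now",                      # spam
    "Limited offer, claim your cash today",      # spam
    "Meeting rescheduled to Monday",             # not spam
    "Lecture notes attached, see you in class",  # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam; the labels make this supervised

vectorizer = CountVectorizer()        # turn raw text into word-count features (X)
X = vectorizer.fit_transform(emails)

model = DecisionTreeClassifier(random_state=0).fit(X, labels)

new_email = vectorizer.transform(["Claim your free cash prize"])
print(model.predict(new_email))       # e.g. [1] -> flagged as spam
```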

Classification is a type of supervised ML in which the algorithm learns from data to predict a discrete outcome or event. For instance, suppose a bank has a dataset where:

X or Features = [credit history, loans, investment details, etc.]

Y or Target = Has Customer A defaulted in the past? This is usually represented by a boolean value.

The classification algorithm chosen by the bank will output a discrete outcome, False/True or 0/1. This is known as binary classification. If the bank’s Target instead contained more than two possible values, it would be called multiclass classification. A minimal sketch of this example follows below; first, here are some examples of classification algorithms:

  • Logistic regression (Figure 2)
  • Naive Bayes
  • K-Nearest Neighbours
  • Decision Trees
  • Support Vector Machines*
Figure 2. Logistic regression
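
Here is a minimal sketch of the bank example as a binary classification task. The feature columns and customer values below are entirely made up for illustration, and logistic regression is just one of the algorithms listed above; a real credit model would involve far more data and preprocessing.

```python
# Minimal sketch: binary classification of customer default with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

# X: [years of credit history, number of active loans, savings (thousands)] -- invented values
X = np.array([
    [10, 1, 50],
    [ 2, 4,  5],
    [ 7, 2, 30],
    [ 1, 5,  2],
    [15, 0, 80],
    [ 3, 3,  8],
])
# Y: has the customer defaulted in the past? (1 = True, 0 = False)
y = np.array([0, 1, 0, 1, 0, 1])

clf = LogisticRegression().fit(X, y)

new_customer = np.array([[5, 2, 20]])
print(clf.predict(new_customer))        # discrete outcome: 0 or 1
print(clf.predict_proba(new_customer))  # probabilities behind that decision
```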

Regression is a type of supervised ML in which the algorithm learns from data to predict continuous values. For instance, you could have a dataset where:

X = [number of rooms, number of bathrooms, balcony (True/False), neighbourhood, etc.]

Y = price of the house

Figure 3. Regression

A regression algorithm such as linear regression, random forest regression, or a neural network could make this prediction effectively; a minimal sketch follows.
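
Here is a minimal sketch of the house-price example, assuming scikit-learn. The features and prices are invented, and a categorical feature like neighbourhood would need to be encoded numerically before use, so it is omitted here.

```python
# Minimal sketch: regression predicts a continuous value (house price), not a class.
import numpy as np
from sklearn.linear_model import LinearRegression

# X: [number of rooms, number of bathrooms, balcony (1 = True, 0 = False)] -- invented values
X = np.array([
    [2, 1, 0],
    [3, 2, 1],
    [4, 2, 1],
    [1, 1, 0],
    [5, 3, 1],
])
# Y: price of the house (in thousands)
y = np.array([150, 260, 330, 110, 420])

reg = LinearRegression().fit(X, y)

new_house = np.array([[3, 1, 1]])
print(reg.predict(new_house))  # a continuous estimate, not a 0/1 label
```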

The primary distinction between supervised and unsupervised machine learning lies in the use of labeled data. In supervised learning, the data is “labeled”, meaning it contains both input (X) and output (Y). To further illustrate this difference, imagine supervised learning as akin to a student who receives graded papers, with clear feedback guiding their learning process.

On the other hand, unsupervised learning can be likened to a gamer exploring a vast, open-ended video game. In this scenario, the learner interacts with the data and makes decisions based on those interactions without the direct guidance of labeled outcomes. Unsupervised learning is a journey of discovery, where the system uncovers patterns and relationships within the data independently.
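
To make the contrast concrete, here is a minimal sketch of unsupervised learning, assuming scikit-learn and a handful of made-up, unlabeled points. K-means clustering is just one illustrative choice; a future article will cover these methods properly.

```python
# Minimal sketch: unsupervised learning gets only X (no labels Y) and finds structure itself.
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled 2-D points forming two loose groups -- invented for illustration.
X = np.array([
    [1.0, 1.1], [0.9, 1.0], [1.2, 0.8],
    [8.0, 8.2], [7.9, 8.1], [8.3, 7.8],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # cluster assignments discovered without any labeled feedback
```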

Figure 4. AI, ML, and DL

While this article focuses on supervised learning, future publications in the AI Odyssey will delve into unsupervised, semi-supervised, and reinforcement learning, providing a comprehensive understanding of these facets of machine learning.

Deep Learning is a subset of machine learning that is conceptually based on how the human brain processes information: by learning from examples. Deep learning relies on neural networks with many layers, and its applications range from self-driving cars to the digital assistants we use every day.

The architecture of neural networks is inspired by the human brain — it mimics how signals are sent from one biological neuron to another.

Figure 5. Neural Network

Picture neural networks as the digital equivalents of the human brain. These networks are composed of layers of artificial neurons, including hidden layers between input and output. The neurons are connected by weighted coefficients, much as neurons in the brain that interact closely are linked by stronger synapses. We can imagine each neuron as a decision-maker, similar to skilled detectives collaborating to solve complex cases.

These artificial neurons meticulously assess information, sifting through it to determine its relevance to the task at hand. In doing so, they function collectively, much like detectives with varied expertise collaborating on solving a mystery.

What makes neural networks extraordinary is their capacity to learn. Unlike their human counterparts, they don’t require rest or breaks. Instead, they tirelessly practice and refine their problem-solving abilities. The more puzzles they tackle, the sharper they become.

The real magic unfolds when these layers of neurons are stacked. Just as detectives bring their unique skills to the table, each layer in a neural network specializes in different aspects of data analysis. The initial layer might focus on rudimentary features, like recognizing lines or edges in an image. Subsequent layers build upon this foundation, combining these features to identify more intricate patterns, such as faces or objects.
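
As a rough illustration of what “stacked layers” means, here is a minimal sketch of a forward pass through a two-layer network in NumPy. The weights here are random purely for demonstration; in a real network they would be learned from data, and future articles will cover training itself.

```python
# Minimal sketch: a forward pass through two stacked layers of artificial neurons.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0, z)          # simple non-linear activation

x = rng.normal(size=4)               # an input with 4 features

# Layer 1: 4 inputs -> 3 hidden neurons (weighted connections + bias)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
# Layer 2: 3 hidden neurons -> 1 output neuron
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

h = relu(W1 @ x + b1)                # first layer extracts simple features
y = W2 @ h + b2                      # second layer combines them into an output
print(y)
```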

The applications of neural networks are far-reaching. They are the driving force behind a wide range of technologies, from enabling your photo app to identify your cat to helping self-driving cars navigate their surroundings. They represent a vital component of the role artificial intelligence plays in our lives.

Now that you are equipped with the history of artificial intelligence, the difference between AI, machine learning, and deep learning, and a short introduction to neural networks, let us move on to the conclusion of this article.

Conclusion

Artificial intelligence, despite its notoriety and its intimidating, dystopia-tinged reputation, is one of the most useful tools humans can wield, provided it is used with good intent. Just as Iron Man, the self-acting spinning mule, marked a turning point in the Industrial Revolution, AI marks the beginning of a new era in the Rise of the Machines. The more we understand this powerful new force that is permeating our lives more deeply by the minute, the better equipped we are to preserve the true essence of human existence and avoid succumbing to the shallow, polished existence cautioned against by Huxley, Butler, and many others. With this, I conclude the first phase of the AI Odyssey. More to come soon.

Further reading

Rise of the Machines. Smithsonian Institution Libraries

Researchers use AI to read word on ancient scroll burned by Vesuvius. The Guardian

In A New Era Of Deepfakes, AI Makes Real News Anchors Report Fake Stories. Forbes

AI model speeds up high-resolution computer vision. MIT

Introduction to Unsupervised Learning. Datacamp

Professors: Perceptron Paved the Way for AI 60 Years Too Soon. Cornell University News

SciFi Robots. Gresham College. [Online Lecture]

Huxley, Aldous. Brave New World. New York: Harper & Brothers, 1932

Footnotes

[1]: SparkNotes, “Tennyson’s Poetry: Section 4,” accessed October 13, 2023.

[2]: Gresham College, “SciFi Robots,” accessed October 13, 2023.

[3]: Science Friday, “The Origin of the Word ‘Robot’,” accessed October 13, 2023.

[4]: British Library, “Industrial Revolution and the Standardization of Machine Parts,” accessed October 13, 2023.

[5]: William Shakespeare, The Tempest (Cambridge: Harvard University Press, 1958).

[6]: University of Manchester, “Alan Turing,” accessed October 13, 2023.

[7]: Alan M. Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (1950): 433–460.

[8]: Harvard Medical School — Science in the News, “The History of Artificial Intelligence,” accessed October 13, 2023.

[9]: Cornell University, “Professors: Perceptron Paved the Way for AI 60 Years Too Soon,” accessed October 13, 2023.

[10]: Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner, “Gradient-Based Learning Applied to Document Recognition,” Proceedings of the IEEE 86, no. 11 (1998): 2278–2324, accessed October 13, 2023.

[11]: John McCarthy, “What Is Artificial Intelligence?” (Stanford University, 2007), accessed October 13, 2023.

[12]: IBM, “Machine Learning,” accessed October 13, 2023.

[13]: IBM, “Supervised Learning,” accessed October 13, 2023.

[*]: Hyperlinks will be added once articles about these algorithms are published.

Bibliography

British Library. “Industrial Revolution and the Standardization of Machine Parts.” Accessed October 13, 2023. https://www.bl.uk/learning/timeline/item126618.html.

Cornell University. “Professors: Perceptron Paved the Way for AI 60 Years Too Soon.” Accessed October 13, 2023. https://news.cornell.edu/stories/2019/09/professors-perceptron-paved-way-ai-60-years-too-soon

DataCamp. “A Beginner’s Guide to Supervised Machine Learning.” Accessed October 13, 2023. https://www.datacamp.com/blog/supervised-machine-learning.

Gresham College. “SciFi Robots.” Accessed October 13, 2023. https://www.gresham.ac.uk/watch-now/scifi-robots.

Harvard Medical School — Science in the News. “The History of Artificial Intelligence.” Accessed October 13, 2023. https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/.

Hindawi. “Evolution of Artificial Intelligence in Neuroscience.” Accessed October 13, 2023. https://www.hindawi.com/journals/scn/2022/1862888.

Huxley, Aldous. Brave New World. New York: Harper & Brothers, 1932

IBM. “Deep Learning.” Accessed October 13, 2023. https://www.ibm.com/topics/deep-learning.

IBM. “Machine Learning.” Accessed October 13, 2023. https://www.ibm.com/topics/machine-learning.

IBM. “Neural Networks.” Accessed October 13, 2023. https://www.ibm.com/topics/neural-networks#:~:text=Neural%20networks%2C%20also%20known%20as,neurons%20signal%20to%20one%20another.

IBM. “Supervised Learning.” Accessed October 13, 2023. https://www.ibm.com/topics/supervised-learning.

LeCun, Yann, Léon Bottou, Yoshua Bengio, and Patrick Haffner. “Gradient-Based Learning Applied to Document Recognition.” Proceedings of the IEEE 86, no. 11 (1998): 2278–2324. https://doi.org/10.1109/5.726791.

McCarthy, John. 2007. “What Is Artificial Intelligence?” Stanford University. Accessed October 13, 2023. http://www-formal.stanford.edu/jmc/.

Science Friday. “The Origin of the Word ‘Robot’.” Accessed October 13, 2023. https://www.sciencefriday.com/segments/the-origin-of-the-word-robot/.

Shakespeare, William. 1958. The Tempest. Cambridge: Harvard University Press.

Smithsonian Institution Libraries. “Rise of the Machines.” Accessed October 13, 2023. https://library.si.edu/exhibition/fantastic-worlds/rise-of-the-machines.

SparkNotes. “Tennyson’s Poetry: Section 4.” Accessed October 13, 2023. https://www.sparknotes.com/poetry/tennyson/section4.

Towards Data Science. “Understanding the Difference Between AI, ML, and DL.” Accessed October 13, 2023. https://towardsdatascience.com/understanding-the-difference-between-ai-ml-and-dl-cceb63252a6c.

Turing, Alan M. 1950. “Computing Machinery and Intelligence.” Mind 59, no. 236 (1950): 433–460.

University of Manchester. “Alan Turing.” Accessed October 13, 2023. https://www.manchester.ac.uk/discover/history-heritage/history/heroes/alan-turing/#:~:text=Alan%20Mathison%20Turing%20(1912%E2%80%931954,Park%20during%20World%20War%20II.
