The History of Artificial Intelligence: From Ancient Myths to Modern Machines, from Turing to Tomorrow


by Paul Ekwere, London, April 2024 (update)

Figure 1: A humanoid robot playing chess with its human lessee at a Paris café; AI-generated original by author

In the grand theatre of human achievement, few actors have made an entrance as dramatic, controversial, and downright audacious as Artificial Intelligence. Diving into the labyrinth of Artificial Intelligence (AI) history and its probable future is akin to embarking on a time-traveling escapade, where the line between science fiction and reality blurs faster than a quantum computer solving a Rubik’s cube.

Imagine, if you will, a world where machines not only perform tasks but learn, adapt, and evolve. A world where your toaster might one day outsmart you at chess, and your vacuum cleaner could pen a sonnet to rival Shakespeare.

Welcome, dear reader, to the thrilling, terrifying, and utterly captivating world of Artificial Intelligence.

Ancient Myths and Legends

The idea of creating artificial beings that can think and act like humans or gods can be traced back to ancient myths and legends from various civilizations, long before the term “Artificial Intelligence” was coined. In Greek mythology, the god Hephaestus was the master craftsman who created mechanical servants, such as the bronze giant Talos, who guarded the island of Crete, and the golden maidens, who assisted him in his workshop.

Figure 2: Talos, Guardian of Crete; AI-generated original by author

In Hindu mythology, the king Ravana had a flying chariot called Pushpaka Vimana, which could navigate autonomously and follow his commands.

What is AI?

To begin a discourse on Artificial Intelligence, we must first define it, and to define “Artificial Intelligence”, we must first explore the definition of the word “Intelligence”.

Artificial Intelligence (AI) is the branch of computer science that aims to create machines and systems that can perform tasks that normally require human intelligence, such as reasoning, learning, decision making, perception, and natural language processing. It essentially aims at human mimicry by computers — the simulation of human intelligence processes by machines, especially computer systems.

AI is not a new concept; in fact, it has a long and fascinating history that spans diverse cultures, disciplines, and domains. In this blog post, we will explore some of the key milestones and developments in the history of AI, from ancient myths and legends to modern applications and challenges.

The Birth of Modern AI

The term “artificial intelligence” was coined by the American computer scientist John McCarthy in 1956, when he organized a conference at Dartmouth College and invited a group of researchers interested in creating machines that could simulate human intelligence. The conference is widely considered the birth of modern AI, as it marked the beginning of a new field of study that attracted funding, talent, and attention. Among the attendees were Marvin Minsky, Claude Shannon, Allen Newell, and Herbert Simon, who later became influential figures in AI research. (Alan Turing, whose work laid much of the theoretical groundwork, had died in 1954 and did not attend.)

One of the early achievements of AI was the development of the Logic Theorist, a program that could prove mathematical theorems using symbolic logic[1], created by Allen Newell, Herbert Simon, and Cliff Shaw in 1955. Another milestone was the creation of ELIZA, a natural language processing program that could mimic a psychotherapist, created by Joseph Weizenbaum in 1966. ELIZA was one of the first examples of a chatbot, a computer program that can converse with humans using natural language.

The Rise and Initial Decline of AI

The 1960s and 1970s witnessed a rapid expansion of AI research, as many subfields and applications emerged, such as computer vision, speech recognition, knowledge representation, expert systems, robotics, and machine learning. AI also received significant support and funding from the military and the government, especially during the Cold War and the Space Race. However, AI also faced many challenges and limitations, such as the difficulty of scaling up and generalizing solutions, the lack of common sense and contextual understanding, the brittleness and unreliability of the systems, and the ethical and social implications of the technology. These factors led to a period of reduced interest and funding, known as the “AI winter”, which lasted from the late 1970s to the late 1980s.

The Resurgence of AI

The 1990s and 2000s saw a resurgence of AI, thanks to several factors: the availability of large amounts of data, the increase of computational power and storage, the development of new algorithms and methods, such as neural networks and deep learning, and the emergence of new domains and applications, such as the internet, social media, gaming, and e-commerce. AI also became more accessible and ubiquitous, as it was integrated into various products and services, such as search engines, digital assistants, recommendation systems, facial recognition, and self-driving cars. AI went on to achieve remarkable feats, such as defeating human champions at chess (Deep Blue, 1997), Jeopardy! (Watson, 2011), and Go (AlphaGo, 2016), generating realistic images and videos, and creating original music and art.

Machine Learning is a subset of Artificial Intelligence that involves the development of algorithms that can learn from and make predictions or decisions based on data. It enables computers to improve their performance on a specific task over time without being explicitly programmed. Machine Learning has been around since the 1950s, but it has gained significant attention in recent years due to the availability of large amounts of data and increased computational power.
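As a minimal illustration of that idea, the following Python sketch (using scikit-learn and its built-in Iris dataset, chosen purely for convenience) trains a model that is never given an explicit classification rule; it infers one from labelled examples:

```python
# Minimal supervised-learning sketch: the model infers a decision rule
# from labelled examples rather than being explicitly programmed with one.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # flower measurements and species labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = DecisionTreeClassifier()
model.fit(X_train, y_train)  # "learn" from the training data
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```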

Deep Learning, a subset of Machine Learning, employs multi-layered neural networks for learning and decision-making. These networks are capable of learning features and representations of data at varying levels of abstraction, enabling deep learning models to perform complex tasks like image and speech recognition. Although Deep Learning’s roots trace back to the 1940s, it has gained prominence in the 21st century with the advent of vast data availability and enhanced computational capabilities.
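To illustrate the “multi-layered” part, here is a toy PyTorch sketch: a deep network is literally a stack of layers, each transforming its input into a more abstract representation (the sizes below are arbitrary, picked to suggest an image classifier):

```python
import torch
import torch.nn as nn

# A tiny multi-layer ("deep") network: each Linear layer learns a
# representation of its input at a higher level of abstraction.
model = nn.Sequential(
    nn.Linear(784, 128),  # raw pixels -> low-level features
    nn.ReLU(),
    nn.Linear(128, 64),   # low-level -> higher-level features
    nn.ReLU(),
    nn.Linear(64, 10),    # features -> scores for 10 classes
)

x = torch.randn(1, 784)   # a dummy 28x28 image, flattened
print(model(x).shape)     # torch.Size([1, 10])
```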

Machine Learning and Deep Learning are valuable as they enable computers to learn from data and produce predictions or results, which can be utilized across a broad spectrum of problems in various domains.

The Rise of Generative AI

Born from the audacious dreams of science fiction and the relentless curiosity of mankind, Generative AI has waltzed onto the stage with all the subtlety of a sledgehammer in a china shop.

Generative AI, sometimes referred to as GenAI, has been conceptualized for decades but has only recently become technically possible. It’s worth noting that Generative AI is different from the theoretical AGI (Artificial General Intelligence), which aims to replicate human-level general intelligence in machines.

Generative AI models are trained on input data and then create new data that resembles it. Improvements in transformer-based deep neural networks led to a surge of generative AI systems in the early 2020s. They became hugely popular after OpenAI released DALL-E (2021) and then ChatGPT (November 2022).
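For a flavour of how such a model is used, the sketch below generates text with GPT-2, a small open-source transformer available through the Hugging Face transformers library (an illustrative stand-in; ChatGPT itself is a hosted service and is not what runs here):

```python
# Minimal text-generation sketch with an open-source transformer (GPT-2).
# The model extends a prompt by repeatedly predicting the next token.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The history of artificial intelligence", max_new_tokens=30)
print(result[0]["generated_text"])
```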

Generative AI has become very common in every area of work and personal life. There are many benefits to the development of “Large Language Models” (LLMs, like ChatGPT) and other GenAI models, which have increased the availability of advanced AI tools in 2023 and 2024. However, there are also criticisms, from safety and privacy, to ethics and bias, to commercial structures and paywalls that restrict access to the best GenAI models, possibly creating a bigger social and knowledge gap between the “haves” and the “have-nots”, and potentially accelerating societal inequalities.

The Chronology of AI Summarised

Here are some key milestones in the development of AI:

· Greek Myths (Antiquity): Greek myths of Hephaestus and Pygmalion incorporated the idea of intelligent automata (such as Talos) and artificial beings (such as Galatea and Pandora).

· 1940s — 1950s: The ‘Birth’ of Artificial Intelligence

· 1943: During World War II, Alan Turing and neurophysiologist Grey Walter were among the bright minds who tackled the challenges of intelligent machines.

· 1950: Alan Turing publishes his paper “Computing Machinery and Intelligence,” introducing the Turing Test.

· 1950: Isaac Asimov, a science fiction writer, picked up the idea of machine intelligence and imagined its future. He is best known for the Three Laws of Robotics¹⁶, first introduced in 1942 and collected in I, Robot (1950).

· 1956: The term ‘artificial intelligence’ is coined at the Dartmouth Conference¹⁶.

· 1966: Joseph Weizenbaum at MIT develops ELIZA, an early example of a natural language processing program.

· 1974: The U.S. and British Governments stop funding undirected research into artificial intelligence.

· 1980s: AI winter due to withdrawal of funding.

· 2017: Google researchers introduce the ‘Transformer’, a modern AI architecture that will become the underpinning of Generative AI.

· 2020s: AI boom due to the successful application of machine learning in academia and industry.

· 2022: OpenAI takes the world by storm with its viral introduction of ChatGPT — the now ubiquitous large language model.

· 2023–2024: The EU AI Act is politically agreed in late 2023 and formally adopted in 2024; AI regulation being developed in most other countries has yet to be fully adopted into law.

· 2023–2024: The Generative AI race heats up, with key players like Microsoft, OpenAI, Google, Anthropic, and Meta (formerly Facebook) releasing iteratively improving versions of their Generative AI models, such as Copilot, Gemini, GPT-4V and GPT-4 Turbo, and Claude, alongside open-source foundation models like the UAE’s Falcon 40B and Meta’s Llama.

· 2023–2024: Microsoft brings generative AI to the world of work with Copilot fully integrated into Microsoft 365 apps like Word, Excel, Teams, and PowerPoint.

· 2023–2024: Pair-programming assistants like GitHub Copilot and Copilot X and Databricks Assistant, along with foundation-model services like Amazon Bedrock, are revolutionising software development.

· 2024: Generative AI-powered robots show huge promise in general-purpose tasks, with companies like Tesla and BMW introducing Generative AI-powered humanoid robots in their factories.

· 2024: Small language models (SLMs) like Phi-2 and ‘on-device’ AI are introduced by the likes of Microsoft and Samsung as processing power and AI models improve.

· 2025 — Future: Humanoid Robots in Homes? General Purpose AI Assistants replace mobile and PC operating systems? Artificial General Intelligence (AGI)?

Asimov’s Laws of Robotics

1. First Law: A robot may not (willingly) injure a human being or, through inaction, allow a human being to come to harm²¹. This is the robotic equivalent of the Hippocratic Oath.

2. Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law²¹. This is the “I, Robot, am your humble servant” law, ensuring that robots are here to serve us, not the other way around.

3. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law²¹. This is the “self-preservation” law, ensuring that robots can’t be casually destroyed by their human overlords.

Asimov later added a Zeroth Law that superseded the others: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm”²². This is the “greater good” law, ensuring that robots consider the welfare of humanity as a whole.
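As a playful aside, the four laws amount to a strict precedence hierarchy, which can be caricatured in a few lines of Python. This is a toy sketch only: no real robot encodes anything like it, and it deliberately ignores Asimov’s subtler conflicts (for instance, the Zeroth Law permitting harm to an individual to protect humanity):

```python
from dataclasses import dataclass

# Toy model of Asimov's laws as a veto hierarchy: a proposed action is
# forbidden by the highest-priority law it violates.
@dataclass
class Action:
    harms_humanity: bool = False   # Zeroth Law concern
    harms_human: bool = False      # First Law concern
    disobeys_order: bool = False   # Second Law concern
    endangers_robot: bool = False  # Third Law concern

def evaluate(action: Action) -> str:
    if action.harms_humanity:
        return "Forbidden by the Zeroth Law"
    if action.harms_human:
        return "Forbidden by the First Law"
    if action.disobeys_order:
        return "Forbidden by the Second Law"
    if action.endangers_robot:
        return "Forbidden by the Third Law"
    return "Permitted"

print(evaluate(Action(harms_human=True, endangers_robot=True)))
# -> "Forbidden by the First Law" (the higher-priority law wins)
```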

Asimov’s Three Laws of Robotics have had a profound influence on AI research, shaping the way we think about machine behaviour and ethics²⁴.

· Ethical Framework: The laws have grown from a thought experiment into an essential conceptual framework for real-world robotics and AI ethics²⁴. They’ve sparked countless discussions and debates, highlighting the importance of designing AI systems that respect and protect human life²⁴. There are, however, interpretations of this, such as AI that works ‘for the greater long-term good’ despite present-day costs, that the AI community frowns upon as distracting from current AI harms and risks.

· Human Safety: The First Law underscores the importance of ensuring that AI systems do not cause harm to humans²⁴. This has led to the development of AI technologies prioritizing human safety, security, and dignity²⁴.

· Human-AI Collaboration: The Second Law emphasizes the need for AI to obey human instructions, placing humans in control²⁴. This has highlighted the importance of developing AI systems that augment human capabilities, foster collaboration, and empower individuals rather than replace them²⁴.

· Ethical System Behaviour: The Third Law calls for AI systems to protect their own existence as long as doing so aligns with the first two laws. Asimov’s ‘self-preservation’ law is mostly not baked into any current artificially intelligent systems, as it is the most challenging to interpret and implement ethically.

· Influence on Public Perception: Asimov’s laws have also influenced public perception of AI, becoming a cultural touchstone in understanding artificial intelligence²⁵.

These laws have shaped practical discussions on AI ethics, but the rapid progress of AI technology demands continued scrutiny of these fundamental rules. As AI improves, ethical issues have become more important, resulting in the creation of ethical standards, AI ethics boards, and research centres focused on AI ethics²⁴.

AI Risks, Ethics, Bias and Responsible AI Development

As AI becomes more integrated into our daily lives, it’s important to consider the potential risks and challenges it presents¹³. Here are some of the key risks associated with AI:

· Lack of AI Transparency and Explainability: AI and deep learning models can be difficult to understand, even for those who work directly with the technology. This leads to a lack of transparency into how and why AI comes to its conclusions¹³.

· Job Losses Due to AI Automation: AI-powered job automation is a pressing concern as the technology is adopted in industries like marketing, manufacturing, and healthcare¹³. By 2030, tasks that account for up to 30 percent of hours currently being worked in the U.S. economy could be automated¹³.

· Social Manipulation Through AI Algorithms: AI algorithms can be used to manipulate social and political discourse, spread misinformation, and influence public opinion¹³.

· Privacy Violations: AI systems often require substantial amounts of data, which can lead to privacy concerns¹³.

· Algorithmic Bias: AI systems can perpetuate and amplify existing biases if they are trained on biased data without the methodology to detect and mitigate bias in how data is collected, in the data itself, and in how model outputs are interpreted and used¹³ (a toy illustration of one simple bias check appears after this list).

· Gen AI Hallucinations: Hallucinations are nonsensical or non-factual outputs from generative AI models. They are sometimes very plausible (particularly in text generation) and can lead to the spread of misinformation, among other challenges such as the erosion of trust.

· Overreliance on AI & Automation: As humans use GenAI tools more, and given that these models occasionally ‘hallucinate’, we are prone to over-relying on tools that mostly ‘sound right’ or mostly operate correctly. The resulting risks range from the banal (made-up information in a court case) to the serious and fatal (self-driving cars running over pedestrians).

· Accelerated Proliferation of Cyberattacks and Other Security Issues: Recent advances in GenAI come with specific online-security issues. LLMs and similar foundation models present new attack vectors that bad actors can capitalise on, as well as a more ‘efficient’ way for bad actors to probe networks and software for weaknesses and exploit them more quickly and at scale.

For context, I explore a specific type of LLM security issue in this blog post.

This is a non-exhaustive list. There are many other AI risks to consider, such as deepfakes in video and photo imagery, voice cloning, and identity theft and fraud, of which there have already been two very high-profile cases in the media in 2024.
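On the algorithmic-bias point above, one of the simplest checks practitioners run is comparing a model’s selection rates across demographic groups. Here is a toy sketch in Python with NumPy, on made-up data (illustrative only, not a real fairness audit):

```python
import numpy as np

# Toy demographic-parity check: compare the rate of positive decisions
# the model makes for two groups. A large gap is a red flag worth
# investigating, though not proof of bias on its own.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])                  # model decisions
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])  # group labels

rate_a = preds[group == "A"].mean()
rate_b = preds[group == "B"].mean()
print(f"Selection rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print(f"Demographic parity gap: {abs(rate_a - rate_b):.2f}")
```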

Addressing these challenges requires a concerted effort to develop AI responsibly, emphasizing ethical considerations, transparency, and inclusivity in AI development and deployment. Initiatives like the Ethics Guidelines for Trustworthy AI by the European Commission and the AI principles outlined by leading AI organizations reflect a growing commitment to responsible AI.

As we progress into a more AI-enabled future, it becomes more and more important that we adopt Responsible AI practices to ensure we use these AI systems in a transparent and fair manner, and that we take adequate steps to mitigate bias, harms, and other risks to humans and humanity.

The Future of AI

Figure 3: Tesla humanoid bots in testing at a Tesla factory (courtesy, Tesla 2023)

AI is inevitably shaping the future of humanity across nearly every industry, from transportation and manufacturing to healthcare and education. AI is already the main driver of emerging technologies like big data, robotics, and IoT.

In the next decade, we can expect AI to transform the scientific method, become a pillar of foreign policy, and lead to serious government investment. AI will also continue to impact the labour market, potentially leading to long-term changes in work, education, and entertainment.

· AI Regulation: As of 2024, AI regulation focuses on promoting innovation while ensuring ethical use, data privacy, and bias mitigation. Efforts include national laws, international agreements, and industry self-regulation, with significant initiatives in the EU (the AI Act) and sector-specific guidelines in the US. Global cooperation aims to harmonize standards, balancing the protection of public interests with the encouragement of technological advancement. More AI regulation is expected globally, and it will remain a trending topic through 2024 and into 2025.

· AI and human collaboration: Rather than replacing human workers, AI can augment and complement their skills and abilities, creating new forms of synergy and productivity. For example, AI can assist doctors in diagnosing and treating diseases, teachers in designing and delivering personalized education, and artists in creating and editing novel works. AI can also help humans in making better decisions, solving complex problems, and discovering new knowledge.

· Quantum AI: The potential integration of quantum computing with AI promises to unlock new frontiers in processing power, making today’s complex problems tomorrow’s trivial tasks.

· AI and ethics: As AI becomes more capable and pervasive, it also raises ethical and moral questions, such as how to ensure its fairness, accountability, transparency, and explainability, how to protect the privacy and security of data and users, how to prevent and mitigate its potential harms and biases, and how to align its goals and values with those of humans. AI also challenges the existing laws and regulations, such as those related to intellectual property, liability, and human rights.

· AI and society: AI can have profound impacts on various aspects of society, such as economy, politics, culture, and environment. AI can create new markets and industries, as well as disrupt and transform existing ones. AI can also influence the distribution and allocation of resources, wealth, and power, creating new opportunities and risks for distinct groups and regions. AI can also affect the social norms and behaviours, such as communication, collaboration, and trust, as well as the cultural diversity and identity, such as language, art, and religion.

· An AI Tax is a proposed fiscal strategy designed to mitigate the economic and social impacts of automation and artificial intelligence (AI), especially concerning job displacement. It could be implemented in various forms, including taxes on AI companies, a direct robot tax, a Value Added Tax (VAT) on AI-generated products/services, or taxes on AI transactions.

· The revenue from such a tax could support social welfare programs, fund education and retraining initiatives, contribute to an AI Universal Basic Income (UBI), or finance research into ethical AI development. However, implementing an AI Tax poses significant challenges, such as defining the tax base, ensuring international cooperation, and balancing the need to not hinder innovation. The goal of an AI Tax would be to leverage the economic benefits of AI and automation while addressing their potential downsides, ensuring a fair and inclusive transition into an increasingly automated future.

Conclusion

In the grand tapestry of human innovation, few threads are as colourful and complex as the history of Artificial Intelligence (AI). This journey from theoretical musings to practical applications has not only reshaped our world but also constantly redefined the boundaries between the possible and the realm of science fiction.

The journey of AI from its inception to its current state has been a fascinating one. As we look forward to the future, one thing is certain: AI will continue to be a major player in shaping our world.

As we continue to develop and integrate AI into our lives, it’s crucial that we remain aware of the potential risks and work to mitigate them, innovating responsibly for a brighter and more equitable future. Whether AI becomes the scaffold of a utopian future or a cautionary tale hinges on our collective vision and the choices we make today.

In the history of artificial intelligence, we are not just observers but active participants shaping the narrative of tomorrow.

@paul_ekwereii

This post was originally published by the author in association with the AI Accelerator Institute. You can read the original publication here —

History of AI: From ancient myths to modern machines, from Turing to tomorrow (aiacceleratorinstitute.com)

References

(1) Updates to the OECD’s definition of an AI system explained. https://oecd.ai/en/wonk/ai-system-definition-update

(2) AI-Principles Overview — OECD.AI. https://oecd.ai/en/ai-principles/

(3) OECD updates definition of Artificial Intelligence ‘to inform EU’s AI …. https://www.euractiv.com/section/artificial-intelligence/news/oecd-updates-definition-of-artificial-intelligence-to-inform-eus-ai-act/

(4) Artificial intelligence — OECD. https://www.oecd.org/digital/artificial-intelligence/

(5) Daugherty, P., & Wilson, H. J. (2018). Human + machine: Reimagining work in the age of AI. Harvard Business Press

(6) Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/8

(7) Lee, K. F. (2018). AI superpowers: China, Silicon Valley, and the new world order. Houghton Mifflin Harcourt.

(8) Russell, S. J., & Norvig, P. (2010). Artificial intelligence: a modern approach. Prentice Hall

(9) Nilsson, N. J. (2014). Principles of artificial intelligence. Morgan Kaufmann.

(10) Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press

(11) Domingos, P. (2015). The master algorithm: How the quest for the ultimate learning machine will remake our world. Basic Books

(12) Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Knopf.

(13) 12 Risks and Dangers of Artificial Intelligence (AI) | Built In. https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence

(14) What are the risks of artificial intelligence (AI)? — Tableau. https://www.tableau.com/data-insights/ai/risks

(15) The real risks of artificial intelligence — BBC. https://www.bbc.com/future/article/20161110-the-real-risks-of-artificial-intelligence

(16) Timeline of artificial intelligence — Wikipedia. https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence

(17) AI: 15 key moments in the story of artificial intelligence. https://www.bbc.co.uk/teach/ai-15-key-moments-in-the-story-of-artificial-intelligence/zh77cqt

(18) AI Timeline: Key Events in Artificial Intelligence from 1950–2023. https://www.theainavigator.com/ai-timeline

(19) History of artificial intelligence — Wikipedia. https://en.wikipedia.org/wiki/History_of_artificial_intelligence

(20) 7 Early Imaginings of Artificial Intelligence | HISTORY. https://www.history.com/news/artificial-intelligence-fiction

(21) Three Laws of Robotics — Wikipedia. https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

(22) Three laws of robotics | Definition, Isaac Asimov, & Facts. https://www.britannica.com/topic/Three-Laws-of-Robotics

(23) How Asimov’s Three Laws of Robotics Impacts AI — Unite.AI. https://www.unite.ai/how-asimovs-three-laws-of-robotics-impact-ai/

(24) Asimov’s Laws and Today’s AI: Unlocking the Power of Ethics. https://goliathresearch.com/blog/ai-and-ethics

(25) Asimov’s Three Laws of Robotics, Applied to AI. https://www.psychologytoday.com/gb/blog/the-digital-self/202310/asimovs-three-laws-of-robotics-applied-to-ai

(26) Our AI Overlord: The Cultural Persistence of Isaac Asimov’s Three Laws …. https://emergencejournal.english.ucsb.edu/index.php/2018/06/05/our-ai-overlord-the-cultural-persistence-of-isaac-asimovs-three-laws-of-robotics-in-understanding-artificial-intelligence/

(28) Turing, A. M. (1950). “Computing Machinery and Intelligence.” Mind, 59(236), 433–460 https://psycnet.apa.org/record/1951-02887-001

(29) McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1956). “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.” https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/1904

(30) Goodfellow, I., Bengio, Y., & Courville, A. (2016). “Deep Learning.” MIT Press. https://link.springer.com/article/10.1007/s10710-017-9314-z

(31) Full Text for the Proposal for The EU AI Act: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206


Paul E.

LLM and AI researcher with a keen interest in AI Safety, Ethics and AI Bias Mitigation. Disclaimer: Opinions on this blog are wholly mine.