Artificial Intelligence: A Historical Journey and Introduction (AI/ML/DL)

Exploring the Evolution of Artificial Intelligence: Milestones, Breakthroughs, and Future Prospects

Dr Barak Or
metaor.ai
22 min read · Mar 22, 2024

--

Introduction

The fascination with artificial intelligence (AI) dates back centuries, with ancient myths depicting mechanical beings created by gods. In the scientific arena, early visionaries like Ada Lovelace and Charles Babbage laid the groundwork with their ideas of programmable machines.

AI’s formal journey as a scientific field began in the mid-20th century. Alan Turing, a key figure in AI, introduced the Turing Test in 1950, proposing it as a measure of machine intelligence. This test assesses a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human. In 1956, John McCarthy coined the term “Artificial Intelligence” at the Dartmouth Conference, uniting researchers with a shared interest in machine intelligence.

Image by author

The development of AI has seen various highs and lows. Following the initial excitement, the field encountered “AI winters,” periods marked by reduced funding and skepticism. However, the 21st century has seen a revival in AI research, driven by breakthroughs in computational power and the emergence of deep learning techniques.

AI’s evolution highlights the relentless human pursuit of understanding and replicating intelligence. The field has experienced several booms since 1956, particularly during the 1980s with the rise of expert systems, and more recently, with the success of deep learning models in various applications, from image recognition to natural language processing.

Key Milestones and Breakthroughs in AI

AI has seen remarkable milestones that have significantly shaped its trajectory and impact. Some of the key breakthroughs include:

Development of Artificial Neural Networks (1943): The concept of artificial neural networks (ANN) was introduced by Warren McCulloch and Walter Pitts in their paper “A Logical Calculus of the Ideas Immanent in Nervous Activity”. This foundational work laid the groundwork for future developments in machine learning (ML) and pattern recognition.

Success of Expert Systems (1980s): Expert systems, which simulate the decision-making ability of human experts, became widely used in various industries, including medicine, finance, and engineering.

IBM’s Deep Blue Victory (1997): IBM’s Deep Blue made history by becoming the first computer to defeat a reigning world chess champion, Garry Kasparov. This event marked a significant milestone in AI, showcasing the potential of machines to perform complex cognitive tasks.

Impact of Big Data and Computational Power (2000-present): The advent of big data and advancements in computational power have fueled the rapid development of AI. These technological advancements have enabled the training of more sophisticated models and the processing of vast amounts of information.

Breakthroughs in Deep Learning (2012): Deep learning, a subset of machine learning, has revolutionized AI through significant advancements in various fields, particularly in computer vision. One of the most notable contributions in this domain is the landmark paper titled “ImageNet Classification with Deep Convolutional Neural Networks” by Alex Krizhevsky et al. (2012), which introduced the AlexNet architecture. This model demonstrated the remarkable capabilities of deep neural networks by achieving unprecedented performance in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark competition in image classification and object detection. The ImageNet dataset, central to the ILSVRC, contains over 14 million annotated images spanning more than 20,000 categories. Since the challenge’s introduction in 2010, the dataset has been instrumental in advancing the field of computer vision by providing a large-scale resource for training and evaluating AI models. The success of AlexNet and subsequent deep learning models on the ImageNet challenge has underscored the potential of deep neural networks in processing and interpreting complex visual data, paving the way for further innovations in AI.

Transformer Architecture (2017): The paper “Attention is All You Need” by Vaswani et al. introduced the transformer architecture, which has had a profound impact on natural language processing. This architecture, based on self-attention mechanisms, has become the foundation for many state-of-the-art language models.
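
To make the self-attention idea concrete, here is a minimal NumPy sketch of scaled dot-product attention. The shapes and data are invented for illustration; a real transformer adds learned query/key/value projections, multiple attention heads, and positional information.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over token embeddings X: (tokens, d).

    For simplicity the embeddings serve directly as queries, keys, and
    values; a real model computes them with learned weight matrices.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                   # pairwise similarity of tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ X                              # each token becomes a weighted mix

X = np.random.default_rng(0).normal(size=(4, 8))    # 4 tokens, 8-dimensional embeddings
print(self_attention(X).shape)                      # (4, 8)
```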

Recent Achievements: GPT-3 and AlphaFold (2020): OpenAI’s GPT-3 (Brown et al., 2020) has set new standards in language processing and made powerful language models broadly accessible. DeepMind’s AlphaFold (Senior et al., 2020) has made groundbreaking advances in predicting protein structures, a critical challenge in biology. These milestones underscore the dynamic and rapidly evolving nature of AI.

Motivation Behind AI Research and Applications

The motivations driving research and applications in AI are diverse, reflecting the broad scope and potential of this transformative technology. At its core, AI research is fueled by the desire to understand and replicate human intelligence, with the ultimate goal of creating systems that can reason, learn, and adapt to complex environments.

One of the primary motivations behind AI research is the quest to solve complex problems that are beyond the reach of traditional computational methods. AI offers innovative approaches to tackling challenges in various domains, from deciphering genetic codes to optimizing energy consumption. By harnessing the power of AI, researchers and practitioners aim to develop solutions that can enhance our understanding of the world and improve the quality of life.

The potential of AI to revolutionize industries is another significant driver of research and development. In healthcare, AI-powered diagnostic tools and personalized treatment plans promise to improve patient outcomes and streamline medical processes. In transportation, autonomous vehicles and intelligent traffic management systems are poised to transform the way we commute, reducing accidents and easing congestion. The finance sector is also undergoing a paradigm shift, with AI algorithms enabling more accurate risk assessments and fraud detection.

Beyond practical applications, AI research is motivated by the desire to improve efficiency and productivity across various sectors. By automating routine tasks and optimizing resource allocation, AI technologies can enhance operational effectiveness and drive economic growth. This has implications for manufacturing, agriculture, and service industries, where AI-driven innovations are reshaping business models and value chains.

In his influential book “Life 3.0: Being Human in the Age of Artificial Intelligence”, Max Tegmark explores the profound implications of AI on the future of humanity. He emphasizes the importance of aligning AI development with human values and ethical principles to ensure that the benefits of AI are equitably distributed and that potential risks are mitigated.

The motivations behind AI research and applications are driven by a combination of intellectual curiosity and practical problem-solving. As AI continues to evolve, it is essential to navigate the challenges and opportunities it presents with a focus on creating a positive impact on society.

Definition of AI

AI can be defined as the simulation of human intelligence in machines that are programmed to think and learn like humans. It encompasses a range of techniques, including machine learning, deep learning, natural language processing, and computer vision.

AI systems can be classified into two main categories: narrow (weak) AI and general (strong) AI. Narrow AI is designed to perform a specific task or a limited set of tasks. For example, virtual assistants like Siri or Alexa, which are programmed to understand and respond to voice commands, or recommendation systems used by platforms like Netflix or Amazon to suggest content or products based on user preferences. These systems operate within a predefined range of functions and do not possess the ability to perform tasks outside their programmed domain.

On the other hand, general AI refers to a hypothetical AI system that can understand, learn, and apply knowledge in various contexts, much like a human being. General AI would have the ability to reason abstractly, plan strategically, learn from experience, and adapt to new situations. While general AI remains a theoretical concept, its development would represent a significant leap forward in AI research. An example of a step toward general AI is OpenAI’s GPT-3, as it can generate human-like text and perform a wide range of language tasks, although it still lacks the full range of human cognitive abilities.

Overview of AI Applications and Impact on Society

AI’s impact on society is revolutionizing industries and reshaping our daily lives. Let’s mention a few of them:

Image by author

Healthcare: AI is revolutionizing various aspects of patient care and medical research. AI algorithms can analyze medical images, such as X-rays and MRIs, with high precision, aiding in the early detection of conditions like cancer, fractures, and neurological disorders. This capability not only improves diagnostic accuracy but also speeds up the process, allowing for timely interventions. AI-powered predictive models can analyze patient data to forecast the progression of diseases and the likelihood of specific outcomes. This information is invaluable for clinicians in making informed decisions about treatment strategies and managing patient care proactively. However, it also necessitates careful consideration of ethical issues, such as data privacy, algorithmic bias, and the implications of AI-driven decisions on patient care.

Autonomous vehicles (AVs): AVs represent a significant application of AI in transforming the transportation sector. Equipped with a combination of sensors, cameras, and advanced AI algorithms, AVs can perceive their environment, make real-time decisions, and navigate without human intervention. This capability is expected to lead to a reduction in traffic accidents, as many collisions are caused by human error. By removing this factor, AVs have the potential to significantly enhance road safety. However, the widespread adoption of autonomous vehicles also presents challenges, including technological, regulatory, and ethical considerations. Ensuring the safety and reliability of AVs, addressing liability in the event of accidents, and managing the transition period where human-driven and autonomous vehicles coexist are all critical issues that need to be addressed.

Education: AI-powered tools are transforming the learning experience by providing personalized and adaptive learning environments. These tools can analyze students’ learning styles, preferences, and performance to tailor educational content and instructional strategies to their needs. However, the integration of AI in education also raises concerns regarding data privacy, the digital divide, and the need for educators to be trained in using AI tools effectively. Addressing these challenges is crucial to ensure that AI technologies are harnessed ethically and equitably to enhance educational outcomes.

Cybersecurity: AI is playing an increasingly vital role in safeguarding digital assets and infrastructure. AI-powered systems can analyze vast amounts of data in real-time, detecting and responding to potential threats with speed and accuracy that far surpasses traditional methods. The arms race between cyber defenders and attackers is intensified as malicious actors leverage AI for sophisticated attacks. Thus, ethical considerations and robust security measures are essential to ensure that AI is used responsibly and effectively in the realm of cybersecurity.

Climate change: AI is emerging as a powerful tool for promoting sustainability and mitigating environmental impacts. By leveraging AI technologies, we can optimize energy consumption, enhance resource management, and develop innovative solutions to reduce greenhouse gas emissions. AI-driven systems can analyze vast amounts of data to identify patterns and trends in energy usage, enabling the implementation of energy-saving measures in various sectors, including transportation, manufacturing, and buildings. For instance, smart grid technologies use AI to balance supply and demand, integrate renewable energy sources, and improve the efficiency of electricity distribution. In agriculture, AI can optimize irrigation and fertilization practices, reducing water consumption and minimizing the use of chemicals. Precision agriculture techniques, powered by AI, help farmers make data-driven decisions, leading to increased crop yields with lower environmental footprints. At the same time, it is essential to consider the environmental impacts of AI technologies themselves, such as energy consumption and electronic waste. In a recent study, researchers estimated that training the GPT-3 model consumed a staggering 1,287 megawatt-hours of electricity, equivalent to the annual energy usage of about 120 U.S. households (roughly 10.7 megawatt-hours per household per year). Additionally, this process resulted in the production of 552 tons of carbon emissions, comparable to the annual emissions from 110 gasoline-powered vehicles on U.S. roads.

Finance: Fraud detection is a critical application, where AI algorithms analyze transaction patterns to identify unusual behavior indicative of fraudulent activities, helping financial institutions protect their customers’ assets. Risk management is another area where AI excels, assessing and predicting risks associated with investments and financial products by analyzing vast amounts of data to provide insights into potential market fluctuations. Algorithmic trading leverages AI-powered systems to execute trades at high speeds based on predefined criteria, analyzing market data in real-time. In credit scoring, AI models analyze a wide range of data, including non-traditional sources, to assess creditworthiness more accurately and inclusively. Portfolio management benefits from AI-driven robo-advisors that offer personalized investment advice and optimize investment strategies based on individual risk tolerance and financial goals.

The integration of AI into various facets of life raises ethical and societal concerns. Issues such as privacy and security are at the forefront. Ensuring responsible AI development is crucial, requiring collaboration among researchers, policymakers, and industry stakeholders to leverage AI’s benefits while addressing its risks. As AI continues to evolve, it is imperative to navigate its ethical implications, including privacy, bias, and accountability. The future of work is also a significant consideration, as AI reshapes job landscapes. Balancing innovation with ethical responsibility will be key in harnessing AI’s potential to benefit society as a whole.

Understanding Machine Learning

Machine Learning (ML) is a subset of AI that allows computers to learn from data and improve their performance over time without being explicitly programmed. It involves the development of algorithms that enable computers to identify patterns and make decisions based on the data they have been exposed to. The formal definition of Machine Learning states that a computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E. For instance, in spam filtering, T is classifying emails as spam or not, P is the fraction of emails classified correctly, and E is a collection of emails already labeled by users. This definition emphasizes the adaptive nature of Machine Learning, where the system’s ability to perform tasks improves with exposure to relevant data.

One common analogy to explain Machine Learning is to consider how a child learns to ride a bike. Initially, the child may not know how to balance, pedal, or steer. However, through repeated attempts and learning from mistakes, the child gradually improves and eventually masters the skill of bike riding. Similarly, in Machine Learning, a computer system is exposed to a large amount of data (e.g., images, texts, or numerical values) and learns to recognize patterns or make predictions based on this data. The concept of Machine Learning dates back to the mid-20th century, with significant advancements occurring in the 1990s due to increased computational power and data availability. Today, Machine Learning is a driving force behind numerous applications, from recommendation systems and natural language processing to autonomous vehicles and medical diagnosis.

Image by author

Types of ML

Supervised Learning
Supervised learning is a type of machine learning where the algorithm is trained on a labeled dataset. This means that each example in the training set is paired with the correct output. The algorithm makes predictions based on the input data and is corrected by the teacher (the labeled data) when its predictions are wrong. Over time, the algorithm adjusts its parameters to minimize errors, thereby learning the mapping from inputs to outputs. This process continues until the model achieves the desired level of accuracy on the training data. In supervised learning, there are two main types of tasks: classification and regression.
Classification involves categorizing input data into predefined classes or labels. For example, email spam detection is a classification task where the algorithm is trained on a dataset of emails labeled as “spam” or “not spam.” The model learns to identify features that distinguish spam emails from non-spam emails and uses this knowledge to classify new emails. Other examples of classification tasks include image recognition (e.g., identifying objects in images) and medical diagnosis (e.g., determining whether a patient has a particular disease).
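
As a rough illustration, here is a minimal classification sketch using scikit-learn; the two features and the tiny dataset are invented for the example.

```python
from sklearn.linear_model import LogisticRegression

# Toy "emails", each described by two illustrative features:
# [count of suspicious words, count of links]
X_train = [[8, 5], [7, 3], [6, 4], [1, 0], [0, 1], [2, 0]]
y_train = ["spam", "spam", "spam", "not spam", "not spam", "not spam"]

# The labeled examples act as the "teacher" that corrects the model.
clf = LogisticRegression()
clf.fit(X_train, y_train)

# Classify a new, unseen email.
print(clf.predict([[5, 4]]))  # expected: ['spam']
```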

Regression involves predicting a continuous value based on the input data. For instance, predicting the price of a house based on features such as its size, location, and number of bedrooms is a regression task. The model learns the relationship between the features and the target value (house price) and uses this to predict the price of new houses.
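
A matching regression sketch, again with invented numbers: the model learns a mapping from house features to a continuous price.

```python
from sklearn.linear_model import LinearRegression

# Toy houses: [size in square meters, number of bedrooms] -> price
X_train = [[50, 1], [80, 2], [100, 3], [120, 3], [150, 4]]
y_train = [150_000, 230_000, 290_000, 330_000, 420_000]

model = LinearRegression()
model.fit(X_train, y_train)

# Predict a continuous value (a price) for a new house.
print(model.predict([[110, 3]]))  # roughly 310,000 on this toy data
```
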
Supervised learning is widely used in applications such as image recognition, speech recognition, and medical diagnosis, where the goal is to predict a specific outcome based on input data.

Image by author

Unsupervised Learning
Unsupervised learning, in contrast to supervised learning, involves training an algorithm on a dataset without any labels. The goal is to discover hidden patterns or structures in the data. Since there are no correct answers or labels provided, the algorithm must learn to identify these patterns on its own. Common techniques in unsupervised learning are clustering and dimensionality reduction.

Clustering involves algorithms that group similar data points together. For example, in market segmentation, a company might use clustering to group customers based on their purchasing behavior, without any prior labeling of the data. Dimensionality reduction involves reducing the number of variables in the data while retaining as much information as possible. This is useful in data visualization and in simplifying complex datasets for further analysis.
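
A minimal clustering sketch, assuming scikit-learn and an invented customer dataset: no labels are provided, and k-means discovers the two groups on its own.

```python
from sklearn.cluster import KMeans

# Toy customers: [purchases per month, average basket value], unlabeled
X = [[2, 20], [3, 25], [2, 22],      # a low-activity group
     [15, 80], [14, 90], [16, 85]]   # a high-activity group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)
print(labels)  # e.g. [0 0 0 1 1 1]; cluster ids are arbitrary, not named segments
```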


Unsupervised learning is particularly valuable in exploratory data analysis, where the goal is to uncover insights from the data rather than predict a specific outcome.

Reinforcement Learning
Reinforcement learning (RL) is a domain where an agent learns to make decisions by interacting with its environment. The agent’s learning is driven by feedback, receiving rewards or penalties for the actions it takes.

The agent decides on actions based on the observed state of the environment, with its decisions being shaped by rewards or penalties received as feedback to optimize future actions. The state represents the current situation or context within the environment that the agent must evaluate to make informed decisions.

The objective of RL is for the agent to develop a strategy, known as a policy, that will earn it the maximum possible reward over time. Unlike being supplied with correct answers (supervised) or searching for hidden patterns in unlabeled data (unsupervised), RL learns from the outcomes of actions taken, akin to a trial-and-error approach. The techniques in reinforcement learning can be broadly classified into three main categories (a minimal Q-learning sketch follows the list):

  1. Value-Based Methods: These methods involve the estimation of the value for each state or action to determine the optimal policy. An example is Q-learning, an algorithm where the agent learns the quality, or “Q-value,” of taking certain actions in given states.
  2. Policy-Based Methods: These methods involve directly learning the policy that decides which action to take in a given state. An example is the policy gradient method, wherein the agent tweaks its policy parameters to maximize the expected reward.
  3. Model-Based Methods: These methods involve constructing a model of the environment that can be used for planning and decision-making. For instance, Dyna-Q [37] integrates Q-learning with a model of the environment, enhancing the learning process through planning.
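
To make the value-based idea concrete, here is a minimal tabular Q-learning sketch on an invented five-state corridor where only reaching the rightmost state pays a reward. A real problem would have a richer environment and tuned hyperparameters.

```python
import random

n_states, n_actions = 5, 2          # states 0..4; actions: 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.3   # learning rate, discount, exploration rate
Q = [[0.0] * n_actions for _ in range(n_states)]

for _ in range(200):                                 # episodes
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: explore sometimes, otherwise act greedily.
        if random.random() < eps:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: Q[s][i])
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == n_states - 1 else 0.0   # reward only at the goal
        # Q-learning update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a').
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print([round(max(q), 2) for q in Q])  # values rise toward the goal (terminal stays 0)
```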

RL has a wide range of applications, from game-playing AI such as AlphaGo, which learns complex strategies in the game of Go, to robotics, where robots learn to navigate and perform tasks, and even to recommendation systems that learn to suggest products or content that will engage users. The strength of reinforcement learning lies in its ability to solve complex problems where the solution is not explicitly known and must be discovered through interaction with the environment.

Role of Data in AI and ML

Data is the lifeblood of AI and ML. It is the raw material that fuels the algorithms and models that drive these technologies. Data provides the information that AI systems need to learn, make decisions, and adapt to new situations. Without data, these technologies would be unable to function. In AI and ML, data can be categorized into three main types: structured, unstructured, and semi-structured.

Structured Data: This type of data is highly organized and formatted in a way that is easily searchable by simple algorithms. It is often stored in databases or spreadsheets and includes data types such as numbers, dates, and strings. Examples of structured data used in AI and ML include customer databases, sales transactions, and sensor data.

Unstructured Data: This type of data is unorganized and does not follow a specific format. It includes text, images, videos, and audio. Unstructured data is more complex and requires advanced processing techniques to extract meaningful information. In AI and ML, unstructured data is used in natural language processing, computer vision, and speech recognition.

Semi-Structured Data: This type of data falls somewhere between structured and unstructured data. It is not as organized as structured data but contains some level of structure. Examples of semi-structured data include JSON files, XML files, and email messages.
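
For instance, a JSON record (parsed here with Python’s standard json module) carries nested keys and lists, so it has structure, yet fields can vary from record to record. The record below is invented for illustration.

```python
import json

record = json.loads("""
{
  "customer": "Alice",
  "orders": [
    {"item": "laptop", "price": 1200},
    {"item": "mouse", "price": 25, "note": "gift wrap"}
  ]
}
""")
# Nested structure is navigable, but there is no fixed, table-like schema:
print(record["orders"][0]["item"])  # -> laptop
```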

Data is used in AI and ML to train and evaluate machine learning models. The process typically involves three distinct datasets: training, validation, and test (a short code sketch follows the list below).

  1. Training Set: This is the primary dataset used to teach the model. It contains a large volume of examples, each consisting of input data and the corresponding output (label or target). By processing this dataset, the model learns to recognize patterns and relationships that are indicative of the underlying problem it is trying to solve. For example, in a model designed to identify spam emails, the training dataset would consist of numerous emails (examples), each labeled as either “spam” or “not spam.”
  2. Validation Set: The validation dataset is used to fine-tune the model’s parameters and prevent overfitting. Overfitting occurs when a model learns the training data too well, capturing noise or random fluctuations instead of the actual signal. The validation dataset provides a way to check the model’s performance on unseen data during the training process. It is used to adjust hyperparameters (such as the learning rate or the number of layers in a neural network model) and to select the best version of the model.
  3. Test Set: Once the model has been trained and fine-tuned, the test dataset is used to evaluate its performance. This dataset is separate from the training and validation datasets and is not used during the training process. It provides an unbiased assessment of how well the model generalizes to new, unseen data. Statistical metrics such as accuracy, precision, recall, and F1 score are often used to quantify the model’s performance on the test dataset.
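
As a rough sketch of how such a split is produced in practice (assuming scikit-learn; the 60/20/20 proportions are a common convention, not a rule): first hold out a test set, then split the remainder into training and validation.

```python
from sklearn.model_selection import train_test_split

X = list(range(100))             # toy inputs
y = [i % 2 for i in range(100)]  # toy labels

# Hold out 20% as the test set, untouched during training.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Split the remaining 80% into training (60% overall) and validation (20% overall).
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 60 20 20
```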

To summarize, the training dataset, which consists of numerous examples and their corresponding labels, is used to teach the model. The validation dataset helps fine-tune the model and prevent overfitting, while the test dataset provides an unbiased evaluation of the model’s performance on unseen data.

Training and evaluating machine learning models involves three distinct datasets: training, validation, and test (Image by author)

Deep Learning

Deep Learning (DL), a subset of ML, has revolutionized many fields by providing advanced tools for analyzing complex data. It involves training deep neural networks (DNNs) with multiple layers to recognize patterns in data. These networks, inspired by the structure and function of the human brain, consist of interconnected nodes called neurons. Each neuron processes input data and passes the result to the next layer. This hierarchical structure enables neural networks to learn from data in a way that mimics human cognition [4]. The architecture of a DNN, the model class at the heart of DL, primarily consists of (a minimal forward-pass sketch follows this list):
Layers: The building blocks of a neural network, including input, hidden, and output layers.
Neurons: Individual processing units within each layer that apply weights and biases to the input data.
Activation Functions: Functions, such as the sigmoid, that determine the output of each neuron and introduce non-linearity into the network.
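
A minimal NumPy forward pass ties these three pieces together. The layer sizes are invented and the weights are random; a trained network would have learned them from data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # activation: squashes values into (0, 1)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                          # input layer: 3 features

W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # hidden layer: 4 neurons
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # output layer: 1 neuron

h = sigmoid(W1 @ x + b1)  # each neuron applies weights, a bias, and an activation
y = sigmoid(W2 @ h + b2)  # the network output, e.g. a probability-like score
print(y)
```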

Image by author

This layered architecture allows DNNs to learn complex patterns and relationships in data, making them particularly effective for tasks such as image recognition, natural language processing, and anomaly detection in cybersecurity. One of the key advantages of Deep Learning is its ability to automatically learn and extract pertinent features from raw data, eliminating the need for manual feature engineering. Furthermore, DNNs are capable of processing and analyzing high-dimensional data, such as images or network traffic logs, with greater efficiency. However, it’s important to note that training a Deep Learning model requires a substantial amount of data to achieve accurate results. Deep Learning models are versatile and can address a wide range of problems, including supervised, unsupervised, and reinforcement learning tasks, making them a powerful tool in the machine learning arsenal.

An example of deep learning architecture (Image by author)

Summary

Artificial Intelligence (AI) has traversed a remarkable journey from ancient myths to cutting-edge technology, profoundly impacting various sectors of society. The evolution of AI, marked by significant milestones and breakthroughs, reflects the human quest to replicate intelligence and solve complex problems. With advancements in machine learning and deep learning, AI continues to revolutionize industries, from healthcare to finance, while raising important ethical considerations.

The future of AI promises even greater possibilities, as researchers strive to develop more sophisticated models and applications. However, it is crucial to ensure that the development of AI aligns with ethical standards and societal values, addressing challenges such as privacy, security, and the impact on employment. As we navigate this evolving landscape, the potential of AI to enhance human life and solve pressing global issues remains vast, making it an exciting and pivotal area of study and innovation.

About the Author

Dr. Barak Or is a professional in the field of artificial intelligence and sensor fusion. He is a researcher, lecturer, and entrepreneur who has published numerous patents and articles in professional journals. Dr. Or leads the MetaOr Artificial Intelligence firm and founded ALMA Tech. LTD, which holds patents in the fields of AI and navigation. He has worked with Qualcomm as a DSP and machine learning algorithms expert. He completed his Ph.D. in machine learning for sensor fusion at the University of Haifa, Israel. He holds M.Sc. (2018) and B.Sc. (2016) degrees in Aerospace Engineering and a B.A. in Economics and Management (2016, Cum Laude) from the Technion, Israel Institute of Technology. He has received several prizes and research grants from the Israel Innovation Authority, the Israeli Ministry of Defence, and the Israeli Ministry of Economy and Industry. In 2021, he was nominated by the Technion for “graduate achievements” in the field of high-tech.

Bibliography

[1] B. A. Toole et al., Ada, the Enchantress of Numbers: Prophet of the Computer Age, a Pathway to the 21st Century. Critical Connection, 1998.
[2] A. M. Turing, Computing Machinery and Intelligence. Springer, 2009.
[3] J. McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon, “A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955,” AI Magazine, vol. 27, no. 4, p. 12, 2006.
[4] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
[5] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[6] W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” The Bulletin of Mathematical Biophysics, vol. 5, pp. 115–133, 1943.
[7] M. Campbell, A. J. Hoane Jr., and F.-h. Hsu, “Deep Blue,” Artificial Intelligence, vol. 134, no. 1–2, pp. 57–83, 2002.
[8] I. A. T. Hashem, I. Yaqoob, N. B. Anuar, S. Mokhtar, A. Gani, and S. U. Khan, “The rise of ‘big data’ on cloud computing: Review and open research issues,” Information Systems, vol. 47, pp. 98–115, 2015.
[9] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Advances in Neural Information Processing Systems, vol. 25, 2012.
[10] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., “ImageNet large scale visual recognition challenge,” International Journal of Computer Vision, vol. 115, pp. 211–252, 2015.
[11] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in Neural Information Processing Systems, vol. 30, 2017.
[12] K. R. Chowdhary, “Natural language processing,” in Fundamentals of Artificial Intelligence, pp. 603–649, 2020.
[13] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., “Language models are few-shot learners,” Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901, 2020.
[14] A. W. Senior, R. Evans, J. Jumper, J. Kirkpatrick, L. Sifre, T. Green, C. Qin, A. Žídek, A. W. Nelson, A. Bridgland et al., “Improved protein structure prediction using potentials from deep learning,” Nature, vol. 577, no. 7792, pp. 706–710, 2020.
[15] S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach. London, 2010.
[16] M. I. Jordan and T. M. Mitchell, “Machine learning: Trends, perspectives, and prospects,” Science, vol. 349, no. 6245, pp. 255–260, 2015.
[17] D. J. Fagnant and K. Kockelman, “Preparing a nation for autonomous vehicles: Opportunities, barriers and policy recommendations,” Transportation Research Part A: Policy and Practice, vol. 77, pp. 167–181, 2015.
[18] J. W. Goodell, S. Kumar, W. M. Lim, and D. Pattnaik, “Artificial intelligence and machine learning in finance: Identifying foundations, themes, and research clusters from bibliometric analysis,” Journal of Behavioral and Experimental Finance, vol. 32, p. 100577, 2021.
[19] A. Agrawal, J. S. Gans, and A. Goldfarb, “Exploring the impact of artificial intelligence: Prediction versus judgment,” Information Economics and Policy, vol. 47, pp. 1–6, 2019.
[20] H. W. Lin, M. Tegmark, and D. Rolnick, “Why does deep and cheap learning work so well?” Journal of Statistical Physics, vol. 168, pp. 1223–1247, 2017.
[21] M. Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence. Vintage, 2018.
[22] J. R. Searle, “Minds, brains, and programs,” Behavioral and Brain Sciences, vol. 3, no. 3, pp. 417–424, 1980.
[23] F. Jiang, Y. Jiang, H. Zhi, Y. Dong, H. Li, S. Ma, Y. Wang, Q. Dong, H. Shen, and Y. Wang, “Artificial intelligence in healthcare: Past, present and future,” Stroke and Vascular Neurology, vol. 2, no. 4, 2017.
[24] J. Beck, M. Stern, and E. Haugsjaa, “Applications of AI in education,” XRDS: Crossroads, The ACM Magazine for Students, vol. 3, no. 1, pp. 11–15, 1996.
[25] G. Apruzzese, M. Colajanni, L. Ferretti, and M. Marchetti, “Addressing adversarial attacks against security systems based on machine learning,” in 2019 11th International Conference on Cyber Conflict (CyCon), vol. 900. IEEE, 2019, pp. 1–18.
[26] C. Zhang, J. Wu, C. Long, and M. Cheng, “Review of existing peer-to-peer energy trading projects,” Energy Procedia, vol. 105, pp. 2563–2568, 2017.
[27] M. S. Norouzzadeh, A. Nguyen, M. Kosmala, A. Swanson, M. S. Palmer, C. Packer, and J. Clune, “Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning,” Proceedings of the National Academy of Sciences, vol. 115, no. 25, pp. E5716–E5725, 2018.
[28] E. Kojola, “ChatCO2: Safeguards needed for AI’s climate risks,” Greenpeace, November 30, 2023.
[29] V. Dhar, “Data science and prediction,” Communications of the ACM, vol. 56, no. 12, pp. 64–73, 2013.
[30] A. L. Samuel, “Some studies in machine learning using the game of checkers,” IBM Journal of Research and Development, vol. 3, no. 3, pp. 210–229, 1959.
[31] T. M. Mitchell, Machine Learning. McGraw-Hill, 1997.
[32] C. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
[33] K. P. Murphy, Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
[34] T. Hastie, R. Tibshirani, and J. Friedman, “Unsupervised learning,” in The Elements of Statistical Learning: Data Mining, Inference, and Prediction, pp. 485–585, 2009.
[35] C. J. Watkins and P. Dayan, “Q-learning,” Machine Learning, vol. 8, pp. 279–292, 1992.
[36] R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour, “Policy gradient methods for reinforcement learning with function approximation,” Advances in Neural Information Processing Systems, vol. 12, 1999.
[37] R. S. Sutton, “Integrated architectures for learning, planning, and reacting based on approximating dynamic programming,” in Machine Learning Proceedings 1990. Elsevier, 1990, pp. 216–224.
[38] F.-Y. Wang, J. J. Zhang, X. Zheng, X. Wang, Y. Yuan, X. Dai, J. Zhang, and L. Yang, “Where does AlphaGo go: From Church–Turing thesis to AlphaGo thesis and beyond,” IEEE/CAA Journal of Automatica Sinica, vol. 3, no. 2, pp. 113–120, 2016.
[39] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. MIT Press, 2018.
[40] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton et al., “Mastering the game of Go without human knowledge,” Nature, vol. 550, no. 7676, pp. 354–359, 2017.
[41] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015.
[42] P. Domingos, “A few useful things to know about machine learning,” Communications of the ACM, vol. 55, no. 10, pp. 78–87, 2012.
