AI — what artificial intelligence really is, explained in detail through science

Marcin Lukowicz
10 min read · Apr 29, 2024

--

In this article, I provide an expert explanation of all things artificial intelligence, based on my academic literature review on the topic. By reading it you will learn what artificial intelligence is, how its definitions differ, when AI's history starts, how we categorise AI techniques and where they are currently applied.

What is Artificial Intelligence (AI)? [Definitions]

Artificial Intelligence doesn’t have a precise, universally accepted definition. This is because “intelligence” itself, or “intelligent human behaviour”, is not yet well defined and understood, and because the perception of AI changes as the general public gets used to its previous advances. Tesler’s theorem states that “AI is whatever hasn’t been done” (Chui et al., 2017).

The very term was coined in 1955 by John McCarthy, whose research assumed that all aspects of intelligence can be described so precisely that they can be simulated by machines (Wisskirchen et al., 2017).

As of today, AI is an interdisciplinary field involving researchers from subjects like computer science, neuroscience, mathematics, linguistics, psychology, and philosophy.

Across the literature, there are 3 approaches to its definition:

First, AI is a set of advanced technologies, sometimes called artificial systems, that enable machines, sometimes called intelligent agents, to perform highly complex tasks with the aim of solving problems encountered by humans in the most effective manner. Such a system can be made of both hardware and software, being a robot or a programme run on a single computer or a network of computers (Wisskirchen et al., 2017; Chui et al., 2017).

Second, AI is a branch of computer science investigating different problem areas, such as speech recognition, robotics, and image and video processing, by developing computer programs exhibiting intelligent behaviour (Paschek et al., 2017).

Third, AI is a computer-based analytical work process of machines that aims to replicate, or surpass in ability, tasks considered to require intelligence if performed by humans. Thus it is the ability of machines to exhibit human-like intelligence (intelligent behaviour), which could include visual perception, speech recognition, translation between languages, learning and adaptation, reasoning and planning, sensory understanding and interaction, creativity, autonomy, extracting knowledge, predictions and decision-making (Scherer, 2015; Ransbotham et al., 2017; Liao et al., 2015; Tien, 2017; Hall and Pesenti, 2017).

Furthermore, there are 3 contextual distinctions of AI (broad categories):

  1. Narrow or Weak or Modular AI: the most limited form of AI, currently (2018) achieved by a wide array of applications. Here the program merely simulates intelligence by investigating cognitive processes. Such AI has narrow, specific expertise in one domain, in which it performs its task brilliantly, at an expert or sometimes superhuman level, thanks to a combination of advanced algorithms, deep learning and other techniques. Although it is able to learn and improve its performance through practice, it is incapable of carrying out any other tasks outside its specified purpose or domain (Wisskirchen et al., 2017; Chui et al., 2017; Ayoub and Payne, 2016; Deloitte, 2016).
  2. General or Strong or Human-Level AI: a much more advanced form of AI, which many experts expect to be achieved by around 2040. Here the computer program is involved in self-learning intellectual processes, comparable to human-level intelligence and cognitive abilities, through which it understands and is able to optimise its own behaviour based on previous experiences and objectives. General AI would be able to perform any intellectual task that a human could do, arguably to the same standard. It would have broad expertise, being able to perform a wide range of tasks and apply knowledge in a much more flexible way than Narrow AI, to solve unfamiliar, more abstract and unbounded problems requiring an understanding of meaning and values, without being previously trained to do so (Deloitte, 2016; Ayoub and Payne, 2016; Kalanov, 2016). Such a form would require not only sophisticated hardware able to process circa 10 quadrillion calculations per second with only 20 watts of power, but also highly complex software of trillions of simulated “neural” connections, which doesn’t exist yet and poses the greatest challenge. One suggested solution is to build AI with the ability to code changes into itself, thus becoming smarter at an exponential rate (Deloitte, 2016).
  3. Superintelligence or Singularity: an AI at a level more advanced than human intelligence, possibly billions of times more capable than the smartest individual. It could be achieved in a short period of time after General AI’s arrival, due to its continuous learning and improvement process. As of now, it is mere speculation what the development of such an AI would result in. Human extinction, the secret to immortality, an end to global warming or the creation of a utopian world are just some of many propositions (Deloitte, 2016; Brundage, 2015).

AI History:

The current state of Artificial Intelligence is the sum of continuous thought-provoking discussions, versatile multidisciplinary research, and improvements through trial and error over the span of a few decades. The overall evolution of this discipline can be divided into 4 periods: Pre-Computer (before 1950), Beginnings (1950-1970s), Winters (1970s-2000s), and Reawakening (2000s onwards), which I cover in detail in my article “History of Artificial Intelligence (AI) — defining key milestones from 4th century B.C. to 2017”. From the public awareness point of view, Table 1 below summarises major contributions to the AI field.

Table 1 AI History Periods and Major Contributions

Looking at the past, we can say that the history of AI is, in a way, a history of knowledge representation and of game competitions against humans. The latter was a big driver of research progress, and the achievements of Deep Blue, Watson and AlphaGo represent the evolution of approaches toward machine problem solving, marking major milestones in AI’s progress.

AI Types, Categories, Technology Division

Since AI research is a very broad field involving multiple disciplines, it has grown into many different branches.

However, the consistent variable among all disciplines is the requirement for large amounts of data. Hence some AI techniques relate to processing information from the external world, like computer vision and language processing; others involve learning from information, like machine learning; and finally some mostly act on information, like robotics, autonomous vehicles and virtual agents (Chui et al., 2017).

Broadly speaking, there are 2 ways AI systems work: symbolic, knowledge-based, and non-symbolic, data-based (Guo and Wong, 2013). The advocates of the former strived to create an intelligent machine based on knowledge and rules, by developing knowledge-based systems, while the supporters of the latter, fascinated by how the human brain works, focused on developing a number of nature-inspired computational approaches to intelligent systems.

Symbolic or Classical artificial intelligence, also known as Good Old-Fashioned AI (GOFAI), is an approach in which problems, rules, logic and search are represented with high-level symbols.

Traditional AI models focused exclusively on symbolic reasoning and logic, using symbolic manipulation as explained by Newell and Simon (2007). Instead of computing numbers and letters, the computers would process long strings of symbols that were used to represent real-world concepts and then structured in a certain way, for example as lists or hierarchies, to aggregate information and present their interrelations (Sun, 1999). As such, Symbolic AI focuses on a narrow definition of intelligence, namely abstract reasoning, and works well on static, well-defined problems. One such problem was winning a match against chess grandmaster Garry Kasparov in 1997, when IBM’s Deep Blue used a “brute force” approach to analyse all possible game scenarios and pick the best moves (IBM, 2011).

Symbolic AI was the most prominent paradigm in the AI community until the late 1980s, and implementations of such symbolic reasoning, called expert systems, were the foundations for the very early AI systems with practical applications (see my article “AI History 4th century B.C. to 2017”).

The techniques under symbolic AI are called knowledge-based systems (KBS). They derive from the idea that both implicit and explicit knowledge is made from data and information, which can be organised for machines to recognise (Guo and Wong, 2013). Knowledge-based systems are tools for making applications which, based on their pre-programmed knowledge specific to a given problem domain, generate logical inferences and propose solutions (Hembry, 1990) to support human decision making. They are divided into 2 types: expert systems, built from a very accurate collection of human expert knowledge and aimed at simulating experts’ decision-making abilities, and case-based reasoning systems, programmed with knowledge about solutions to similar past problems (Guo and Wong, 2013).
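
The rule-and-inference idea behind expert systems can be sketched in a few lines of Python. The facts and rules below are invented for illustration (real expert systems held thousands of hand-crafted rules), but the forward-chaining loop is the genuine mechanism:

```python
# A minimal sketch of an expert system: pre-programmed rules are applied
# repeatedly (forward chaining) until no new conclusions can be inferred.

def forward_chain(facts, rules):
    """Apply every rule whose conditions hold until the fact set stops growing."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)  # a new logical inference
                changed = True
    return facts

# Hypothetical medical-style knowledge base (illustrative only):
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

inferred = forward_chain({"fever", "cough", "short_of_breath"}, rules)
print("refer_to_doctor" in inferred)  # → True
```

Note how the second rule only fires after the first has added "flu_suspected": the system chains inferences, which is exactly how such programs support human decision making.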

Although it is a much less common theme at present, some computers still process information in a symbolic way in the fields of Computer Vision and Robotics. The former focuses on developing machines’ ability to “see” the outside world and is closely related to image processing, which allows them to recognise the scenery and differentiate specific objects. The latter allows machines to understand their environment and move around it easily.

Non-Symbolic or Data-based or Connectionist or Computational artificial intelligence, in performing calculations, does not use symbolic manipulation but rather follows principles that have shown an ability to solve problems (Bhatia, 2017). Those principles draw on artificial neural networks, which focus on the ability to recognise patterns and perform generalisation. These networks use data representations and often undergo a process of training called “machine learning”. Non-symbolic AI originated as an attempt to mimic the complex network of interconnected neurons in the human brain. This approach was used in building IBM’s Watson, which defeated human champions in the Jeopardy game in 2011, as well as DeepMind’s AlphaGo, which won against Ke Jie, the number 1 player in Go (High, 2012).

There are 3 important AI technique types under the non-symbolic approach.

Fuzzy logic is a system of concepts used to handle imprecise and incomplete information by conducting approximate reasoning, incorporating degrees of truth in an attempt to simulate human reasoning and generate precise deductions (Guo and Wong, 2013).
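
A toy illustration in Python of how fuzzy logic replaces hard true/false with degrees of truth; the temperature thresholds are invented for the example:

```python
# Fuzzy logic sketch: "warm" is not a yes/no question but a degree of truth
# between 0 and 1, defined here by a triangular membership function.

def warm(temp_c):
    """0 below 15 °C, rising to fully warm (1.0) at 25 °C, 0 again above 35 °C."""
    if temp_c <= 15 or temp_c >= 35:
        return 0.0
    if temp_c <= 25:
        return (temp_c - 15) / 10
    return (35 - temp_c) / 10

# In classic (Zadeh) fuzzy logic, AND is modelled as the minimum of the degrees.
def fuzzy_and(a, b):
    return min(a, b)

print(warm(20))                  # → 0.5  ("somewhat warm")
print(warm(25))                  # → 1.0  (fully warm)
print(fuzzy_and(warm(20), 0.8))  # → 0.5  ("somewhat warm AND fairly humid")
```

Reasoning over such graded values is what lets a fuzzy system produce sensible conclusions from imprecise inputs like "a bit warm".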

Evolutionary computation describes a set of evolutionary optimisation techniques which model the process of evolution on a machine to continuously improve its solutions’ quality until an optimal one is achieved (Guo and Wong, 2013).
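
As a rough sketch of the idea, the Python snippet below evolves bit-strings toward an optimum on the classic "OneMax" toy problem (maximise the number of 1s); the population size and mutation rate are arbitrary illustrative choices:

```python
import random

# Evolutionary optimisation sketch: a population of candidate solutions is
# repeatedly selected (keep the fittest) and mutated, continuously improving
# solution quality, just like evolution in nature.
random.seed(0)

def fitness(bits):
    return sum(bits)  # solution quality = number of 1s

def mutate(bits, rate=0.05):
    # Flip each bit with a small probability.
    return [b ^ (random.random() < rate) for b in bits]

def evolve(pop_size=20, length=16, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # selection: rank by quality
        parents = pop[: pop_size // 2]        # keep the best half
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward the optimum of 16 over the generations
```

Because the best candidates are always carried over, quality never decreases, mirroring the "improve until an optimal solution is achieved" loop described above.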

Machine learning is a set of mathematical techniques that give the machine an ability to generate knowledge from its own experience, simulating humans’ cognitive strengths of pattern recognition and learning (Paschek et al., 2017; Agrawal et al., 20XX; Ransbothan, 2016; Kuang, 2017; ICAEW, 2017). It describes a set of algorithms which are developed based on empirical and training data with the aim of discovering patterns, connections between observations and outcomes, in large datasets, ideally without or with only minor human involvement. The focus is on the optimisation of results as well as the continuous improvement of the machine’s prediction capability throughout the process of learning. The implementation of those algorithms can take place with supervised learning, where experts train the algorithm by providing data that includes the answer to the problem; unsupervised learning, where the machine is given data and tasked with finding patterns in it; and lastly reinforcement learning, where the AI system achieves its goal through trial-and-error efforts (Tien, 2017). The outputs of such continuously self-improving machine learning models, which learn from their own experience, are highly sophisticated data-based predictions and suggestions.
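
The supervised variant can be sketched in a few lines of Python: the "expert-provided answers" are the y values, and the algorithm discovers the pattern connecting observations to outcomes (here, y ≈ 2x) purely by reducing its own prediction error. All numbers are illustrative:

```python
# Supervised learning sketch: training data pairs each input with the correct
# answer, and the model's parameter is repeatedly nudged to shrink its error,
# i.e. the machine improves its prediction capability through experience.

data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, expert-provided answer)
w = 0.0                                  # the model's single learnable parameter

for _ in range(200):                     # the learning loop
    for x, y in data:
        error = w * x - y                # how wrong is the current prediction?
        w -= 0.01 * error * x            # adjust w to reduce that error

print(round(w, 2))  # → 2.0: the model has discovered the pattern y = 2x
```

No one told the program the rule "multiply by 2"; it emerged from the data, which is exactly the point of machine learning.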

Even though machine learning encompasses a wide range of techniques (see Table 2), the ones that have received the most widespread media coverage are neural networks.

Table 2 Different Approaches to Machine Learning. Compiled from (Tien, 2017)

Artificial Neural Networks (NNs) are computational models which consist of a number of interconnected nodes called “neurons”, inspired by the brain’s biological network of neurons. Usually these systems are able to adapt to a given scenario and adjust their settings in line with the machine’s learning experience, to ultimately find patterns between inputs and associated outputs (Guo and Wong, 2013).
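
A minimal illustration, with invented values, of such a network at its smallest: a single artificial "neuron" learning the logical AND function by adjusting its connection weights whenever its prediction is wrong (the classic perceptron rule):

```python
# A single "neuron" sketch: weighted inputs plus a bias decide whether the
# neuron fires (outputs 1). Training adjusts the weights after every wrong
# prediction until inputs map correctly to outputs.

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND truth table
w = [0.0, 0.0]  # connection weights
b = 0.0         # bias (the neuron's firing threshold)

for _ in range(20):                                  # a few passes over the data
    for x, target in data:
        out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = target - out                           # adapt only when wrong
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        b += 0.1 * err

preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in data]
print(preds)  # → [0, 0, 0, 1]: the neuron has learned AND
```

Real networks connect thousands to billions of such units, but the adapt-on-error principle is the same.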

Out of the number of different neural networks widely used in AI applications (e.g. feedforward neural networks, backpropagation networks, recurrent neural networks, convolutional neural networks, long short-term memory), the form that plays an increasingly important role is that used in deep learning.

Deep learning is one type of machine learning that uses deep neural networks structured in several layers to achieve especially efficient learning sequences. It also finds correlations between inputs and outputs, but uses the hierarchical levels of its networks, learning sequentially from low-level elements to high-level elements, which reduces the need for many neural network operations. This way it can figure out how to identify given data completely on its own (Paschek et al., 2017).
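
To illustrate why layering matters, the sketch below hand-sets (rather than learns) the weights of a two-layer network computing XOR, a pattern a single neuron cannot represent. The hidden layer extracts low-level features (OR, AND) which the output layer combines into the higher-level answer, mirroring the hierarchy described above:

```python
# Layered-network sketch: XOR is impossible for one neuron, but easy once a
# hidden layer supplies intermediate features for the output neuron to combine.
# Weights are chosen by hand purely for illustration.

def neuron(inputs, weights, bias):
    """Fire (1) if the weighted sum of inputs exceeds the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def xor_net(x):
    h_or  = neuron(x, [1, 1], -0.5)              # low-level feature: x0 OR x1
    h_and = neuron(x, [1, 1], -1.5)              # low-level feature: x0 AND x1
    return neuron([h_or, h_and], [1, -1], -0.5)  # high level: OR but not AND

print([xor_net(x) for x in ([0, 0], [0, 1], [1, 0], [1, 1])])  # → [0, 1, 1, 0]
```

In real deep learning these intermediate features are not hand-crafted but discovered during training, layer by layer, which is what lets such networks identify data on their own.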

By far, most current AI applications fall under non-symbolic computation, and especially under machine learning.

In addition to machine vision and robotics, there is the speech recognition discipline, which gives programs the ability to communicate in different languages, simulating speaking and listening by processing audio, and also natural language processing, which allows artificial intelligence to read and understand the content and context of natural-language text data.

The presented typology of AI is summarised in Table 3 below.

Table 3 AI Approaches Categories

Note from the author

Please note that the contents of this article are excerpts from my scientific paper “To be or not to be — linking Artificial Intelligence with Strategic Decision Making — the Analyses of United Kingdom’s AI scene” originally published in April 2018.

How to reference this article:

Lukowicz, Marcin. (2024). AI — what artificial intelligence really is, explained in detail through science. [online] Available at: https://medium.com/@marcin.lukowicz/ai-what-artificial-intelligence-really-is-explained-in-detail-through-science-f2ebb32e7188.

--

Marcin Lukowicz

Business Strategy and Innovation Advisor, Biohacker, DeepTech enthusiast.