AI studies the laws of intelligence, like physics studies the laws of nature.
A more accurate definition of AI will bring a better understanding of its role & impact.
With the many definitions of AI around — who needs an extra one? I think we do: current definitions create confusion by framing Artificial Intelligence incorrectly. At the Artificial Intelligence Lab Brussels, the university research centre founded in 1983 where I work, we typically define AI as follows:
Artificial Intelligence is a scientific field that (1) studies the nature and mechanisms of intelligence, (2) formalises its findings using mathematics and (3) implements these findings using computer science.
We thus identify three main ingredients in AI:
- The first pillar is philosophical — attempting to understand what intelligence is and how it manifests itself.
- The second pillar is formalisation — describing intelligent behaviour with mathematical symbols: “explaining” it to a computer, you could say. The design of algorithms belongs in this category.
- The third and last pillar is engineering or computer science — how to implement the algorithms and build systems that behave intelligently.
Picture the three levels or pillars as baking a cake. At the philosophical level, you need to know what ingredients go well together and define a cake concept. At the formalisation phase, you describe the cake more rigorously using a recipe (an algorithm in computer science terms). Finally, making and baking it — the real test, let’s say — is the implementation part.
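To make the recipe analogy concrete, here is a minimal sketch of a “recipe” written as an algorithm in Python. All names and quantities are purely illustrative; the point is that an algorithm, like a recipe, is a precise, ordered set of steps that turns inputs into an output:

```python
# A recipe is an algorithm: precise, ordered steps that
# transform inputs (ingredients) into an output (a cake).
# Every name and number here is illustrative, not real.

def mix(flour_g, sugar_g, eggs):
    """Step 1: combine the raw ingredients into a batter."""
    return {"flour_g": flour_g, "sugar_g": sugar_g, "eggs": eggs}

def bake(batter, temperature_c, minutes):
    """Step 2: transform the batter under fixed conditions."""
    batter["baked_at_c"] = temperature_c
    batter["baked_minutes"] = minutes
    return batter

def bake_cake(flour_g, sugar_g, eggs):
    """The full recipe: mixing followed by baking."""
    batter = mix(flour_g, sugar_g, eggs)
    return bake(batter, temperature_c=180, minutes=35)

cake = bake_cake(500, 200, eggs=3)
```

The implementation pillar then corresponds to actually running this on real hardware — and, as with baking, that is where the recipe meets reality.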
The essential breakthroughs happen at the philosophical level. They often involve insights from many disciplines, ranging from sociology and biology to mathematics and neuroscience. AI borrows from but also contributes significantly to other scientific domains. AI is thus a truly interdisciplinary field.
Most academic research is done at the mathematical level. Researchers at conferences present new algorithms to solve a new kind of task or perform existing tasks more efficiently.
Of course, no real system can be built without innovations at the computer science level. Object-oriented programming, for example, was developed partly in the context of AI: researchers were looking for ways to represent reality with all its relations and complexity, using abstract data types.
Finally, computing hardware plays an equally important role, as it is the platform on which the algorithms run. Though Shannon described chess-playing software as early as 1950, it took until 1997 before computers had enough memory and computing power to beat the world champion.
The importance of recognizing these three levels cannot be overstated: a good understanding at all levels is necessary to assess AI’s impact and design systems that are trustworthy and robust. For example, some things may be easy for humans but very hard for computers (take commonsense reasoning, one of the major limiting factors in current AI systems). Knowledge at the implementation level is thus needed to understand the limits of AI systems.
However, this is not enough! A good understanding of the mathematics behind the algorithms that AI designers use, and the philosophy behind them, is crucial to understand the impact once these systems are deployed in real-world scenarios. Failure to do so has already led to many unwanted side effects, including bias.
This is one of the reasons why AI is a domain that is hard to grasp and difficult to implement: it requires its practitioners to possess a broad skill set, from abstract reasoning at the philosophical level, through mathematical skills, to hard coding skills. And, of course, knowledge of the domain one applies AI to and of the potential ethical & social impact!
Furthermore, AI is not a fixed, single concept or a technology. It is a toolbox of concepts, mental models, techniques, software and methodologies. It is sometimes called a “general-purpose technology”. A helpful metaphor to keep in mind is that of an electrical motor. It is the heart of many appliances like hairdryers, mixers, beard trimmers, drills, refrigerators or cars. It does not serve a single purpose and has no clue what function it is performing.
Algorithms play a similar role in computer systems, though they manipulate data structures rather than mechanical structures. They can thus be compared to the machines on a production line, adding value to the product.
Data structures are, in the end, just numbers in memory. The fact that these numbers can represent anything (gender, salary, a psychological trait, a pixel, an email) makes algorithms extremely powerful.
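A small sketch illustrates this indifference of algorithms to meaning. The data below is made up; the point is that the very same operation applies equally well to numbers we interpret as salaries and to numbers we interpret as pixel intensities:

```python
# An algorithm has no idea what its numbers "mean": the same
# doubling operation works on salaries and pixels alike.
def double(values):
    return [2 * v for v in values]

salaries_eur = [30000, 45000]   # interpreted as yearly salaries
pixel_row = [10, 200, 43]       # interpreted as pixel intensities

print(double(salaries_eur))  # [60000, 90000] — a 100% raise
print(double(pixel_row))     # [20, 400, 86] — a brighter image row
```

Only we, the interpreters, decide whether the output is a payroll decision or a brightened photograph.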
AI studies intelligence in principle, which means that we are not trying to reproduce human intelligence per se: rather than trying to build a replica of a bird, we are interested in understanding flying. This means writing down the principles of flight mathematically, and then building a flying machine.
We thus see that AI has two primary purposes:
- creating a better understanding of intelligence, using computer science and mathematics as research tools;
- building intelligent systems.
A small note on the word “intelligence”, which bothers quite a few people (including me, originally). One should not take this word too literally. To quote Edsger Dijkstra, one of the founding fathers of Computer Science:
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” — Edsger Dijkstra
Indeed, in the research community, “intelligence” typically refers to tasks that cannot yet be solved with traditional techniques, or that have properties we attribute to intelligent beings. It is a moving target: once a task is “cracked” by an algorithm (e.g. route planning or playing chess), it is typically no longer called AI (though it still is).
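Route planning is a good example of such a “cracked” task: it is solved by classic graph search, for instance Dijkstra’s shortest-path algorithm (due to the very Dijkstra quoted above). A minimal sketch on a toy road network — the cities and distances below are illustrative, not real routing data:

```python
import heapq

def shortest_path_cost(graph, start, goal):
    """Dijkstra's algorithm: cheapest travel cost from start to goal,
    or None if the goal is unreachable."""
    queue = [(0, start)]          # priority queue of (cost so far, city)
    best = {start: 0}             # cheapest known cost per city
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue              # stale queue entry, skip it
        for neighbour, weight in graph.get(node, []):
            new_cost = cost + weight
            if new_cost < best.get(neighbour, float("inf")):
                best[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return None

# Toy road network: city -> [(neighbour, distance_km)]
roads = {
    "Brussels": [("Ghent", 56), ("Antwerp", 45)],
    "Ghent": [("Bruges", 50)],
    "Antwerp": [("Bruges", 95)],
}

print(shortest_path_cost(roads, "Brussels", "Bruges"))  # 106, via Ghent
```

Once this was routine textbook material, few people still called it AI — yet it began as AI research into search.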
Perhaps the best way to understand the “I” in AI is as the quest shared by its researchers to come to a better understanding of the world: a homage to (human/animal/natural) intelligence, acknowledging the world’s complexity and mysteries.
In the scientific domain of AI, most interest goes not to issues like “whether robots will dominate humans”, but to more down-to-earth, yet far from mundane, questions like:
- How do humans grasp objects like a tomato?
(we use tactile feedback, draw on memories of the substance, and intuitively simulate physics such as gravity)
- What distance do we keep when talking to someone?
(it depends on the context, on our relationship with the person)
- “The city councilmen refused the demonstrators a permit because they feared violence.” To whom does the word “they” refer?
(such questions are called “Winograd schemas” and show that a fundamental understanding of natural language is still an enormous challenge)
By framing AI as a research field, it becomes evident that our work is far from finished and that many questions remain unanswered.