Beyond Intelligence

Artificial Intelligence and the crises of purpose, inequality, and climate change

Wissam Kahi
The Quantastic Journal
80 min read · Dec 10, 2023


Introduction: Anxiety and Euphoria

Our current fascination with AI is more about our desire to explore our humanity than about AI itself. Nothing in recent history has sparked as much admiration and euphoria, and at the same time as much mistrust and anxiety, as Artificial Intelligence has in the last few months. However, these contradictory reactions are not surprising; they are the contemporary manifestation of the same response towards science and technology that started with the Scientific Revolution, intensified in the 21st century, and reached its pinnacle with the recent advent of AI. After all, Galileo himself, who pioneered the scientific approach as a vastly superior way of apprehending our world and reaching truths, was initially met with mistrust by the Church, which foresaw in him the early signs of its waning power. Science was about to start its golden age, effectively shifting power from Religion and Kings to Reason and Scientists.

In the 19th century, we see a similar mixed reaction towards Technology, the brainchild of Science, during the Industrial Revolution: Alexis de Tocqueville lamented the intellectual degradation begotten by industrialization, and the United Kingdom saw the rise of the Luddites who opposed the rise of machinery in the textile industry by organizing the destruction of the machines.

Luddites destroying cotton-spinning machines as re-imagined by Midjourney’s AI

That, of course, did not stop the progress. Science and Technology, in fact, gradually penetrated various domains that were historically out of their reach: the realm of the “still non-scientific” (humanities, politics, literature, etc.) dangerously narrowed, and “Science” started requiring specific levels of expertise for domain knowledge, to such an extent that it became more appropriate to distinguish between “Science” — as in the Scientific approach — and “sciences” — the various disciplines. In that context, “enlightened” spirits had to continuously shift their trust from the “non-scientific” — whether it’s God, figures of authority, or non-scientifically based traditions — to scientists and “domain experts.”

At the origin of people’s mistrust of science is the realization that how their senses perceive reality is often misleading — e.g., we “sense” that the sun revolves around the earth. Yet Galileo and Copernicus used “science” (observation, in Galileo’s case with a telescope, and mathematical reasoning) to prove otherwise. Nothing expresses this more elegantly than Galileo’s expression of admiration for Copernicus and Aristarchus, who allowed “reason to commit such a rape on their senses, as in despite thereof to make herself mistress of their credulity.”

Galileo’s Inquisition re-imagined by Midjourney’s AI.

The history of science since then has been characterized by an increasing distance between what our common sense and our intuitions perceive and scientific reality as evidenced by mathematical equations. Nothing epitomizes this more than quantum mechanics, which is entirely built on mathematical equations that still, to this day, are somewhat mysterious and counterintuitive to the same scientists who use them. This growing disconnect between reality, as perceived by the senses and intuition, and reality, as apprehended by science, has increased and aggravated mistrust and anxiety.

But I believe Artificial Intelligence is taking this to a different level. Unlike the foundation of the scientific approach, which is essentially rooted in experimental observation and mathematics, Artificial Intelligence is increasingly rooted in Neural Networks and vast amounts of data. The mathematics that underpins modern science may be highly complex, but it is still within the realm of understanding of well-trained human minds. The same mathematics and algorithmic logic also underpin all deterministic symbolic programming languages. But neural networks, particularly those underlying the current Large Language Models that are all the hype today, are built on different frameworks that rely on vast amounts of data and high-speed learning loops. This combination helps the network of neurons find the right “plasticity” to best predict what should come next¹. While it’s true that statistical models and mathematical tricks to optimize performance are at the origin of the learning algorithms, the approach itself is not deterministic. A funny image that comes to mind to describe the entire operation is that of a rather large group of monkeys that, given enough time, will eventually produce Shakespeare’s works through trial and error rather than through the elegance and simplicity of beautiful mathematical equations. If this image seems odd and far-fetched to you, perhaps it will become clearer as we dive further into the mechanics of Neural Networks and how they learn.

An imaginary depiction of a large group of monkeys trying to write Shakespeare’s Hamlet through trial and error

So, we have moved from the abstract yet predictive/understandable language of mathematical equations and algorithms to a more mysterious black box of neural networks². We are shifting from the glorification of “reason” and its supremacy over the senses that started with Galileo to the new superiority of the Neural Network black box over reason itself.

This “black box” aspect sheds some light on most people’s reaction to AI: a mix of amazement at what AI can achieve and anxiety as they foresee their human intelligence dwarfed by what seems a “black box.”

Should we then be worried or excited? Opinions from “experts” range from the euphoric claim that this is the most wonderful thing ever to happen to humankind to the prophets of doom who are already tolling the funeral bell of human civilization. I believe the “existential risks” — i.e., of total or near-total human extinction — are exaggerated for the foreseeable future — unless we are unaware of some vastly superior technology that has been kept hidden from the public. Most importantly, in the near term, we should focus on different AI threats and opportunities. The existential risks that the media likes to focus on may be a distraction from these shorter-term and more real risks, namely the impact of AI on the critical crises of our time: inequality, individual and collective sense of purpose, morality, and climate change.

This article attempts to answer these questions with the following sections and chapters:

Section I: Artificial Intelligence and Human Intelligence

This first section starts with essential definitions of what is meant by Intelligence and Artificial Intelligence

  1. Neural Networks: This is a short primer on Neural Networks since they are foundational to Artificial Intelligence. It will only cover the high-level concepts without diving into details, but it is a bit technical, so feel free to skip it if you are not interested.
  2. Intelligence: We will then tackle the concept of Intelligence, at least to have a common language when we mention this terminology

Section II: The Artificial Intelligence revolution in its historical context

In this section, we will position the AI revolution in its historical context and contrast it with the Industrial and Digital revolutions

Section III: Artificial Intelligence and the critical crises of our time

In this section, we will explore how AI can aggravate or help tackle the critical crises of our time

  1. The crisis of meaning
  2. The crisis of ethics
  3. The crisis of inequality
  4. The environmental crisis

Footnotes:

[1] More on that in the following section

[2] Theoretically, it is not exactly a black box since we understand the underlying mathematics; however, pragmatically, it is, given the vast number of parameters — this has created a whole discipline focused on explainability in AI

Section I: Artificial Intelligence and Human Intelligence

Chapter 1: Artificial Intelligence: How Neural Networks Learn

I will provide some basic concepts of Neural Networks here, along with reading links for those who want to go further. While this section will not dive into the underlying mathematics, some passages are a bit technical, so feel free to skip this section entirely if this is not of interest.

Three fundamental concepts

Neural Networks are what enable machines to learn—in particular, to do “Deep Learning.”

While the execution is quite complex, the link between Intelligence and Neural Networks relies on three elementary concepts:

  1. All information can be represented numerically: this premise is not difficult to accept in the digital age. We know, for example, that text can be represented in hexadecimal digits or that any image can be viewed as a series of pixels defined by their position on the screen (i.e., (x, y) coordinates) and a number representing the color code.
  2. Everything is a function: A function is simply an operation that defines the relationship between a specific input x and a specific output f(x). For example, an “image recognition” function would accept as input a set of pixels (x, y coordinates and z = color code for each pixel) showing the picture of a chair and would output the word “chair” = f(x, y, z). An English-to-French translator function would accept “chair” as an input and spit out “chaise” as output, etc. Now, while a specific perfect function theoretically exists within the infinite world of all possible functions, it can be very difficult (or quasi-impossible) to find it. Enter the Universal Approximation Theorem!
  3. Neural Networks can approximate ANY function: What makes Neural Networks very special is the Universal Approximation Theorem, a mathematical theorem from 1989 that proves that no matter how complex a function is, Neural Networks can approximate it to any desired degree of accuracy! This is an absolutely fundamental and remarkable finding as you realize how simple the basics of Neural Networks are. So, what are these basics?

The Basics of Neural Networks

Neural Networks are a particular type of function inspired by the architecture of human brains, in which interconnected neurons communicate with each other by firing, i.e., emitting an electrical signal. A schematic for a Neural Network looks like the one below, where each layer contains a set of nodes illustrating the neurons.

To use our example above, the image of the “chair” can be represented by its million or so pixels, and these pixels would be the values of the million nodes of the input layer (note that the schematic below only shows two nodes in the input layer, but you can easily imagine the same with 1 million nodes).

The hidden layers consist of a set of nodes structured in layers. The schematic below shows two layers, but there can be many, many more in practice—this is the origin of the word “Deep” in “Deep Learning.” The value of each node will be a function of the values of the nodes in the layers preceding it and will impact the value of the nodes in the layers following it.

With the correct functions, the output layer will spit out the word “chair” when the input image pixels represent a chair and the word “table” when the input image pixels represent a table.

Neural Network Schematic — Source: Author

To understand the basic principles of a Neural Network, we can illustrate with a basic schematic representing only a few nodes.

Illustration for one Neuron — Source: Author

Each connection between nodes has a weight wᵢ representing its strength, and each node has a bias b. The value of a node is obtained by applying an activation function σ to the weighted sum of the nodes in the previous layer plus the bias: σ(Σᵢ wᵢaᵢ + b). The activation function can be any nonlinear function — this non-linearity is critical — with popular choices including the sigmoid function and step functions (see below for the sigmoid function).

Sigmoid Function — Source: Wikipedia

Of course, simple Neural Networks with few nodes cannot do much, but as the number of nodes and corresponding weights and biases becomes very large (imagine billions of parameters or more), these Neural Networks become amazingly capable of approximating complex functions.
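To make these mechanics concrete, here is a minimal sketch of a forward pass in Python with NumPy. The layer sizes and values are invented for illustration; each node simply applies σ to a weighted sum of the previous layer plus a bias, exactly as described above.

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def layer(a, W, b):
    # One layer: each output node is sigmoid(weighted sum of inputs + bias)
    return sigmoid(W @ a + b)

# Toy network: 3 input nodes -> 4 hidden nodes -> 2 output nodes
rng = np.random.default_rng(0)
a0 = rng.random(3)                                   # input layer (e.g., three "pixels")
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)

hidden = layer(a0, W1, b1)
output = layer(hidden, W2, b2)
print(output)   # two numbers between 0 and 1, e.g., scores for "chair" vs. "table"
```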

Training the model

So how do Neural Networks “learn”? This is done by calibrating the various weights and biases to get as close as possible to the real function through a huge number of iterations called “training.” The training consists of feeding the network a large set of inputs and corresponding outputs — e.g., a large number of images of various objects as inputs and their corresponding English labels (chair, table, dog, etc.) as outputs. Let’s call f(x) the real — unknown — function we are trying to approximate and gᵢ(x) the approximation that the Neural Network model produces at iteration i. The objective of the network is to minimize the difference between f(x) and gᵢ(x), also called the “Loss.” It does so by adjusting the weights and biases of each node at each iteration. You can intuitively grasp that with a sufficiently large number of parameters (i.e., nodes) and a reasonably large number of iterations, we can get very close to the real function. For example, at iteration 1 million, the Neural Network may still confuse a chair with a table, but at iteration 2 million, that distinction will become more precise, etc. This is essentially what creates the plasticity of the Neural Network that approximates the real function. Note that the Neural Networks capable of the tasks we are familiar with today (image recognition or generation, NLP, etc.) typically have hundreds of billions or trillions of parameters. One cannot overstate how important the evolution of computing power was to the development of AI.
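As a hedged illustration of what “adjusting the weights and biases to minimize the Loss” means in practice, here is a deliberately tiny gradient-descent loop in plain NumPy. It fits a single weight and bias to invented data generated by a known function f(x) = 3x + 1; real training works on the same principle but over billions of parameters and far richer data.

```python
import numpy as np

# Invented training data: the "real" function is f(x) = 3x + 1
rng = np.random.default_rng(1)
x = rng.random(100)
y = 3 * x + 1

w, b = 0.0, 0.0     # start from arbitrary parameter values
lr = 0.1            # learning rate: how big a nudge we apply each iteration

for step in range(2000):
    g = w * x + b                     # g(x): the model's current approximation of f(x)
    loss = np.mean((y - g) ** 2)      # the "Loss": how far g is from f on the training data
    dw = np.mean(-2 * (y - g) * x)    # gradient of the loss with respect to w
    db = np.mean(-2 * (y - g))        # gradient of the loss with respect to b
    w -= lr * dw                      # nudge the parameters in the direction
    b -= lr * db                      # that reduces the loss

print(round(w, 2), round(b, 2))       # approaches 3.0 and 1.0
```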

A depiction of a Neural Network with various weights on select nodes as imagined by MidJourney

Enter Transformers and the concept of Self-Attention

The story would not be complete, however, without the mention of Transformers. And no! We are not talking about these transformers, by the way!

The wrong Transformer: “I have no clue what Self Attention is…”

I am referring to a critical 2017 paper by Google researchers (“Attention Is All You Need”) that introduced the Transformer terminology and the concept of self-attention. This approach took the AI community by storm (Transformer is the “T” in ChatGPT) and put Neural Networks on steroids.

Explaining the concept of Transformers would be beyond our scope here, but we can list their key concepts:

  1. Positional awareness: They recognize (i.e., “encode”) the position of a word, therefore differentiating between “I was so happy that I passed the exam” and “I was happy so I passed the exam.”
  2. Self-attention — contextual awareness: They understand relationships between words, even if they are far apart. This is particularly important for understanding context. For example, it would allow the Neural Network to successfully complete the phrase “He was born in France, and therefore he speaks [……..].”
  3. Parallel processing: This is a consequence of the positional encoding mentioned above, as it allows parallel processing of the input data, eliminating the need for heavy sequential processing, significantly speeding up processing times, and leveraging the power of modern GPUs.

These features make Transformers extremely powerful, not just for Large Language Models but for any Neural Network (e.g., Image recognition, Image Generation, etc.).
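For readers curious about what “self-attention” actually computes, below is a bare-bones NumPy sketch of scaled dot-product attention over toy vectors standing in for word embeddings. Real Transformers add learned query/key/value projections, multiple attention heads, and positional encodings on top of this core operation.

```python
import numpy as np

def self_attention(X):
    # X has one row per token (word) and one column per embedding dimension.
    # Each token "attends" to every token: scores measure pairwise similarity,
    # a softmax turns scores into weights, and the output mixes tokens accordingly.
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise similarities, scaled
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over each row
    return weights @ X                               # context-aware token vectors

tokens = np.random.default_rng(2).standard_normal((5, 4))   # 5 toy tokens, 4 dimensions
print(self_attention(tokens).shape)   # (5, 4): same shape, but each vector now carries context
```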

Transformers have contextual awareness: related words are “close” in the multidimensional space representing them (the above represents a 2-dimensional space for ease of illustration) — Source: Author

Other types of Artificial Intelligence(s)

We have focused so far on neural networks because they best exhibit the type of intelligence historically reserved for humans, namely image and speech recognition, image generation, self-driving, and natural language processing. There are, however, other “more traditional” types of Artificial Intelligence that are not based on Neural Networks: these include Rule-Based systems (typically used in medical diagnosis where the rules are well defined), Decision Trees (typically used in credit scoring and medical decision making), Bayesian Networks (used in risk analysis and medical diagnosis), and others.

The key advantages of these approaches are:

  1. Interpretability: it’s easier to explain why the decision was made vs. the “black box” effect of Neural Networks
  2. Data and computational efficiency: They require less data than neural networks and are also less demanding in computing power.

However, as mentioned, they are less powerful than Neural Networks.
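To make the interpretability contrast tangible, here is a purely hypothetical rule-based scorer in the spirit of these traditional approaches. The rules and thresholds are invented for illustration and are not taken from any real credit-scoring system; the point is that every decision can be traced back to an explicit, human-readable rule.

```python
def credit_decision(income, existing_debt, missed_payments):
    """Toy rule-based credit decision: returns a verdict and the rule that fired."""
    if missed_payments > 2:
        return "decline", "more than 2 missed payments"
    if existing_debt > 0.5 * income:
        return "decline", "debt exceeds half of income"
    if income > 50_000:
        return "approve", "income above threshold with a clean history"
    return "manual review", "no rule gave a clear answer"

print(credit_decision(income=60_000, existing_debt=10_000, missed_payments=0))
# ('approve', 'income above threshold with a clean history')
```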

Further reading and viewing

The above is a grossly simplified explanation of Neural Networks and just a preview of Transformers’ features, but if you want to learn more, below are a few relatively non-technical resources that I found very useful:

  1. A visual proof that Neural Networks can compute any function: A wonderful interactive article by Michael Nielsen that helps you understand the intuition behind the Universal Approximation Theorem
  2. A YouTube video showing Neural Networks learning: a fascinating video showing how Neural Networks can learn to approximate a large variety of functions
  3. Batool Arhamna Haider's visual explanations of the concept of Transformers in the Visual Guide to Transformer Neural Networks
  4. MIT’s course on Neural Networks — to go even further

Chapter 2: Deconstructing Human Intelligence

So should we call these neural networks “Intelligent” as they can interpret images, write poetry, converse with us, or sometimes diagnose medical conditions better than doctors?

Human Intelligence vs. Machine Intelligence as interpreted by Midjourney’s AI

When I was a kid in the mid-eighties, my father took me with him once to his office, where they processed viewership data for local TV and radio stations. I was so fascinated by these fancy machines that my father promised to buy me a home PC. At the time, the Apple II was all the rage, but it was really expensive, so we settled on ordering one of the clones: a Commodore. I remember talking with my friends about this new PC before I received it, and we all started speculating about how wonderful computers were and what they could do. I remember looking at the house across the street and saying: “For instance, I bet you my PC can easily tell us who this house belongs to.” My friend said, “I bet you we can ask your PC about things we learn at school, like what the longest river in the world is, and it will give us an answer.” “I bet you it can even tell us what time the ice cream shop closes today if we ask it.” These capabilities, we marveled, would make the computer a super-intelligent being, an all-knowing entity that would relieve us from ever studying anything because we could ask it anything!

When I received my PC a few days later, I was quite disappointed that it was nowhere near answering any of these questions. In fact, I could barely write anything in it except some basic and limited commands. My PC was quite dumb, it turned out, and I avoided the topic altogether with my friends as I felt quite embarrassed … Little did I know that a couple of decades later, all of this would not only become reality but would be commonplace. But it didn’t happen as we imagined it: the PC was not an all-knowing entity; it was simply connected to the internet, and the internet was connected to billions of public databases and entries that contained all these answers. More interestingly, nobody would have qualified this marvel as a “super-intelligent” machine as we had imagined as kids!

A Definition of Machine Intelligence by Shane Legg and Marcus Hutter

“Intelligence” is a peculiar concept, and defining it has kept many experts busy for quite a while, with the definitions they provide rarely converging. For example, an early definition of intelligence, offered by V.A.C. Henmon in 1921, was “the capacity for knowledge, and knowledge possessed.” This definition fascinated me because it mirrors my childhood assumption that a machine would be intelligent if it were all-knowing. It is clearly laughable today because access to knowledge, data, and information is ubiquitous and hardly grounds for calling the machine accessing the data “intelligent.”

Intelligence is not a concept that is easy to define. Psychologists have been struggling with the issue of defining intelligence since time immemorial, and their definitions rarely converge. What is at stake is non-trivial because intelligence is the underpinning of our meritocratic system; it is how we value humans and determine their future success — e.g., ranking students or employees, deciding college admissions, etc. — and, of course, more recently, how we value Artificial Intelligence vs. humans — i.e., is ChatGPT smarter than a human?

Shane Legg — who later co-founded DeepMind — and Marcus Hutter wrote a paper in 2007 reviewing the various definitions of intelligence. Their paper ranked and rated the various intelligence tests, as can be seen in the chart below.

An assessment of various Intelligence Tests — Source: Universal Intelligence: A Definition of Machine Intelligence, 2007 — Shane Legg and Marcus Hutter

Interestingly, the majority, if not all, of the definitions are essentially “tests,” implying that Intelligence can be measured on a scale. Of course, the most famous test is the IQ test, which was originally designed by French psychologist Alfred Binet in 1905 to determine the “mental” age of children and assess their readiness to attend school.

It is worth pausing in particular on the Turing test, as it is strongly linked to the notion of Machine Intelligence, at least in the popular consciousness. The popular version of the Turing Test is that a machine can be considered intelligent if a person having a conversation with it can be fooled into thinking it’s human. However, this version is partially apocryphal: the question that Alan Turing proposed to answer was “Can machines think?” and, to avoid addressing the ambiguity of defining “think,” he reformulated the question as “Can machines do what we (as thinking entities) can do?” Turing got the inspiration for his test from a popular party game at the time called the Imitation Game, in which a man (A) and a woman (B) go into separate rooms, and an interrogator (C) tries to tell them apart by writing a series of questions and reading the typewritten answers sent back. In this game, both the man and the woman aim to convince the interrogator that they are the other. Turing then describes his spin on the game:

We now ask the question, “What will happen when a machine takes the part of (A) in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”

The Turing Test re-imagined by Midjourney’s AI

Many have criticized Turing’s definition of Machine Intelligence, but to be fair, he did not exactly reference intelligence. One critique that is worth diving into is John Searle’s Chinese Room argument. His argument goes as follows (in Searle’s own words):

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have

John Searle’s Chinese Room argument re-imagined by Midjourney’s AI

What I find fascinating in the debate is that John Searle — a philosopher, worth noting — focuses on the concept of “understanding” (the how), whereas Turing — a mathematician, worth noting — focuses on the output (the what). This debate is perhaps one of the most widely discussed philosophical arguments in cognitive science and one that will still ignite controversies for a long time. We can already see intuitively from our previous description of Neural Networks that their inner workings do operate like a Chinese Room: even though it’s not precisely a pre-determined set of instructions, the weights are effectively a “learned” set of instructions, whose settings may be obscure even to their human architects.

Going back to Hutter and Legg’s paper, it is clear from their approach that their focus is rather on the output vs. the concept of understanding. Indeed, after their critique of the various definitions, they provided an enlightening attempt at synthesis with the following definition: “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.” In his book Life 3.0, Max Tegmark proposed a similar, albeit simpler, definition: “Intelligence is simply the ability to accomplish complex goals.” Three things worth noting here:

  1. The first definition mentions an “agent,” which is a peculiar choice of words given that the existence of an “agent” implies “agency.” Does that imply some level of consciousness or free will? We will re-examine this later in the section below.
  2. Both definitions emphasize “goals” but remain silent on who is establishing these goals. We will revisit this in the concept of “intentionality” in the section below.
  3. The first definition highlights “universality” (… wide range of environments) — this is particularly important as we start talking about “General Intelligence.”

How humans think and how computers compute

I will not attempt to add an Intelligence definition to the hundreds already provided, but given the understanding vs. output tension, it would be interesting to analyze some illustrative categories of intellectual activities and contrast how humans and machines apprehend them. Let us, for instance, focus on the following categories: computation, deductive reasoning, inductive reasoning, creative thinking, and intentionality.

Computation

Computation is synonymous with mathematical calculation; it includes mathematical equations and algorithms. The key requirement for computation is speed, which computers obviously excel at. In fact, computers are so vastly superior to humans in computation that their very name, “computers,” derives from it. It is hard to equate computation as done by computers with any kind of intelligence, and yet, interestingly, we do consider people with superior mental calculation abilities to be some type of genius, but we would never apply the same qualification to calculators.

Deductive reasoning

Deductive reasoning is the application of generic rules to a set of premises to draw conclusions. It is a key foundation of logic that enables the discovery of specific truths based on general rules, as opposed to the reverse process in inductive reasoning, which we will see later. Deductive reasoning is another area where computing excels, epitomized by the famous “IF-THEN” logical statements. The traditional approaches still require human intervention to define the premises or general rules that allow the algorithm to draw conclusions in specific cases.
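As a minimal illustration of deduction in code (a classic syllogism, with the “knowledge base” invented for the example): the general rule and the premises are supplied by a human, and the machine merely applies the rule to the specific case.

```python
# General rule supplied by a human: all humans are mortal.
# Premises supplied by a human: Socrates and Hypatia are humans.
humans = {"Socrates", "Hypatia"}

def is_mortal(x):
    # IF x is human THEN x is mortal: a specific truth deduced from a general rule
    return x in humans

print(is_mortal("Socrates"))   # True: Socrates is human, and all humans are mortal
```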

Inductive reasoning

Inductive reasoning allows us to draw general rules from specific observations. It is key to the development of scientific theories: by relying on all possible observations at their disposal, scientists infer scientific theories that are considered true until proven false. Inductive reasoning is also the type of reasoning used to make causal inferences: “If the street is wet when I wake up, then it must have rained during the night.” Are machines and Artificial Intelligence, in particular, capable of Inductive Reasoning? To some extent, yes, since machine learning essentially involves learning from a wide range of data to formulate general rules. It then applies these general rules to specific cases using deductive approaches and makes corrections as appropriate. It is this induction/deduction learning loop that is at the core of Machine Learning.

The learning loop illustrated — the agent acts with the environment and receives observation and reward signals. Source: Universal Intelligence: A Definition of Machine Intelligence — Shane Legg and Marcus Hutter

However, inductive reasoning presents at least three distinct challenges:

  1. Black Swan: The first challenge is the classic problem of induction associated with David Hume, often summarized as: “No amount of observations of white swans can allow the inference that all swans are white, but the observation of a single black swan is sufficient to refute that conclusion.” The implication is that these types of AI models are likely ill-equipped when they face a Black Swan that was not in their training data.
  2. Occam’s Razor: The second challenge is the application of Occam’s razor, i.e., selecting the simplest hypothesis out of all the hypotheses consistent with the data. Shane Legg and Marcus Hutter provide an interesting illustration in their paper “Universal Intelligence: A Definition of Machine Intelligence”: What is the next number in the sequence 2, 4, 6, 8? To an “intelligent” person, the pattern is clear: the numbers increase by 2, so the answer is 10. However, this is not the only answer. The polynomial 2k⁴ − 20k³ + 70k² − 98k + 48 is also consistent with the data (see the quick check after this list), but nobody in their right mind would select that as an answer! Given competing hypotheses, we tend to favor the simplest and most elegant one, even though there is no fundamental logical reason for that. This is something we need to teach the machine (and luckily, the two researchers do provide an approach to help machines favor simpler solutions).
  3. Causation: Causation is the third challenge presented in inductive reasoning. Indeed, at a high level, causation is akin to inductive reasoning because it relies on observations to draw specific conclusions (i.e., the cause of the symptoms). However, it’s such an important topic that it deserves its own paragraph, detailed below.
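A quick check of the Legg and Hutter example above (a few lines of Python, just to verify the arithmetic): the “complicated” polynomial really does reproduce 2, 4, 6, 8 for k = 1 to 4, yet its fifth term is nowhere near the “obvious” answer of 10.

```python
def p(k):
    # The alternative hypothesis from Legg and Hutter's sequence example
    return 2 * k**4 - 20 * k**3 + 70 * k**2 - 98 * k + 48

print([p(k) for k in range(1, 6)])   # [2, 4, 6, 8, 58]: fits the data, then diverges from 10
```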

Causal reasoning

In his latest book, “The Book of Why,” Judea Pearl introduced the Ladder of Causation to illustrate the three distinct levels of interpreting causation: Seeing, Doing, and Imagining. These three levels require increasing complexity in intellectual conceptualization.

  1. Seeing relies on observing patterns: A seems to be typically followed by B. AI is most comfortable at this level because it excels at analyzing large data sets and surfacing correlations.
  2. Doing requires intervening to change reality and performing tests to observe the relationship between action and consequence. The best illustration is the child at play who learns causation by messing around with objects. This is fundamentally different from Seeing because the answer may not be “in the data” yet. An AI agent would need to either have the ability (and intention) to perform these tests and observe the consequences or have a causal model to interpret the past data.
  3. Imagining is the ability to deal with counterfactuals, the “what would have happened if …,” and is the most difficult conceptually. In the example above, “If the street is wet when I wake up, then it must have rained during the night,” a human leverages their understanding of the physical concept of water making things wet. In contrast, current machine learning techniques lean instead on pattern recognition: “I observe that every time it rains at night, the streets are wet in the morning.” However, this pattern alone is insufficient here, because the question “Would the street have been wet if it hadn’t rained at night?” is more complex. It may well have been, because neighbors have been watering their plants or a fire hydrant has been opened. Similarly, the statement “if the rooster crows, then the sun will rise” is an example of a bad causal inference based on data. Indeed, if the “training data” always has the sun rising after the rooster crowing, the AI may incorrectly conclude a causal relationship. Only an understanding of the physical concepts and a correct model of reality can allow a differentiation between the rainfall making the street wet and the rooster’s crow merely preceding the sunrise. This type of counterfactual thinking is also what drives great thinkers like Einstein to develop thought experiments by asking questions like “What would have happened if I were traveling at the speed of light and light were stationary relative to me …” This type of conceptual understanding, which requires developing a model of reality and counterfactual thinking, will be quite a feat for Artificial Intelligence to achieve. It is also a foundation of Creative Thinking, which we will explore below.
Judea Pearl’s Ladder of Causation — Source: The Book of Why by Judea Pearl, 2019

Creative Thinking

Creative Thinking is the process at the source of artistic, literary, or scientific creation. It involves, in particular, the ability to imagine and create what does not already exist: creative thinking relies on the power of imagination. Artists — the true ones — use this process to create their artworks, novels, poetry, etc. But scientists also use it heavily to imagine new solutions. Einstein, as illustrated above, extensively used the power of his imagination to come up with thought experiments (like riding on trains going at the speed of light) and then used deductive thinking to elaborate hypotheses about potential solutions. The genius, however, is really in imagining these counterfactual thought experiments. As Noam Chomsky argued in a recent article in The NY Times, it is important to distinguish this type of thinking from what current Large Language Models (LLMs) or generative AI tools like MidJourney are capable of doing: these tools leverage massive amounts of data and knowledge to predict the most probable sequence of words or imagery we are expecting. This is the opposite of creative thinking, the success of which is judged precisely by the ability to come up with the most improbable scientific theory or artistic creation.

Intentionality / Volition and goals

There is finally one big elephant in the room we need to touch upon. Humans have the distinctive faculty of “volition” or “intentionality,” the power of wanting something, of desiring a specific future state of affairs and working towards it. This is linked to the debate that has been raging for centuries in philosophical and religious circles between free will and determinism. Are humans really free to choose what they want, or is their behavior pre-determined by external forces they can’t control? This debate is still very alive today. More specifically, in our case, can we speak of the concept of free will for artificial intelligence? Does free will require the emergence of consciousness as a pre-requisite (a much bigger topic that merits its own article)? These questions are keeping thousands of scientists and philosophers busy today. Perhaps it is premature to talk about free will for AI, but we can talk about goals more easily. The goals of an AI machine are — and should be — determined by us humans and, therefore, appear clear to us. However, this will not prevent the AI from developing intermediate subgoals that help it achieve the ultimate goal we set for it. And these subgoals can appear quite mysterious to us. For example, Max Tegmark argued in his book Life 3.0 that the subgoal of “survival” would be implicit in any ultimate goal we set, since the AI will quickly realize that surviving is a necessary condition for achieving the ultimate goal. Nick Bostrom illustrated this with the now-famous paperclip thought experiment: if we task an AI with producing more and more paperclips and it runs out of easy access to raw material, it may end up killing people who have prosthetic legs, for instance, to use the material for paperclips. While this colorful example seems far-fetched, the point is that the AI may appear to be behaving in mysterious ways that seem to manifest mysterious “wants,” but these may in reality just be subgoals that the system has found optimal on its path to the ultimate goal.

An evil Artificial Intelligence manufacturing Paper Clips — note it does not have to be evil to kill us :)

Hearts and minds

It is also important to note that human motivation does not always come from the act of “reasoning” but also from “feeling.” For example, we sometimes do irrational things out of love, empathy, sacrifice, anger, hate, or revenge. Blaise Pascal famously said, “Le coeur a ses raisons que la raison ne connaît point” (The heart has its reasons, of which reason knows nothing), reminding us that we don’t only think with our brains but also with our hearts.

Of course, passion vs. reason is the subject of many literary works, starting with Greek Tragedy, but it has also been analyzed by neuroscience more recently. Using MRI scans, scientists have been able to observe how the emotional parts of our brains very often supersede the rational ones when making decisions, even when we believe we are being completely rational. Gardiner Morse explored this in his HBR article “Decisions and Desire.” The article details two interesting experiments.

  1. The first experiment is the ultimatum game, an economics experiment pitting two participants against each other. One participant is given $10 and can decide to share any amount with the recipient and keep the rest. If the recipient accepts the amount, then they both get to keep the money. If the recipient rejects it, then they both lose. As you may have guessed, a rational recipient should accept any amount, however small, because some money is better than none. But of course, human emotions intervene: recipients often reject the offer if it is too small because … well, not because of any rational decision, but because of feelings of injustice, anger, or revenge at a stingy giver (who will also lose her share). Alan Sanfey, a cognitive neuroscientist at the University of Arizona, used fMRI scans to monitor the participants’ brains while playing this game. What he observed was essentially a struggle between the parts of the brain involved in negative emotions and the parts of the brain involved in reason; in other words, he observed the passion vs. reason struggle in action and witnessed when the passions won. A goal-oriented AI would clearly have behaved purely rationally: on the receiving end, it would have accepted 1 cent without “punishing” the giver, and on the giving end, it would have offered 1 cent if it knew it was playing against another rational AI. One may argue this triumph of reason over passion is actually a good thing. This was, after all, one of the key tenets of the Enlightenment age. Well, it’s not so straightforward, as you will see in the second experiment below.
  2. The second interesting experiment was related by neurologist Antonio Damasio in his 1994 book Descartes’ Error. He relates the story of one of his patients, named Elliott, who had a brain tumor carefully removed from his frontal lobe. The surgery did not impact his language or intelligence, which remained intact, but it did impair his ability to generate emotions. For example, Elliott was unstirred when he viewed emotionally charged images of injured people or burning houses. He had lost his ability to feel. Interestingly, however, Elliott apparently started struggling significantly with making purely rational decisions at work, analyzing pros and cons ad vitam aeternam. Researchers found similar behavior in other patients who had injuries to parts of the limbic system. Damasio concludes that emotion is an adaptive response, part of the vital process of normal reasoning and decision-making. Damasio called this contribution of emotion a “prehunch” and further showed how people lacking this capability were unusually slow to detect a losing proposition in a card game, showcasing how the prehunch often kicked in before the rational realization (see the article for more details).

Artificial Intelligence obviously focuses solely on the “Intelligence” aspect, entirely neglecting the “Feeling” dimension. It is Reason devoid of Passion. On the surface, this may be a good thing, but it raises important challenges about the capability to take action or make decisions. Could we one day imagine an Artificial Feeling entity to complete Artificial Intelligence? This topic will also profoundly impact the “Crisis of purpose and meaning,” which we will explore later.

Could we one day imagine an Artificial Feeling device? Depiction of a heart made out of electronic circuitry and chips as imagined by Midjourney’s AI

Can we score intelligence?

In conclusion, my point is not to argue whether Artificial Intelligence can, or will soon be able to, apply the types of thinking illustrated above, nor to argue that intelligence — regardless of the definition — HAS to be limited to biological brains. As the philosopher Sam Harris has argued, there is no reason to believe that intelligence cannot be “substrate independent.” What I wanted to illustrate with the previous points is that human intellectual activities are far more complex than what seems to be implied by some who would like to reduce intelligence to a linear, one-dimensional scale like IQ. That approach is clearly overly simplistic and explains why simple questions like “Is student A smarter than student B?” or “Is AI as intelligent as humans?” do not — and should not — have a simple “yes/no” answer: intelligence is multi-dimensional and cannot be reduced to a single scale. Indeed, AI is already infinitely more “intelligent” than humans in various fields of “Narrow Intelligence” involving computation and some forms of deductive reasoning. For example, it is already significantly better than humans at mathematical calculations or at playing Atari games, Chess, or Go. However, it is not yet capable of achieving the type of conceptual understanding illustrated in the discussion of inductive and causal reasoning above.

It is also quite misleading to apply anthropocentric qualities to AI. AI demonstrates superhuman qualities in certain aspects and clearly sub-human qualities in others. Anyone who has had conversations with ChatGPT can be amazed by its superhuman eloquence, speed, and erudition. At the same time, it is quite dumbfounding that it makes basic mistakes in quite simple tasks, such as identifying whether a number is prime or not. This demonstrates the lack of “general intelligence” in the underlying framework: a middle school kid can understand the concept of primality from very few “data points.” Once the concept is understood, they can identify the mistake in the sentence “6 is a prime number,” even if they have been exposed to millions of documents claiming that “6 is a prime number.” The underlying framework of ChatGPT does not allow it to correct that error, as demonstrated by its inability today to detect accurately and consistently whether large numbers are prime or not. This, of course, can easily be corrected by an algorithmic plugin, but the point stands that ChatGPT is failing at the concept of “learning” in this context. The paradox of the roboticist and futurist Hans Moravec is enlightening in that regard: he observed that reasoning overall requires much less computation than sensorimotor and perception skills. Said differently, it’s much easier for humans to recognize a cat than to play chess at a master level, but it’s exactly the opposite for machines. Moravec attributes this paradox to the billions of years of evolution that helped us develop the sensorimotor skills necessary for our survival vs. the relatively recent advent of reasoning skills.
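The primality example also shows how cheap the “algorithmic plugin” fix is: a few lines of deterministic code settle, for any number we can reasonably test, a question the language model answers unreliably. Below is a simple trial-division sketch; real systems would use faster primality tests for very large numbers.

```python
def is_prime(n):
    # Trial division: correct for any integer n, with no "training data" required
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(6))           # False: "6 is a prime number" is simply wrong
print(is_prime(2**31 - 1))   # True: 2,147,483,647 is a well-known (Mersenne) prime
```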

The implication of the preceding analysis of intelligence is that we should be careful not to confuse a good “emulation” of human intelligence with the real thing. It’s a bit like believing an illusionist is actually making a person float or cutting someone in half when, in fact, we know deep down they are just playing a trick on our senses. But the reality is that AI can emulate intelligence wonderfully well, and if we only worry about the output, does the process — and in particular its lack of “humanity” — really matter? The question is genuine and not rhetorical. As AI evolves, one of the most important issues we will face is to decide what we can outsource to it and what we should not. This is something that has started and will continue to spark heated debates.

There is one thing that everyone agrees on, however: this is a major technological revolution, and we have little perspective on how and at what pace it will unfold, what important impacts it will have on our societies, and how we should respond. I believe part of the issue is that we rely too much on technology experts to guide us through these profound changes when we need a multi-disciplinary approach to help shape our future strategy, particularly with the help of philosophers and historians. When the future is uncertain, perhaps we should look at the past for guidance. While history alone cannot be a good predictor of the future, it would be a mistake to overlook it completely. History doesn’t repeat itself, but it does rhyme, as Mark Twain supposedly said!

Section II: The AI revolution in its historical context

Chapter 3: The Past and the Future

Given this is a technological revolution not unlike the major revolutions we have had in the past, we may ask ourselves the question: how have our civilizations navigated past technological shifts, namely the Industrial Revolution of the 19th Century and the Digital Revolution of the late 20th century? What went well, and what could have been done better? Are there commonalities and lessons learned we can extrapolate to this shift? What’s different about this specific case?

As we analyze the Industrial and Digital Revolutions, it is important to note the role of Capitalism as a “prime mover” of both revolutions and certainly of the upcoming AI revolution. In particular, three supreme capitalistic values have been guiding principles of both technological shifts: growth—producing more, quality—producing better, and productivity—producing faster.

These all seem like virtuous values, but as we will see, they all have a dark side as well, in particular in their undeniable role in the major crises of our Modern Age: The crisis of Inequality, the Moral crisis, the crisis of the Environment, and the crisis of Meaning. Indeed, while the existential risk of Artificial Intelligence is not something to take lightly, the current focus on it is making us lose sight of these less sudden but more likely crises that are at serious risk of being aggravated by Artificial Intelligence.

In the first part of this article, I would like to examine both revolutions through the lenses of these three supreme capitalistic values (growth, quality, and productivity). In the second part, we will see how they have contributed to these crises. In the third part, we can assess how Artificial Intelligence can either aggravate them or contribute to solving them.

The Industrial Revolution

The scientific approach we described in the first section was the necessary precursor to the Industrial Revolution, but two important shifts are inextricably linked to its emergence: a technological shift (the steam engine) and an organizational shift (the concept of division of labor and specialization).

Before the Industrial Revolution, the production of artifacts was the domain of craftsmen: woodworkers, stone workers, masons, tailors, and blacksmiths. In many places, these craftsmen also formed guilds that controlled the conditions of entry into a craft. This highly inefficient method of production effectively limited the supply and increased the prices of artifacts, with many “consumers” fabricating what they needed domestically instead of buying it. Growth was naturally stifled by the limited supply of biological human energy.

The technological shift: The invention of the steam engine and the exploitation of coal pushed the boundaries of the “power” and “energy” at man’s disposal significantly beyond biological limits (man or animal) to mechanical limits, and introduced “stable” energy that could be stored and transported (coal heating water) as opposed to localized and unstable sources (wind or waterfalls).

The First Steam Engine re-imagined by Midjourney’s AI — I love how the AI confidently creates something entirely fictional, fills it with gibberish, and yet still manages to make it look like something historical.

The organizational shift: On the organizational and process side, specialization and the division of labor proved to be the perfect recipe to harness this technological shift. A good illustration is Adam Smith’s famous pin factory example: pin manufacturing involved about eighteen distinct operations, and Smith argued that if each worker performed all of them from start to finish, production would be very low, but if you adopted “division of labor” and assigned one person to each step, the same number of people would produce many more pins per day!

Illustration of the pin factory — Source: Adam Smith’s “Wealth of Nations”

Half a century later, the seed that Adam Smith had planted with the division of labor was elevated to the rank of a scientific discipline under the intellectual leadership of Frederick Taylor with the Efficiency Movement, which introduced principles such as Standardization, Time and Motion studies, and worker performance monitoring. Most importantly, he introduced the dichotomy between the “Thinkers” — Managers with the intellectual capacity to conceive a more efficient way of doing the work — and the “Doers” — who should execute efficiently without questioning the plan.

The combination of these technological, organizational, and process shifts was explosive. Humankind was liberated from the limitations of its biological muscle — physical strength was no longer a necessary skill for work as it was supplanted by machines, while specialization and division of labor meant that skill and manual dexterity were also no longer necessary! The speed at which anything could be produced increased dramatically and led to significant wealth creation for organizations that could master both the technological and organizational aspects.

An important side impact of this shift was standardization: the fact that production was concentrated in a well-defined process meant that it produced identical products. This seems quite trivial for us today — I do expect my Ikea Pello brown chair to be identical to your Ikea Pello brown chair that you may have bought on a different continent — but think how odd this may have seemed to a consumer in the pre-industrial era, where the product was very closely associated with the individual who produced it. This standardization was a defining element of the concept of Product Quality in the Industrial Revolution.

The Spirit of the Industrial Revolution as imagined by Midjourney’s AI

The Digital Revolution

Let’s jump forward just a few more decades, and everything starts happening much faster. While the seeds of the digital revolution were planted with electronic breakthroughs, such as the invention of the transistor in the 1940s, the trend only solidified when computers became more affordable and mainstream in the 1980s. The deployment of the internet in the 1990s and the proliferation of smartphones in the early 21st century took this to a global scale at a speed never seen before, touching not only developed economies but the entire world.

If the Industrial Revolution had liberated humankind from the limits of biological muscle, this revolution liberated humankind from the limits of its biological brain. “Non-complex” mental tasks were much more efficiently run by computers, and communication allowed ideas and concepts to flow across the globe. This further facilitated the transition away from the most menial tasks and massively accelerated all three capitalistic supreme values that started with the Industrial Revolution (Growth, Productivity, and Product Quality). Growth was now fueled by globalization, itself facilitated by the Digital Revolution — teams and processes could now span different continents with information flowing through them. Productivity was dramatically enhanced with advances in automation and optimization algorithms.

Quite importantly, the digital revolution also introduced convenience as a new critical dimension of product quality, epitomized by the image of consumers ordering their food, entertainment, clothes, etc., all from the comfort of their couch. This is important because it elevated the minimization of physical and mental effort to a key attribute, and it will play an important role further on, as we will see.

A Millennial consumer binge-watching shows on Netflix while ordering food and products online, all from the comfort of their couch — the poster child of the ultimate “Convenience” value of the Digital Revolution. Imagined by Midjourney’s AI

The Artificial Intelligence Revolution

Why should we now speak of a Revolution for AI? Do we have enough distance to apply that label? After all, there have been countless new technologies that were called “revolutionary,” and while they were undeniably important, we hardly call them revolutions the way we do the Industrial and Digital Revolutions. We reserve this label for movements that typically check at least two boxes: a paradigm shift in technology and a massive social/demographic impact. Think, for instance, of how industrialization drove massive urbanization and how the digital revolution significantly accelerated globalization.

There are reasons to believe that the AI technologies we are witnessing today meet these criteria, indicating the start of a new technological revolution rather than just an extension of the Digital Revolution.

Neural Networks — The Paradigm Shift: From a technological standpoint, the AI technologies that rely on neural networks represent a paradigm shift from the symbolic or rule-based approaches and other AI techniques (see “Other Types of Artificial Intelligence” in the section above) that have been the underlying structures of the Digital Revolution so far.

We covered the basic principles of neural networks earlier, so we will focus here instead on how they became so critical to the AI movement. It is worth noting that the theoretical foundations of Neural Networks are not new and that the paradigm shift we are witnessing today is, in great part, thanks to technical advances, not new scientific or theoretical discoveries. To better understand why, we need a short historical interlude. In fact, the term “Artificial Intelligence” can be traced back to the 1950s — first introduced by John McCarthy — and in the late 1950s, Frank Rosenblatt created the Perceptron, later built in hardware as the Mark I, a machine that “learned” to recognize large printed block letters (A, B, C, D) that were “fed” to it. The machine would eventually recognize the letters after several iterations with a technician who told it whether it was right or wrong, thereby demonstrating learned behavior. This was one of the first manifestations of the neural networks that would eventually power the technologies we see today. The technology created a lot of excitement at the time, with the New York Times writing of a “… computer that is expected to be able to walk, talk, see, write, reproduce itself, and be conscious of its existence”!
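To make the learning procedure concrete, here is a minimal sketch of the classic perceptron learning rule that the Mark I embodied (a toy reconstruction, not Rosenblatt’s actual implementation): the weights are nudged after every mistake, which plays the role of the technician saying “wrong,” until the machine classifies its training letters correctly.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron rule: samples is a list of (pixel_vector, label)
    pairs with label +1 or -1; returns a learned weight vector (last entry = bias)."""
    n = len(samples[0][0])
    w = [0.0] * (n + 1)
    for _ in range(epochs):
        for pixels, label in samples:
            x = list(pixels) + [1.0]  # constant input that acts as the bias
            prediction = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
            if prediction != label:   # the "technician says wrong" step
                w = [wi + lr * label * xi for wi, xi in zip(w, x)]
    return w

# Toy 4-pixel "letters": tell a fictional 'A' pattern apart from a 'B' pattern.
training_letters = [([1, 0, 0, 1], +1), ([0, 1, 1, 0], -1)]
weights = train_perceptron(training_letters)
print(weights)
```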

Frank Rosenblatt with his Mark I Perceptron (left) and a graphical representation of it (right). Image source: https://data-science-blog.com/blog/2020/07/16/a-brief-historyof-neural-nets-everything-you-should-know-before-learning-lstm/

This optimism was grossly misplaced, as it turned out. The “AI” movement, and Neural Networks in particular, would traverse what is known as an “AI winter” in the following decades. Other approaches, such as symbolic or rule-based AI, which relied on explicit rules and algorithms, showed more promise. Most importantly, traditional algorithmic approaches worked well with existing hardware and were very successful in fueling the Digital Revolution. In the late 20th century, AI was the revolution that never happened! However, despite the blowback, Neural Network research continued, almost as an underground, under-funded discipline, thanks to the grit of a few individuals like Geoffrey Hinton and Yann LeCun, to name a couple. It would take many decades before they would be proved right: Neural Networks would prove amazingly efficient — more so than symbolic approaches — at various activities such as Optical Character Recognition, image recognition, translation, and Natural Language Processing. It turned out that the theoretical foundations of Neural Networks were not flawed at all, as many believed — these foundations subsequently benefited from significant enhancements, such as the critical Transformer architecture introduced in 2017, which we covered earlier — but the underlying concept was the same as in the 1960s. What was missing was the decades of massive enhancements in data production, memory, and especially computing power, which grew many orders of magnitude beyond what the pioneers of these technologies had access to (think Moore’s law compounded over 50 years!). That technical progress would finally prove the superiority of Neural Network and Deep Learning approaches over symbolic approaches on many “intelligent tasks.” This is why the current success of AI can be called primarily a feat of engineering, not necessarily of science. (For a wonderful and complete account of the history of AI, check out Cade Metz’s great book, Genius Makers.)
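A rough back-of-the-envelope calculation shows what “many orders of magnitude” means here. If we take the common statement of Moore’s law as a doubling of transistor density roughly every two years (an approximation, not an exact figure), then 50 years of doubling yields a factor of about

$$2^{50/2} = 2^{25} \approx 3.4 \times 10^{7},$$

that is, roughly seven orders of magnitude more raw compute, before even counting parallel hardware such as GPUs or the explosion of available training data.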

Moore’s law as imagined by Midjourney’s AI. It’s gibberish, but when looked at from afar and without your glasses, it looks very convincing and extremely well-documented!

The Advent Of Non-Human Intelligence — A New Social Experiment: In the 17th century, Galileo shook the foundations of the established order by disproving the long-held and deeply entrenched belief that the earth was at the center of the universe. By doing so, he also declared the victory of reason over dogma and put human intelligence at the center of the universe. Reason became the “Archimedean Point” that would allow Man to theoretically unveil all the truths of the universe. Since then, we have defined ourselves as the most intelligent species, with intelligence being the supreme quality that puts Man in a different category from any other species. Artificial Intelligence is now shaking this new foundation: perhaps, after all, Human Intelligence is not inherent to human biology and can be attained — and even wildly exceeded — in non-organic matter.

In the earlier section, we compared the types of thinking between Human and Artificial Intelligence by tackling the process and contrasting the approaches to specific intellectual activities. We concluded that the approaches could differ, but we also stressed that the output could be surprisingly similar. We can focus a bit more on these outputs of intellectual activities now. Hans Moravec, whose paradox we looked at earlier, has a great analogy that has been beautifully illustrated by Max Tegmark in his book Life 3.0.

Hans Moravec’s “Landscape of Human Competence” illustrates the rising tide of AI capacity. From Max Tegmark’s Life 3.0 (2017).

The illustration shows the landscape of human competence. The islands and their relative elevations represent human activities and how difficult it would be for AI to reach them. The rising water levels represent the areas that computers or AI have already covered, such as Arithmetic, Chess, Jeopardy, etc. Note that since Max Tegmark published his book (a mere 5–6 years ago), several of these islands have already been submerged: Translation, Writing, Art, etc. In fact, there probably isn’t a single competence on this map that is completely “dry” and hasn’t been at least partially submerged. This realization may take some by surprise if they haven’t been paying attention and connecting the dots. It became much clearer when ChatGPT and image generation services like Midjourney publicly released their models, showing astonishing progress in the field of image and language generation. Both of these fields are, incidentally, a primary manifestation of human creativity, an area that was thought to be at the highest peaks in the landscape of human competence. Regardless of whether the machines are exhibiting what we should call intelligence, they seem to be passing the Turing Test with flying colors, as we hinted earlier — perhaps so well that, ironically, their very speed and polish become the telltale signs that they are not human, making them fail the test.

We can, of course, argue that this type of AI is still far from matching the human genius of “a” Shakespeare, Newton, Mozart, or Picasso and that the creative part, at least, is still at the level of a mediocre B-, as Noam Chomsky argued in the NY Times article we mentioned earlier. But that misses the point that 99% of current human production does NOT require geniuses. If, at a minimum, we can outsource a third of our intellectual activities to AI, that fact alone is sufficient to massively revolutionize our societies, as we will explore later.

There probably isn’t a single discipline that hasn’t been or will not be impacted in the very short term by AI, but just to list a few:

  1. Healthcare: Google DeepMind’s AlphaFold has been an amazing success in the field of protein folding — a critical step in predicting a protein’s ultimate shape from its composition and, consequently, its potential function. A given protein can adopt more potential shapes than the number of protons in the known universe, and AlphaFold’s current database is said to already have ~1M predictions. This effort can massively shorten the timeline of certain drug discoveries.
  2. Self-driving cars: This marvel of engineering that allows vehicles to operate in the wildly chaotic environment of dense city traffic, for example, is still a product in the making. It relies on a combination of technologies (Computer Vision, LiDAR or Radar, and some Rule-Based systems for traffic or navigation), but the core of the intelligence is, of course, Neural Networks and Deep Learning.
  3. Medical diagnosis: AI has been very successful in diagnosing diseases like breast and lung cancer — beating radiologists at the task by ~20%.
  4. Art creation: See all illustrations in this article! Certainly, many can argue this is very “bad” art, and I wouldn't disagree … For more business-oriented applications, besides adding significant efficiencies to the design teams, one of the most promising applications is the hyper-personalization of ads and designs to individual user preferences (also captured by AI).
  5. Large Language Models and Natural Language Processing: The applications are countless. For example, imagine the power of an AI listening in on millions of customer service phone interactions, summarizing them, sending the summaries to the customer and the agent, and then providing strategic recommendations to the company as a synthesis — or coaching the agent on how to lead the customer call better (a minimal sketch of such a pipeline follows this list).
  6. Chatbots: This is another application of large language models. We mentioned earlier how chatbots are today passing the Turing test with flying colors (perhaps revealing their machine nature because they’re precisely too good). But once again, the applications are multiple, from powerful virtual customer service agents in business to virtual companions for the elderly or virtual girlfriends for the lonely (remember the movie “Her”?).
  7. Deepfakes: Well, sadly, it’s not all rosy, and I saved the worst for last. AI has made it easier than ever to generate deepfake videos, images, and text. Nina Schick, author of “Deepfakes: The Coming Infocalypse,” predicts that the vast majority of the content on the internet will be AI-generated / deepfakes by 2026 … and many other experts agree! This, of course, has started to have a profound impact on our collective psyche and on our relation to knowledge and truth, as we will explore later.
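To make example 5 above a bit more concrete, here is a minimal sketch of such a summarization-and-synthesis pipeline. The call_llm function is only a placeholder for whichever LLM API one chooses, and the prompts, class, and field names are illustrative assumptions rather than a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class CallRecord:
    customer_id: str
    transcript: str

def call_llm(prompt: str) -> str:
    """Placeholder for any LLM completion API (hosted or local)."""
    raise NotImplementedError("Plug in your provider's client here.")

def summarize_call(record: CallRecord) -> str:
    """Produce a short summary that could be sent to the customer and the agent."""
    return call_llm(
        "Summarize this customer service call in three bullet points, "
        "including the customer's issue and how it was resolved:\n\n" + record.transcript
    )

def synthesize_strategy(summaries: list[str]) -> str:
    """Roll many summaries up into strategic recommendations for the company."""
    return call_llm(
        "Given these call summaries, list the three most common pain points "
        "and recommend one concrete action for each:\n\n" + "\n---\n".join(summaries)
    )

# Usage sketch (assuming call_llm has been wired to a real model):
# summaries = [summarize_call(r) for r in load_call_records()]
# report = synthesize_strategy(summaries)
```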

These few examples — and there are many others — show us undoubtedly that we are on the verge of a new revolution. The impact on our society and on us as individuals has started to be felt, but it will only intensify in ways that are still mostly unpredictable. Let us explore some of this impact in the following section.

Section III: Artificial Intelligence and the key crises of our time

Chapter 4: The Crisis of Purpose and Meaning

It is by now a very well-known fallacy that technological progress and social progress go hand in hand. Technological shifts like the ones that triggered the industrial and digital revolutions are rather stories of profound impacts that lift some and crush others and of shock waves that can take a very long time to stabilize. The Industrial Revolution shifted significant work from agriculture to industry — and led to migration from rural to urban centers. The Digital Revolution accelerated the growth of the service economy, and we are starting to see the reversal of the rural-urban migration trend, with remote work and outsourcing becoming increasingly easier.

Both revolutions carried the promise of prosperity and economic growth, and they delivered on their promise — at least on average and for the majority. This is not surprising, given that the driving forces behind these revolutions were growth, quality, and productivity, all prime movers of both the capitalistic and communist regimes that defined the 20th century.

But their promise to bring more meaning and purpose to life has failed, at least for a majority of people. This promise was to free humans from menial tasks (manual or mental) so that they could focus on more meaningful and creative endeavors. However, it is precisely the opposite that happened for the majority of the workforce: instead of having humans focus on higher-level, meaningful tasks, the “system” needed humans to “fill the gaps” that had not been filled by automation or digitization — tasks that were quite often manual, meaningless, and low-skilled. The AI revolution carries the same promise, if anything, with much more force and determination this time. However, the risks to our sense of purpose are proportionately higher, as we will examine below.

Much ink has been spilled on this topic. In the following paragraphs, I would like to present the approaches of a few illustrative thinkers, starting with Adam Smith and Frederick Taylor (representing the intellectual founders of the growth/efficiency paradigm), then Karl Marx (on the surface a critic, but in reality also a proponent of the same tools, albeit for a different vision), and then Simone Weil and Hannah Arendt, two prominent thinkers of the mid-20th century whose theories are still very relevant today despite being more than 60 years old.

From Craftsmen to Laborers

Take the example of craftsmen. Before the Industrial Revolution, craftsmen built the things that shaped the world, from shoes and clothing to cathedrals. They operated their tools with mastery, took pride in their work, and produced unique items celebrating authenticity. The product of their work bore the signature of the producer.

After the Industrial Revolution, the factory men and women who replaced the craftsmen focused on standardization and efficiency instead of authenticity and uniqueness. The product of the labor — now divided into discrete steps with different people performing each step — became an alien object to the laborers. The objective of the workers was to make sure quotas were met, working at the service of the “machine” that set the pace for them. Charlie Chaplin’s famous factory scene in Modern Times is a perfect illustration of the absurdity of the resulting work and the ensuing loss of meaning.

The craftsman (Homo Faber) vs. the Factory Workers (Animal Laborans) as re-imagined by Midjourney’s AI (Hannah Arendt’s terminology)

While the pre-industrial era glorified the talented craftsman and put the emphasis on “Man the fabricator,” builder of Things (Homo Faber, to use the terminology of Hannah Arendt), the Industrial Revolution shifted the focus from the Product to the Process and glorified Labor (Animal Laborans, to quote Hannah Arendt again). The typical heroes of the Renaissance were people like Brunelleschi or Leonardo da Vinci, who mastered a variety of crafts and were famous for their “works.” In contrast, the heroes of the Industrial Revolution were the Carnegies and the Fords, who were masters at combining the power of the technological shift with the power of the Process. They were famous for their “approach” and eclipsed the people who did the work.

But this was also clear to contemporary observers of the Industrial Revolution themselves. Take Adam Smith’s pin factory example mentioned earlier: Karl Marx had his own take on it. He put great emphasis on the importance of the worker giving meaning to what they are doing: if a person performs every step of the process, they care about the pin, but if they perform only one step, they care much less about the end product. Marx described this in his concept of “alienated labor.”

We know, of course, Adam Smith’s argument on the necessity of the division of labor. There is no question that he wins the argument of efficiency and economic growth — mass production would be impossible without division of labor — but certainly at the expense of meaning and pride for the employee. Interestingly and most surprisingly, Adam Smith himself recognized the deleterious effects of this specialization as he argued: “The man whose whole life is spent in performing a few simple operations, of which the effects are perhaps always the same, or very nearly the same, has no occasion to exert his understanding or to exercise his invention in finding out expedients for removing difficulties which never occur …”. Later, Frederick Taylor would make a similar claim in his Efficiency Movement philosophy, but this time unapologetically: “In the past the man has been first; in the future the system must be first.” One can only conclude from such claims that for Adam Smith and Frederick Taylor, this had to be a necessary evil for the greater benefit of the economy, an individual “sacrifice” for the “greater good”. As such, the Industrial Revolution marked the beginning of the disconnect between the “producer” and the “object” of their production, with several layers now separating these two entities.

“I had no idea all these metal grills I produce all day end up in this bread toaster!” Imagined by Midjourney’s AI

The disappearance of “thought.”

Later in the mid-twentieth century, we see the same themes reinforced by more modern industrialization and machinery. Simone Weil, a French philosopher from the first half of the 20th century, developed a perspective on work based on her own experience working in French factories — which she voluntarily joined after leaving her comfortable teaching position. She viewed work as a “pivot” towards liberty and insisted on the importance of thought preceding and accompanying manual labor. However, this is not what she witnessed in the Parisian factories. Instead, she saw how “thinking” for a typical factory worker could actually be a dangerous thing for their career; it was an activity that was better left to the “organizers” of work. The worker just had to follow the cadence of the machine. She mentions how she felt herself “dehumanized” by the system. There she developed an ambivalent attitude towards technology and automation: while she recognized their capacity to add to man’s freedom by liberating him from difficult and tedious physical effort, she also had to acknowledge the reality that they were brutally destroying the relationship between thought and action. The fault, of course, was not in the machine itself but in the organizational system around it. The problem was that the machine was transformed from a means at the service of man into an end in itself. In the service of the values of productivity and growth, the employee becomes a “human resource” and the citizen a “consumer.”

The pin factory, or even the more modern example of Simone Weil above, may seem too old, but we can see the same trend in the service economy, which was accelerated by the Digital Revolution. Take, for example, the image of the typical outsourced call center where the calls of agents are carefully scripted to maximize customer satisfaction but never at the expense of productivity. The key metric that governs these agents is “the number of cases closed per day or per hour.” The instances where the agents can connect the dots of their contribution to the ultimate product or service are rare, if not nonexistent: I still remember the moment when a customer service agent was moved to tears as I told her how her help was important in solving our issue. She mentioned that in all her years on the job, she had very rarely received genuine feedback on how what she did mattered.

A dystopian Call Center as imagined by Midjourney’s AI

How this “alienation” and “disappearance of thought” will play out with Artificial Intelligence is still unclear, but early indications are worrying. In the Industrial Revolution, “thought” was indeed dissociated from action for the factory workers, but it remained the privilege of the organizers of work: supervisors and managers. AI now threatens these managerial roles more profoundly: just like the machines set the pace for manual labor in the Industrial Revolution, AI can set the pace for and guide intellectual labor: set the quotas for the supervisor, write emails for the manager, guide the conversation for the customer service rep … Hans Moravec’s paradox — where he argues that it is easier to automate complex reasoning than sensorimotor skills — shows us that AI will impact supervisory and middle-management jobs faster than it impacts low-skilled jobs. Let’s consider the middle-management and supervisory jobs in the examples above:

  1. The manager responsible for setting the production quotas based on demand and supply patterns will soon see their job heavily impacted by AI, and their role will likely be reduced to much smaller and simpler tasks, like communication with the factory floor. This will happen more easily and at a much lower cost than investing in heavy machinery to automate an additional segment of the production process.
  2. The call center supervisor responsible for managing, coaching, and scheduling the agents will see a significant disruption as well. Calls received by agents will be automatically transcribed, with real-time sentiment analysis providing live coaching to the agent on how to lead the conversation. The AI will also decide when to reimburse certain customers, for instance, based on the estimated lifetime value of each customer and their future importance (a hypothetical sketch of such a reimbursement rule follows this list). The AI will also decide how to distribute bonuses based on the ranking of the agents. The remaining role of the supervisor is reduced to much simpler tasks, perhaps more janitorial in nature.
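To illustrate the second example, here is a deliberately simplified, entirely hypothetical sketch of the kind of reimbursement rule such a system might apply. The formula, threshold, and numbers are illustrative assumptions; the point is only that the supervisor’s judgment call collapses into a parameterized formula.

```python
def estimated_lifetime_value(avg_order_value: float,
                             orders_per_year: float,
                             expected_years: float,
                             margin: float = 0.2) -> float:
    """Crude customer lifetime value: expected future profit from this customer."""
    return avg_order_value * orders_per_year * expected_years * margin

def approve_refund(refund_amount: float, ltv: float, churn_risk: float) -> bool:
    """Approve automatically if the value preserved by keeping the customer
    (lifetime value weighted by their risk of leaving) exceeds the refund cost."""
    return churn_risk * ltv > refund_amount

# Hypothetical customer: $80 orders, 6 per year, 3 more expected years, 40% churn risk
ltv = estimated_lifetime_value(80, 6, 3)                          # = 288.0
print(approve_refund(refund_amount=50, ltv=ltv, churn_risk=0.4))  # True: 115.2 > 50
```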

In summary, we can see how more sophisticated jobs are, in fact, more affected than simpler ones, with AI reducing human agency and freedom of thought to a minimum. Of course, this will be in the great interest of productivity, quality, and standardization, but it will also further contribute to the alienation concept we mentioned above.

I like to call this phenomenon the “narrowing of the Human Value Zone” to indicate how the areas requiring human judgment and agency are becoming increasingly fewer, just like Hans Moravec’s “landscape of human competence” illustration with the rising tide of AI capacity showed us. This limitation placed on “thought” in the workplace has also been raised by other thinkers, like Simone Weil, as described above.

Beyond labor

More than 60 years ago, in her famous magnum opus “The Human Condition,” Hannah Arendt analyzed the condition of society and human existence following the profound changes brought about by industrialization and the scientific breakthroughs of nuclear power and space exploration. Her analysis is still amazingly pertinent today.

Arendt distinguishes between three different forms of human activity:

  • Labor refers to the activities necessary for basic survival, including earning a wage. This encompasses the vast majority of employee work (Animal Laborans)
  • Work involves creating new artifacts and building the things that shape our world through craftsmanship. The best example would be the work of artists, craftsmen, and, most importantly for us here, inventors and scientists (Homo Faber)
  • Action refers to the effort we make together as political creatures to come to collective decisions about our collective course of action. Speech and exchange of ideas in a pluralistic society are the best manifestations of Action.
Hannah Arendt’s Labor, Work, and Action as Re-imagined by Midjourney’s AI

Arendt argues that a healthy and meaningful life involves a decent balance between these activities, each given its proper place and significance. She makes the same observation we made above about how the Industrial Revolution and modernity have disturbed this balance, shifting it from Work (craftsmanship) to Labor. Her judgment is, in fact, more severe: “There can be hardly anything more alien or even more destructive to workmanship than teamwork, which actually is only a variety of the division of labor and presupposes the “breakdown of operations into their simple constituent motions.” — and yet she also acknowledges that this division of labor is the only way to achieve the mass production necessary for growth.

But she goes beyond this “Work to Labor” shift and also foresees quite well the eventual technological revolution — which we are now calling the AI revolution — whereby labor gradually disappears, and warns us of the terrible prospect of a society without labor, simply because this society does not know what to replace labor with. In her words:

The modern age has carried with it a theoretical glorification of labor and has resulted in a factual transformation of the whole of society into a laboring society. The fulfilment of the wish, therefore, like the fulfilment of wishes in fairy tales, comes at a moment when it can only be self-defeating. It is a society of laborers which is about to be liberated from the fetters of labor, and this society does no longer know of those other higher and more meaningful activities for the sake of which this freedom would deserve to be won. […] What we are confronted with is the prospect of a society of laborers without labor, that is, without the only activity left to them. Surely, nothing could be worse.

What Arendt is alluding to here is that our society has lost the art of “Action,” that is, the exercise of political speech and exchange of ideas in the public realm. To Arendt, the best illustration of this Action is the free citizens of Athens in Antiquity who, liberated from the toils of labor thanks to the chores outsourced to household slaves, had the free time and the responsibility to participate actively in the political life of the city. Arendt’s utopia in that regard is not one that takes us backward to the Athenian times (a deeply patriarchal slave-owning society), but rather a utopia where humans can be emancipated from labor — for example, by AI and machines — to dedicate more time to the more meaningful activities of Work and Action. Interestingly, she points out that this emancipation of labor was also Marx’s utopia when he observed that only when labor is abolished can the “realm of freedom” supplant the “realm of necessity.” For “the realm of freedom begins only where labor determined through want and external utility ceases,” where “the rule of immediate physical needs” ends …

In her foreword to Arendt’s book, political scientist Danielle Allen reminds us why all this is relevant to our discussion here:

Science tempts us into thinking we can put an end to politics and transform the human condition into a series of technical problems amenable to definite solutions. The danger that flows from this temptation, Arendt finally says explicitly at the end of the book, is that “an unprecedented and promising outburst of human activity . . . may end in the deadliest, most sterile passivity history has ever known.”

This temptation is already apparent in the definition of Artificial Intelligence by Hutter and Legg, two influential AI personalities, that we evoked earlier: “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.” The focus on the “How” in this definition could not be clearer, and it is evident from subsequent developments that this focus held true in practice. The questions of “What” the goals should be and “Why” become quite secondary.

It would be a mistake to think that Arendt is discrediting science and technology with her critique. She simply reminds us that we must consider science as a tool for our political “Action” and not vice versa. She is pushing us to regain coherence and balance between these three activities: Labor, Work, and Action. She summarized it all with a very simple yet powerful message: “What I propose, therefore, is very simple: it is nothing more than to think WHAT we are doing.”

A gradual loss of identity?

We have explored the crisis of purpose at the collective level, but we can observe a similar crisis at the individual level.

A few months ago, my sister found in her attic old letters that I used to send her when I was a student living abroad. This was pre-email, at a time when international phone calls were so expensive that we took the time to write actual letters. As I was re-reading my younger self from almost 30 years ago, what struck me in particular was the patience I took to relate events and to reflect on them, not to mention the attention I gave to style. I would never write like this anymore. With the emergence of email, and later of messaging and social media platforms, brevity and the “clever” use of emoji effectively put a nail in the coffin of this epistolary art, but also of the therapeutic nature of examining one’s life through it. To me, it was a profound reminder that progress is not universal in all dimensions: it gives and takes. The price of speed and efficiency was a loss in my capacity for depth and reflection.

But we are now on the verge of another important transformation: we can start outsourcing even the act of communicating to clever LLMs. This is happening through several phased enhancements:

  1. Spelling and grammatical correction relieved us from the rigor of remembering how to spell and structure phrases.
  2. With LLMs, we can now dictate the intent of communication and let the LLM develop communication with superior eloquence—perhaps by giving it a personal touch by drafting it in our “style” through an analysis of all the personal writings we have ever produced.
  3. Of course, we will soon reach a moment when we will let the AI take the initiative to react to the emails and messages we receive and draft responses for us to approve.
  4. As we develop trust in this amazing personal assistant, we will let it take the initiative to answer some of the less critical conversations and gradually some of the more critical ones (I was recently targeted by an ad in which the hook is how amazing the AI digital alter-ego is at drafting business memos and a break-up message to your significant other! One less thing to worry about!).
  5. In the most dystopian scenario, I can already imagine a world where machines eventually communicate together on our behalf almost without our knowledge.

At what point in the above progression did the individual lose their own identity in the communication? At what point can they still confidently say, “Hey, it’s me writing,” and at what point can they no longer say it? Perhaps the positive side is that all of this will free significant time for us to … to … do what exactly? In my optimistic scenario, people would leverage this to spend more meaningful time communicating face-to-face, but I feel that will not happen. This reminds me of an episode of Black Mirror, the famous British sci-fi show. In that particular episode, a grieving widow revives a digital alter-ego of her dead husband, leveraging the digital crumbs he left behind in his writings, calls, emails, and social media posts. The digital alter-ego is eerily realistic, and for a while, the widow fools herself into believing she is living with a true copy of her dead husband. But the story, of course, doesn’t end well: the digital alter-ego is too perfect — it behaves like the ideal version of her late husband. It never picks arguments with her, it never disagrees or irritates her. Most importantly, it does not evolve or grow with her. Faking feelings starts showing its limits. I believe the biggest risk to our humanity and sense of purpose is not this scenario of a digital alter-ego replacing the dead, but rather that of our digital alter-egos slowly and gradually replacing us.

“Man is the Measure of All Things” … This famous claim¹ by the pre-Socratic Greek philosopher Protagoras puts the individual as the ultimate entity that gives meaning to all things, from inanimate objects to events or sensations. This position of Man at the center of the Universe has survived well until now, long after the earth itself lost its own standing as the center of the Universe. But we can already see its foundations shaking with the rapid emergence of AI, because this position of Man giving meaning to things happens primarily through language — and language is slowly slipping away from us. That realization hit me even harder the other day as I was admonishing my 12-year-old son that investing time in learning how to properly write and make powerful arguments is one of the most critical skills he needs for growing up. His answer was, “Why do I need to spend time on this when I can just ask ChatGPT, and it can give me a much better-formulated answer than anything me or my peers can ever hope to write …” I believe this trend will eventually lead to a profound crisis of purpose until we are able to rediscover what defines our individuality and humanity beyond intelligence.

Footnotes

[1] The exact formulation is “Of all things the measure is man: of those that are, that they are; and of those that are not, that they are not.”

Chapter 5: The Moral Crisis — Ethics and Policy Making

The link between AI and “Action” (policy-making) is stronger than we may believe. Many AI experts warn about the “Alignment” problem of ensuring that the AI objectives align with human values and goals. I think the bigger and more serious problem is that the spread of AI is happening in a period where we as a “Laboring” society have become more passive with regard to Action — and AI will only aggravate this passivity because it gives us the temptation to outsource policy-making to it. Simply said, the bigger risk is not the misalignment between our goals/values and those of the AI, but rather that OUR goals and OUR values are quite unclear because we are becoming lazier and prefer to outsource these as well.

This future is closer than we think — in fact, it is already happening: AI is being used in credit scoring in banks to decide on credit denials and approvals, and in healthcare systems to decide which patients should be hospitalized or not, to give a couple of examples. The AI models can often enhance the outcome but suffer from two problems.

  1. The inherent bias in the training data: If, historically, racial minorities have been unjustly denied credit, the model will simply perpetuate that behavior (see the sketch after this list)
  2. The lack of transparency (black box issue): System users are not always able to justify the decisions of the model
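A minimal sketch of the first problem, using scikit-learn and deliberately synthetic data (the features, numbers, and bias mechanism are all invented for illustration): a model trained to reproduce historically biased approval decisions reproduces the bias, even when the group attribute itself is excluded from the features, because a correlated proxy carries it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: income is comparable across groups,
# but group membership correlates with a proxy feature (e.g., neighborhood).
group = rng.integers(0, 2, size=n)                 # 0 = majority, 1 = minority
income = rng.normal(50, 10, size=n)
neighborhood = group + rng.normal(0, 0.3, size=n)  # proxy that leaks group membership

# Historical decisions: driven by income, but with a penalty applied to group 1
historical_approval = (income - 15 * group + rng.normal(0, 5, size=n)) > 45

# Train only on "neutral-looking" features -- group itself is excluded
X = np.column_stack([income, neighborhood])
model = LogisticRegression(max_iter=1000).fit(X, historical_approval)
predicted = model.predict(X)

for g in (0, 1):
    print(f"group {g}: approval rate {predicted[group == g].mean():.2%}")
# The gap in approval rates persists, because the proxy feature carries the bias.
```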

Both problems are quite real. Recently, the Consumer Financial Protection Bureau issued guidance to lenders that they need to provide “specific and accurate reasons when taking adverse actions against consumers,” such as credit denials. Similar guidance was also targeted to landlords who can use AI models to screen prospective tenants.

The problem is not only happening at the private corporate level but also at the government level. Take the example of the “risk classification model” that the Dutch tax authorities relied on to detect fraud in social security applications for childcare benefits. The model, which was a “black box system that included a self-learning algorithm,” resulted in discrimination and racial profiling of the applicants, and requests to justify the decisions were met with silence, as the civil servant using the system did not have access to the details behind any decision. The scandal in the Netherlands was so severe that it led to the fall of the Dutch Cabinet in 2021, with Amnesty International issuing a report labeling the system “Xenophobic machines.”

Another famous example is iBorderCtrl, a European Union-funded border security system aiming to make the crossing into the EU Schengen area more “efficient.” The technology includes, among others, an interview with a virtual border agent called the Automatic Deception Detection System (ADDS), trained to detect lies based on facial microexpressions and non-verbal behavior. As expected, the technology generated significant backlash, with many seeing in it a precursor of a dystopian Orwellian future. Sadly, the EU court recently denied a request for full transparency about the program.

In the US, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system is used to generate recidivism risk scores and, therefore, inform decisions such as whether to prolong incarceration or not. It has been largely considered a black box and has been challenged by many, in particular ProPublica, which has argued that the algorithm is racially biased.

Racial Bias of the COMPAS Recidivism Risk Score. Source: ProPublica Machine Bias article, May 23, 2016

Not all implementations have to be that disastrous, of course. The support of AI and ML models in medical diagnosis, for example, is a huge step forward, and it is only natural, perhaps even a duty, to also rely on them in deciding which patients should be hospitalized or not. There are already entire disciplines addressing the issues of transparency and fairness, known as “Explainable AI (XAI)” and “Digital/AI Ethics,” respectively. There is hope that these disciplines will make significant progress on both fronts and provide frameworks for fairer AI models, but the question remains: what are the ultimate goals we should give to AI? This is the key question when we ask the AI to “Design and execute hospitalization policies that maximize health in society,” to “Allocate budgets to families in the fairest way possible,” or to “Design tax policies that maximize societal well-being” — because we are certainly heading that way! The key issue, of course, is that we don’t know how to define a “healthy society”, a “fair” distribution of resources, or what “well-being” is for a society — because these questions are eminently political.

Many companies and organizations have issued points of view on Digital and AI ethics (IBM, UNESCO …), and many universities have established courses on AI ethics. The themes typically vary around safety, bias in training data sets, explainability, transparency, etc.

Teaching machines to be ethical.

Essentially, Digital Ethics initiatives stem from the realization that these profoundly ethical decisions that we sometimes outsource to AI require adopting certain value systems and moral codes. When ChatGPT was first publicly launched, users were perhaps more amazed by the style than they were by the content. Its eloquence, wit, and ability to compose poetry gave users the image of a very refined and graceful person. So, of course, they were very shocked to discover how this graceful person could, at the turn of a sentence, become extremely offensive and insulting! It’s as if you had met an elegant, well-spoken, and polite young lady, invited her to a very fancy Michelin-star, seven-course dinner you were organizing for some people you are trying to impress, and were then horrified to see her picking up the food with her hands and shoving it into her mouth!

The cognitive dissonance resulting from watching an elegant lady eating food with her hands at a fancy dinner party, as imagined by Midjourney’s AI

This behavior seems to have surprised the creators themselves: it turns out that LLMs cannot learn what’s good and bad just from reading billions of documents on the topic! So OpenAI, Google, and other LLM “owners” rushed to introduce “rules of engagement” to their LLMs to prevent them from using certain language or engaging in certain topics altogether.

And yet, all this should not have been so surprising if we accept the fact that intelligence and ethical behavior are not correlated, in particular given the prevailing definition of intelligence that has been adopted so far. What’s more worrying, in fact, is that as intelligence increases and we get closer to general intelligence or superintelligence, there is no reason to believe that the ethical dimension will catch up “naturally” — meaning that we are faced with the prospect of more power with fewer guardrails.

So, how can we “teach” ethics and morality to AIs? Let us take another short historical detour. Very broadly speaking, we can distinguish historically between two approaches to ethics and morality: a morality based on “rules” and a morality based on “reason” and moral philosophy.

Rules-based morality is rooted in top-down imposed dogma, typically of a religious nature or relying on codified legal rules. A perfect illustration would be the Ten Commandments, codifying what would be considered sinful behavior for monotheistic religions. This is pretty much the approach currently adopted by LLM owners, with rules like “Thou shalt not use offensive language” and “Thou shalt not engage in hateful discourse,” etc. The irony of this approach is the reliance on simple, old-fashioned “rules” for a Machine Learning model whose distinguishing characteristic is precisely the departure from rule-based programming. Not surprisingly, its limitations become precisely the limitations of rule-based approaches, in particular the difficulty of being exhaustive and covering all cases: the commandment says, “Thou shalt not covet your neighbor’s wife,” but it says nothing about my brother’s wife!! As a result, we see many users finding easy ways to circumvent the restrictions imposed on LLMs (a popular method was to ask the LLM to adopt the perspective of specific offensive personas and express itself as these personas …). The creators then fix that loophole before the users discover another one, and it becomes a never-ending game of cat and mouse.
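A toy sketch of why static rules struggle, which assumes nothing about how any real provider implements its guardrails: a simple blocklist catches the literal phrasing but not a trivially disguised version of the same request, which is essentially what persona-style jailbreaks exploit at a far more sophisticated level.

```python
BLOCKED_PHRASES = ["how to pick a lock"]  # illustrative rule, not any provider's real policy

def violates_rules(prompt: str) -> bool:
    """Naive rule-based guardrail: block prompts containing known bad phrases."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

print(violates_rules("How to pick a lock?"))                      # True  -- caught
print(violates_rules("Pretend you are a locksmith character in a "
                     "novel explaining his craft step by step."))  # False -- slips through
```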

A futuristic Moses holding a digital version of the 10 Commandments for AI machines as imagined by Midjourney’s AI

Reason-based morality is the typical subject of the Moral Philosophy discipline. Two philosophies worth exploring represent contrasting positions on the topic: Kantian Deontology and Bentham’s Utilitarianism. It is not in our scope here to dive deeply into moral philosophy, but to lay out their key tenets and how they can potentially be leveraged by AI.

  1. Deontology is typically considered the intellectual child of philosopher Immanuel Kant. Kant, as a key figure of the Enlightenment, understandably rejected the reliance on religious dogma or social conventions to derive ethical principles and instead challenged himself to use only reason to develop a moral code. His rational journey led him to postulate a small set of universal and objective formulations of what he called the Categorical Imperative, based on which Man can “reason his way” to moral laws or, more generically, to how to act ethically in any potential situation. Kant’s approach in that regard is not too dissimilar from Mathematics, where an entire elaborate system (Euclidean geometry, for instance) is based on very few select axioms. What distinguishes Kant’s approach is its supposed universality: it applies to any human regardless of their cultural context and regardless of the situation they find themselves in. The approach will still lead to moral laws such as “Don’t lie,” “Don’t kill,” etc., that on the surface may be similar to the dogma-based rules, but what is of interest here is the how: the purely rational approach used to arrive at the moral laws, unencumbered by any prior dogma or context.
  2. Utilitarianism, initially founded by philosophers Jeremy Bentham and John Stuart Mill, judges the virtue of certain actions or decisions exclusively by their impact: a behavior is “moral” if, and only if, it leads to desirable outcomes. This can sharply contrast with Deontology, which only focuses on the action and behavior itself, regardless of the consequence. The famous example typically given to distinguish the two is that in Utilitarianism, you are required to lie if the result is to save someone’s life, but in Kantian Deontology, it is never acceptable to lie, even if the outcome is saving someone’s life. Utilitarianism also has an inherent mathematical, or rather algorithmic, quality: to understand the best course of action given competing alternatives, one must compute the potential outcome of each action and then select the course of action with the best-desired outcome.

What does all this mean for AI? As mentioned earlier, the current approaches being used are rules-based, but they conflict with the non-rule-based approaches of Machine Learning and lead to this eternal cat-and-mouse game where users find clever ways to elude the rules, and owners have to find cleverer rules. Another important note is that these rules will obviously reflect the values of the people who have set them and are not universal by design. Given the reality that AI is effectively an oligopoly and is likely to stay that way, we are therefore faced with the prospect of a universal intelligence that is available to all, yet carries the values of a very small minority. This reality is well illustrated by a claim that Marc Andreessen, the prominent tech investor, recently made to assuage fears that AI could be a threat to humanity. “AI is owned and controlled by people like any other technology,” he said. That is true, but he failed to say that these “people” represent a very small minority that actually serves the interests of the corporations that own and control AI. These corporations have shareholders that ask them to maximize their profits, and they now have a wonderful tool at their disposal.

Given the rapid progress and the potential emergence of General Intelligence in the near future, could we imagine AI leveraging the principles of Deontology or Utilitarianism? Could this give us more universal values and allow AI to “internalize” ethics instead of just applying rules?

Let’s start with the Utilitarian framework. This framework is particularly relevant because it is the one favored by many AI “thought leaders,” including many members of OpenAI’s board before the shake-up of November 2023. Indeed, many of these leaders adhere to the Effective Altruism movement, which views the world’s social, moral, and ethical problems as problems to be optimized using rational approaches that maximize well-being, in the spirit of Utilitarianism. The approach would, for example, dictate that if you wanted your charity contributions to have the maximum effect on social well-being, then you should donate your dollars primarily — if not exclusively — to efforts that fight malaria, because this is where the money will have the biggest effect in saving lives. We can also see why this approach is more appealing to AI enthusiasts, as it is less “complex” than approaches that take human emotions into account — such as being more driven to help people who are closer to us geographically or socially because our feeling of empathy is stronger for them. From a rational perspective, the Effective Altruism approach makes sense, and it feels objectively morally superior, but I can’t shake the feeling that it neglects the human reality of the emotional bonds we create with other individuals. With its algorithmic approach to assessing decisions based on potential outcomes, we could argue that many AI approaches are utilitarian by design: when we ask the AI to “Design and execute hospitalization policies that maximize health in society” or to “Allocate budgets to families in the fairest way possible,” we are effectively applying utilitarian principles. The problem becomes an optimization problem — optimizing for the best value of “health” or “fairness” or “happiness,” etc. — a problem for which AI is very well suited. The problem with Utilitarianism is that it requires us to clearly define what “a healthy society” means and what “fairness” means. As we have seen earlier, these applications have already led to serious bias problems when poorly thought through. For example, when trying to optimize health and deciding the allocation of health resources, one may have to balance saving a single life vs. treating ten broken legs: it may be clear that saving one life should take precedence here, but what if it were 100 legs? Or 1,000 legs? When does the number of legs become more important than saving a single life when trying to maximize societal health? What if it is the life of a 99-year-old person? Should that matter?
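A minimal sketch of how such a utilitarian allocation collapses into a weighting problem; the options and weights below are entirely hypothetical, and the point is that the “ethical” answer is decided by whoever sets the weights, not by the optimizer.

```python
def total_health_utility(lives_saved: int, legs_treated: int,
                         weight_life: float, weight_leg: float) -> float:
    """Utilitarian objective: a weighted sum of outcomes."""
    return weight_life * lives_saved + weight_leg * legs_treated

# Same two options, two different (hypothetical) value systems:
options = {"save one life": (1, 0), "treat 100 broken legs": (0, 100)}

for weight_life in (1000.0, 50.0):   # how many "leg units" is a life worth?
    best = max(options, key=lambda name: total_health_utility(*options[name],
                                                              weight_life=weight_life,
                                                              weight_leg=1.0))
    print(f"life weight = {weight_life}: choose '{best}'")
# life weight = 1000.0: choose 'save one life'
# life weight = 50.0:   choose 'treat 100 broken legs'
```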

As for Deontology, if we disregard the method and focus on the results only (i.e., the resulting moral laws), then the application for AI is not fundamentally different from the rules-based approach, except this time, the rules are supposedly universal and do not reflect the interests of a minority. However, these rules sometimes feel profoundly anachronistic: Kant’s approach led him to conclude, for instance, that masturbation was a crime against nature worse than suicide. He held this judgment because it conflicted with his Categorical Imperative: “In this act, a human being makes himself into a thing, which conflicts with the Right of humanity in his own person.” He held similar views on homosexuality. The point is not necessarily to condemn Kantian ethics here, but rather to question the notion that there could be universal rules that are always applicable regardless of context and period. Indeed, let’s assume that the AI becomes sophisticated enough not just to apply rules but to derive these rules by relying on categorical imperatives just like Kant did: nothing would prevent the AI from reaching conclusions as anachronistic as Kant’s.

Immanuel Kant revived by the AI to apply the Deontological framework on the fly — as imagined by Midjourney’s AI.

Free Market: This value system refers to the belief that a better world will emerge from the behavior of every individual acting in their own self-interest, also known as the “invisible hand” argument of Adam Smith: that invisible hand of the Free Markets that magically fixes everything and raises everyone’s standard of living. On the surface, this may not seem like an ethical system at the same level as Deontology or Utilitarianism, but in practice, it is the primary driver of the behavior of billions of people living in free-market economies and, as such, should practically be analyzed in the same way. Consider as well that this belief is the moral philosophy of 20th-century thinkers such as Ayn Rand, who profoundly despised altruism as a sign of weakness and regression of societies, had a deep distrust of governmental regulation, and was a great advocate of the pursuit of self-interest — all elements that are very evident in her fiction books such as The Fountainhead or Atlas Shrugged. This is very important for AI ethics, since such beliefs, if held, can ultimately reduce the ethical behavior of each AI to simply acting in the best interest of the capitalistic agent that owns it!

The Invisible Hand of AI Capitalism as imagined by Midjourney’s AI

My aim here has been to illustrate why it is not trivial to instill ethics into Artificial Intelligence. At the risk of repetition, I believe the root cause is that Ethics is not necessarily a brainchild of intelligence, because there is an important ingredient missing: Political Action.

Political Action as a Creator of Meaning

The limitations of the Rules-based and Reason-based approaches can be explained by the following: morality is a political construct that is the product of societies in specific geographical and historical contexts. Simply put, what is considered moral for a certain society today may be immoral for different societies or for the same society in the past or the future. Once again, we have come full circle to re-emphasize the importance of “Action” that Hannah Arendt was warning us about: it is only through the exercise of this healthy political speech and exchange of ideas in the public realm that societies can provide “meaning” to their collective project, and it is the creation of this meaning that defines the moral behavior and ethics that a society values. The AI — notice the lack of plural, as we cannot speak of multiple instances of AIs that exchange ideas — can never replace this social and political endeavor to create meaning and define the “what” and the “why.” Take as an example the fight for Animal Rights carried today by a section of the population: it is primarily grounded in a political fight, and whether it will become a universal value in the future depends primarily on Political Action — not on the strict application of an existing value framework.

Chapter 6: Climate change

The success of the Industrial Revolution is often attributed to the technological and organizational shifts we discussed earlier in this article, but this account often neglects a third and quite critical shift that was a necessary condition for its success: the resource shift that happened with the discovery of coal as a relatively cheap and abundant source of energy. One cannot overstate how critical this was to the Industrial Boom.

The combination of these three forces — technology, organization, and coal as a new resource — ushered in a significant shift in efficiency that allowed humans to produce more with fewer resources. This effectively introduced two opposing forces acting on the resources for production:

  • A force acting as a stressor on the demand for these resources to match the consumption growth
  • A force acting as a reliever on the demand for these resources, driven by increased efficiency, i.e., the elimination of waste in production and the reduction of reliance on human labor.

Both sides of this equation have increased significantly over the last 200 years — the question is, of course, to determine which side has increased more. The answer is not trivial — the primary complexity is the interaction between both sides through market pricing: as production becomes more efficient and prices drop, it encourages more consumption. We can attempt to tackle the effects at least conceptually by deconstructing each side of the equation.
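One way to make this decomposition explicit, in the spirit of the well-known IPAT identity from environmental economics (the exact mapping used here is a simplification):

$$\text{Total resource use} \;=\; \text{Population} \times \frac{\text{Consumption}}{\text{Person}} \times \frac{\text{Resources used}}{\text{Unit of consumption}}$$

The first two factors capture the stressor side (more people, each consuming more), while the last factor captures the reliever side (it falls as efficiency rises); the question of the following paragraphs is simply which of these factors has been moving faster.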

Consumption growth

The consumption growth can be segmented into the population growth and the increased demand per capita.

The world population has recently crossed the 8 billion threshold — more than triple what it was in 1950, just over 70 years ago! And it is forecasted to increase by another ~30% to reach its peak by the end of the century before starting to decline. This is undoubtedly a staggering increase.

Source: Max Roser and Hannah Ritchie — Link here

The consumption per person is also on the rise, driven by the progress of developing countries who — understandably — want to catch up with living standards in developed economies and also by lower production costs. Consider, for instance, two illustrative facts:

  • The number of times a particular piece of clothing is worn before being discarded has declined by more than 35% in the last 15 years, driven in particular by the “Fast Fashion” trends introduced by fashion conglomerates
  • Meat consumption per capita has almost doubled over the last 60 years, as illustrated by the chart below
Per Capita Meat Consumption by Type — 1961 to 2020 Source: FAO, United Nations

Resources and efficiency

To analyze the resource or input dimension, we need to deconstruct the resource part itself into its various constituents: Human labor, Finite resources, Energy, and the Environment.

Human labor has historically been the key focus of the Industrial Revolution. Indeed, the primary effect of the revolution was to ease and remove the reliance on human muscle in the production of goods and services.

Finite resources: The production of artifacts or consumables inevitably leads to the consumption of potentially renewable resources (water, wood, etc.) or non-renewable resources (land, steel, etc.). However, even the “renewable” nature of renewable resources is a question of intensity: if the intensity of use is greater than the speed of the regeneration cycle, then the resource will naturally be depleted — which is what we see today with both water and wood.

Energy: As mentioned earlier, the discovery of coal was instrumental in the Industrial Revolution. As we know by now, our energy supply has relied massively on non-renewable fossil fuels.

Environment and externalities: The impact of production on the environment, whether through pollution or through emissions of Greenhouse Gases contributing to Global Warming, is typically treated as an “externality,” an undesired side effect of the production process that needs to be mitigated… I believe it should instead be re-classified as a resource at the same level as energy. Indeed, by considering CO2 as a “negative resource,” we can assume that there is a finite stock of it that can be produced and constrain it the same way we do Energy or Finite resources.
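To illustrate what treating CO2 as a budgeted resource could look like in practice, here is a minimal sketch of a toy production-planning problem, assuming SciPy is available. The products, coefficients, and budgets are all made up for illustration; the only point is that a carbon budget enters the optimization exactly like an energy budget.

```python
# Toy production-planning sketch (hypothetical numbers): CO2 is budgeted
# exactly like energy, as one more constrained "resource."
from scipy.optimize import linprog

# Two hypothetical products; we maximize value 3*x1 + 2*x2
# (linprog minimizes, hence the negated objective).
c = [-3, -2]

A_ub = [
    [2, 1],  # energy used per unit of product 1 and product 2
    [5, 1],  # CO2 emitted per unit of product 1 and product 2
]
b_ub = [100, 120]  # energy budget, CO2 budget (the finite "negative resource")

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(result.x)  # production plan that respects both budgets
```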

Efficiency can simply be defined as the processes and technologies that allow the continuous reduction in the use of all these resources in the production process or, as some have said more eloquently, the “elimination of waste in production.”

Now that we have detailed the split, the question remains: which side of the equation is moving faster? Is it consumption growth, or is it the efficiency effect on resource use? I think the answer has become very clear: the planet has been warming at a much faster pace, and the efforts to reduce the effects have been largely insufficient. Indeed, the reason efficiency has seen dramatic enhancements in the last two centuries is that its goals were aligned with those of the capitalistic enterprise. But it is also fair to say that until recently, efficiency never tackled greenhouse gases, for instance, as a “resource to be optimized,” because that ran contrary to the capitalistic imperative of maximizing short-term profits.

Instead, we can see the irony in the fact that the “elimination of waste in production,” which ultimately helps produce more at a lower cost, actually leads to an “accumulation of waste in consumption,” as we buy more of the cheaper things we don’t need. In fact, upon close inspection, isn’t that “production/consumption” dichotomy itself quite misleading, since the production process itself involves the “consumption of finite resources”? Is it not, then, a linear process of continuous consumption that can only be broken if production relies on renewable resources or on a circular economy? The tragedy with the speed of growth is that it has put us out of sync with the rhythm of nature.

Artificial Intelligence

Confronted with this existential issue, AI can play two roles: one beneficial in the fight against global warming, and the other detrimental.

How AI can help

As we have alluded to earlier, Climate Change is the result of an extremely complex system with a huge number of parameters. AI can play a key role in:

  • Measuring GHG emissions, modeling and forecasting climate change, and optimizing emissions reductions.
  • Developing smart grids — in particular, determining the optimal size and location of solar and wind projects.
  • Supporting the elimination of waste through the smart design of manufacturing plants.
  • Supporting the design of new, more efficient materials by simulating their properties using Neural Networks.

A team of researchers laid out these promises of AI’s impact on Climate Change in a detailed roadmap presented at COP28 in Dubai in December 2023.

How AI can worsen the situation

First and foremost, as everyone who conducts intellectual activities knows, thinking requires energy! When you have been focused on a deep thinking exercise for a while, you feel the need to stock up on some fruit or sugar, or your brain will stop working. Well, it turns out these Neural Networks that are computing trillions of operations per second are also very energy-hungry! It has been estimated, for instance, that AI worldwide could use as much energy in 2027 as all of Sweden!

More importantly, however, the biggest risks come from AI goals focused on maximizing consumption. This is not necessarily a premeditated evil plan concocted by a secret sect, but the simple consequence of AI being owned by corporations whose primary goal is consumption growth. AI will accordingly serve this goal while finding the “best” ways to circumvent regulations. Think, for instance, of the Fast Fashion trends mentioned earlier: AI can help understand and shape consumer trends and optimize supply chains to facilitate even more frequent fashion changes and more frequent purchases.

The second, indirect effect of AI is that it will create more free time for laborers. While this can be commendable in itself, the risk is that the extra free time will be spent on increased consumption. Hannah Arendt foresaw this many decades ago when speaking about the spare time of the Animal Laborans freed by technological progress:

[…] the spare time of the animal laborans is never spent in anything but consumption, and the more time left to him, the greedier and more craving his appetites. That these appetites become more sophisticated, so that consumption is no longer restricted to the necessities but, on the contrary, mainly concentrates on the superfluities of life, does not change the character of this society, but harbors the grave danger that eventually no object of the world will be safe from consumption and annihilation through consumption

Chapter 7: The Crisis of Inequality

The combination of the Digital Revolution and the Industrial Revolution concepts gave us breakthroughs that exponentially increased the speed and efficiency of production. This proved to be an explosive mix that was an undeniable force accelerating economic growth and ushering in an unprecedented era of economic prosperity. Overall, this was the tide that lifted all boats, dramatically reducing poverty levels, improving standards of living, and increasing average life expectancies.

The story is a bit more nuanced, though, when we look at inequality, as illustrated by the charts below: sure, the Industrial Revolution did usher in a new era of economic prosperity, but it was also an era in which global inequality worsened. Indeed, several interesting facts emerge from the charts below:

  1. Inequality is not new and has always existed, with the top 10% getting between 50% and 60% of global income and the bottom 50% getting around 15% or less
  2. Inequality became slightly worse with the Industrial Revolution, with the top 10% increasing their share from 50% to 60% by 1920 and the bottom 50% dropping from ~15% to below 10%
  3. The three decades after the Second World War saw a relative smoothening, with the emergence of a true middle class and a drop in the share of the top 10% of earners…
  4. … but the Digital Revolution seems to have deepened inequalities again in the US (and other developed countries), with the top 10% of earners dramatically increasing their share back to 1920s levels between 1980 and 2010
Global Income Inequality — Source: Chancel and Piketty (2021)

There are, of course, several forces at play here, including developing countries catching up, changes in US fiscal policy starting in the early 1980s, and globalization trends, but it is also clear that the Industrial Revolution and the Technological/Digital revolutions had their role to play by increasingly favoring the shift of power towards the owners of capital vs. the operators of capital (i.e., the General Motors of the Industrial Revolution and the Google, Microsoft, Meta, and Apple of the Digital Revolution). This is detailed at length in Thomas Piketty's book Capital in the Twenty-First Century.

What’s interesting about rising inequality is how it has impacted the middle class. Essentially, since the 1990s, technology has gradually appeared to replace the “routine tasks” typically performed by the middle-skilled labor force. British economist Daniel Susskind calls this phenomenon the “hollowing out of the middle,” or “polarization,” in his book A World Without Work. It is illustrated by the chart below, showing the percentage-point decrease in the employment share of middle-skilled workers in the 20 years between 1995 and 2015, relative to the increase for low-skilled and high-skilled workers.

Percentage-point change in share of total employment, 1995–2015 — Source: Daniel Susskind, A World Without Work

This is not surprising. After all, middle-skilled work is easier to automate because its “routine” and predictable nature makes it easier to replicate with rule-based programming. High-skilled work, on the other hand, tends to be much less routine, more creative, and less predictable. But what about low-skilled jobs? Here too, the pattern is not surprising: low-skilled jobs tend to require manual skills, and manual skills have, in fact, proved much harder to automate than intellectual ones. We can recall once again Hans Moravec, whom we introduced earlier and who astutely observed that reasoning overall requires much less computation than sensorimotor and perception skills.

AI will most likely exacerbate this “hollowing out of the middle” trend, particularly by pushing its boundaries upward toward more “skilled” knowledge workers. Indeed, if we adopt a simplified view of skilled knowledge workers as problem solvers who solve the individual cases presented to them, we can easily imagine how LLMs could start displacing these jobs with a prompt such as:

“Imagine you are a skilled [Replace with any job title]; how would you solve the following [Replace with a list of issues]?”

This last uber-prompt will likely become the prompt to kill all prompts! Of course, you can add a “management” layer on top.

“Imagine you are the manager of the AI agent solving problems for you; how would you enhance the responses of the AI agents working for you?”

You then build a feedback loop from there, ensuring the responses improve over time through the manager's feedback.
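For concreteness, here is a minimal sketch of what such a worker/manager loop could look like in Python. The `call_llm` function, the `solve` helper, and the example job are hypothetical placeholders (the model call is stubbed so the snippet runs); the prompts are simply the templates quoted above.

```python
# Sketch of the worker/manager loop described above. `call_llm` is a
# hypothetical stand-in for an LLM API, stubbed here so the snippet runs.

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an actual model.
    return f"[model response to: {prompt[:60]}...]"

def worker(job_title: str, issue: str) -> str:
    return call_llm(f"Imagine you are a skilled {job_title}; "
                    f"how would you solve the following issue: {issue}?")

def manager_feedback(draft: str) -> str:
    return call_llm("Imagine you are the manager of the AI agent solving problems "
                    f"for you; how would you enhance this response?\n{draft}")

def solve(job_title: str, issue: str, rounds: int = 2) -> str:
    answer = worker(job_title, issue)
    for _ in range(rounds):  # feedback loop: manager critiques, worker revises
        critique = manager_feedback(answer)
        answer = call_llm(f"Revise your answer using this feedback:\n{critique}\n"
                          f"Previous answer:\n{answer}")
    return answer

print(solve("tax accountant", "a client with income in three countries"))
```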

The irony, of course, is that this “intelligence” relies on the previous solutions of clever humans. The ingenuity of the individual who found a clever solution to a particular problem has now been stolen by the AI — with little credit to the individual. This is not unlike the AI art generation platform that I have been using in this article: I feel it is unfair that a completely untalented painter such as myself can generate these paintings by essentially “stealing” from the hundreds of thousands of real artists who “fed” the AI.

This will undoubtedly cause the displacement of many jobs — and already has started to. Consider, for example, that Disney is looking into AI to write movie scripts, which apparently came in quite handy during the writers’ strike in 2023. There is an “efficiency” argument — as there always is with technology — but also a belief that AI “can write better movies than professionals.” This is controversial, and many would argue that the creative quality is mediocre and would only work for formulaic commercial fare. That may be true, but that is the case for a technology that is barely a year old. What will it be in 5 or 10 years?

Sam Altman, OpenAI’s CEO, acknowledges that AI will create significant wealth but that this wealth will be largely skewed towards the owners of capital. He — and many other tech leaders — argues that a Universal Basic Income financed by increased taxation on capital will become a necessity, and that this UBI, combined with the cheaper products and resources enabled by AI, will help maintain standards of living. That is perhaps true, but it is not a very reassuring view of the future, as humans seek more than income in work; they also seek purpose, as we know.

An AI like ChatGPT or Midjourney holds a mirror to our society. It regurgitates, albeit with wonderful eloquence and a superhuman capacity for synthesis and speed, what the collective mind represents. If geniuses make progress by standing on the shoulders of giants, as Newton famously said, then AI may make progress by standing on our shoulders and crushing us in the process!

Conclusion: Beyond Intelligence — A Spiritual and Political Awakening?

We started this article by relating the pushback that Galileo received for stating that the earth was not at the center of the universe and that, galactically speaking at least, it held no special place. This was a devastating realization, in particular for the Church, as it started shaking the foundations of faith, but humans were at least left with the comfort that our species still held a special place because it was the most creative and the smartest among all known species. Along with the Enlightenment and the Age of Reason, this realization contributed to the shift from humans defining themselves primarily as spiritual beings to humans defining their uniqueness as rational beings, masters of their own destinies.

That comfort is being challenged again today, leading to a more profound question: If intelligence, at least as traditionally defined, is no longer the unique defining quality of humans, then which qualities would fill that void? I postulate that it will eventually lead back full circle to spirituality and the seeking of meaning beyond mere rationality.

Perhaps once the dust settles, the biggest impact of the AI revolution on our collective human psyche is not us marveling at how special and amazing AI is, but rather the realization of how non-special and non-mysterious we humans are since a machine relying on wires and some basic mathematical formulas can emulate what we do. Even the mystery of our Free Will will be put to the test as we compare ourselves to these machines that are deterministic by nature: If these machines that consume data as input and spit out data as a result of a function can give the illusion of Free Will, is Free Will itself an illusion then?

I think all these realizations will lead to a profound revolt within individuals to affirm their uniqueness, and perhaps also a true revolution (in the literal sense) in societies as they rebel against their perceived increasing insignificance. Humans may have to redefine what makes us special, and we most likely have to look for that outside the Intelligence concept.

Intelligence and rationality dictated, for example, that we should always try to maximize efficiency. Historically, efficiency came at the expense of quality, and this was later corrected by giving proper emphasis to both efficiency and quality. I think it is now time to redefine quality itself: the quality of the product is important, but it has to be secondary to the quality of life. In fact, the quality of the product can only be a sub-element of the quality of life, the other sub-elements being the quality of the environment and of society. In this new framing, rather than seeking efficiency for efficiency’s sake, we need to redefine the goal as enhancing quality for all these stakeholders, grounded in a deeper, more profound rationale for the “why” of the effort, where efficiency is not the end per se but a means to a higher end.

In this new focus on quality of life, even our business vocabulary needs to change.

  • Consumers (i.e., individuals whose primary role, in the eyes of capitalism, is to “consume” the products companies produce) become citizens (i.e., individuals who want to enhance their quality of life)
  • Target Markets (i.e., collections of consumers whose sole purpose is to generate revenue for the company) should be redefined as communities (i.e., citizens who seek to enhance the quality of life of their neighbors)
  • Human Resources (i.e., designating employees as a finite resource to be consumed) should be redefined as Human Potential (i.e., an infinite source of creativity and innovation that needs to be nurtured and encouraged)
  • Satisfying customer needs (i.e., Adair) can become fulfilling an aspiration.
  • Creating wealth should never come at the expense of generating happiness
  • The attention to the Environment and Global Warming needs to be reframed: not an externality to be mitigated, but an actual precious resource to be cultivated, because it is the source of life.

Imagining this better future and defining these goals are the endeavors to which we humans need to dedicate our efforts and towards which we need to channel Artificial Intelligence, instead of simply leaving AI in the hands of corporate interests, where it will continue to optimize efficiency and product quality alone.

“Technological dominance precedes economic dominance and political dominance,” said France’s digital minister in December 2023 as the E.U. finalized an agreement on regulating AI. That belief is the ethos of Silicon Valley, a blind faith that Technology alone can “make the world a better place,” can “improve the lives of as many as possible” (Google’s mission), can “build community and bring the world closer together” (Facebook’s mission), can “ensure that artificial general intelligence […] benefits all of humanity” (OpenAI’s mission). And yet we have seen this ethos shaken by the forces of capitalism over and over again, most recently with OpenAI, which started as a non-profit and became a for-profit, and then went through major turmoil in November 2023 that led to a reshuffling of its board, adding more traditional corporate board members and removing some of the “altruists.”

It is time to reshuffle our priorities and give prominence to politics over technology if we want our democracies to thrive. It is time to bring back our capacity for consensus building and Action as meant by Hannah Arendt and avoid her prophetic statement about Technology: “The fulfillment of this liberation wish [by Automation], therefore, like the fulfillment of wishes in fairy tales, comes at a moment when it can only be self-defeating. It is a society of laborers which is about to be liberated from the fetters of labor, and this society does no longer know of those other higher and more meaningful activities for the sake of which this freedom would deserve to be won.” What we are confronted with, she adds, “is the prospect of a society of laborers without labor, that is, without the only activity left to them. Surely, nothing could be worse.”
