AI needs the humanities, right now.

Bruno Kunzler
Published in ZERO42 · 10 min read · Jul 22, 2020
To robot or not to robot, that is the question.

If you think artificial intelligence is not for you, read this.

It starts in our language

It’s often difficult to describe what artificial intelligence is. While there are many definitions out there, one way to make sense of it is through etymology. Where did the words themselves come from?

Let’s start with intelligence. It comes from two Latin words: “inter” and “legere”. Inter means “between”. Legere, “to choose”. Intelligence, thus, is the ability to choose between. Artificial means “relative to art” or “relative to labor”. Combining both, we can conclude that AI is something created by labor that can choose between options. While very simple, this tells us a lot about the field: not only about what it currently is, but about what it was and what it can be.

It is about people

Contrary to common belief, it was not Alan Turing who coined the term “artificial intelligence”. In 1950, he gave us the theoretical basis for the field, but it was not until 1956 that those two words were put together.

Held over approximately eight weeks, the Dartmouth Summer Research Project on Artificial Intelligence brought together 10 to 20 scientists from various fields. Marvin Minsky, John McCarthy, Claude Shannon, Nathaniel Rochester and Allen Newell were some of the scientists — mostly mathematicians — who participated in the workshop. There was, however, another important figure among them: Herbert Simon. Unlike the other participants, Simon was not a full-blown mathematician. His Wikipedia page describes him as a “political scientist, economist, cognitive scientist and computer scientist”. He was awarded the Nobel Prize in Economics and major prizes in psychology. With this unusual combination, he is now regarded as one of the founders of the field of artificial intelligence.

Although they diverge from the concepts we commonly hold about AI today, Simon’s interests all converge on one grand field of inquiry, which he describes perfectly:

“I am a monomaniac. What I am a monomaniac about is decision-making.”

To understand his obsession with decision-making and what it means for the fields he touched — especially AI — we need to take a look at some non-futuristic-disruptive-exponential things first.

It reflects history

Humanity did not start coding decision-makers from nothing. We already had centuries — if not millennia — of accumulated knowledge on a very important question: how do humans decide? The humanities are, at least on some level, dedicated to understanding this question. Albeit with different approaches, each of its branches has developed its own methods to arrive at models of decision-making.

We first started with very subjective inquiries into the nature of decisions. Is there a God that controls our actions? What are the morals and principles we should follow? Is there a meaning to everything we do? Each of those questions called for further observation of the human spirit, soul or nature, depending on the philosopher of your choice.

But then, as rationalism came crashing in during the late 17th and early 18th centuries with the works of men like Leibniz and Newton, we started to develop a more objective approach to human investigations in general. What was once discussed in words started to be modeled with numbers and formulas; what was once the subject of conversations between bohemians, or of the introspection of solitary men and women, found its way into laboratories and controlled experiments. That’s when we stopped being humans and became “rational agents”.

In transforming the structure through which knowledge was created, rationalism also changed the nature of knowledge. Adam Smith’s lesser-known theory of moral sentiments gave way to Bentham’s hedonic calculus and utilitarianism. Freud’s theory of the id, ego and superego now coexists with William James’ laboratories and Skinner’s probabilistic approach to human behavior. Morals and ethics started sharing the stage with individuals maximizing their own happiness and minimizing their suffering. Intuition lost ground to structured datasets and efficiency.

In the middle of the 20th century, all those concepts were being discussed, merged, neglected and accepted by different fields. Born around 1950, AI is one of the children of such shifts in the structure of human knowledge. It thus embodies one central paradox brought forth by those changes: is it better to optimize decisions or to humanize them?

Bonus! A very short primer on AI development

I’ll try to summarize one core concept of AI very briefly. There are hyperlinks scattered throughout, so dig in if you are into it. The development of AI is currently divided into three levels:

  • Narrow intelligence (an AI that performs a limited number of tasks very well);
  • General intelligence (which has the same level of intelligence as humans);
  • Super intelligence (which is more intelligent than us).

The paradox of optimization and humanization can be seen in each one of those phases: quite practically in the first and still only theoretically in the others.

It is a need of the present

Optimizing biases?

PredPol and HunchLab are two different products that try to predict crimes. Their algorithms run in a given city — Los Angeles or Chicago, for example — where they learn about crime patterns. Their objective is to help the police by predicting the areas in which future crimes have the greatest probability of occurring. Since their only task is predictive policing, they are considered a narrow kind of AI. The algorithm estimates the probability of crime in each area of the city by analyzing various indicators, such as past crimes, complaints, income level, weather, moon phase, and so on. When you do that for the whole city, the resulting map has “hotspots” marked on it. Those are the areas the police will patrol most heavily.

While this sounds like a great idea, misunderstanding the environment in which AI acts can have bad consequences. A prominent critique of these algorithms is that they do not predict crimes, but only where the police will go next. Since the presence of a police officer increases the chance of spotting a crime (regardless of location), the same hotspots will always be reinforced: when you send cops somewhere, they will spot crimes. Those crimes are fed back into the dataset, making it more likely to mark that same area as a hotspot. When the area is colored, more cops are sent there and spot even more crimes. It is a cycle that reinforces itself daily.
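To make the feedback loop concrete, here is a minimal, hypothetical simulation — not PredPol’s or HunchLab’s actual model, just a toy sketch of the critique, with made-up numbers — in which every area of a city has the same true crime rate, but patrols make crime more visible wherever they are sent:

```python
import random

# Toy model (hypothetical numbers): five areas with the SAME true crime rate.
# Recorded crime differs only because patrolled areas are watched more closely.
TRUE_CRIMES_PER_DAY = 10        # actual crimes per area per day, everywhere
DETECTION_WITH_PATROL = 0.9     # share of crimes recorded when patrolled
DETECTION_WITHOUT_PATROL = 0.3  # share of crimes recorded otherwise

recorded = [12, 10, 10, 10, 8]  # slightly uneven historical records

for day in range(30):
    # "Predict" hotspots: patrol the two areas with the most recorded crime.
    hotspots = sorted(range(5), key=lambda a: recorded[a], reverse=True)[:2]
    for area in range(5):
        rate = DETECTION_WITH_PATROL if area in hotspots else DETECTION_WITHOUT_PATROL
        spotted = sum(random.random() < rate for _ in range(TRUE_CRIMES_PER_DAY))
        recorded[area] += spotted  # today's records drive tomorrow's "prediction"

print(recorded)  # the area that started slightly "hotter" runs away from the pack
```

Even though every area has the same underlying crime rate in this sketch, the areas that start with slightly more recorded crime keep getting patrolled, keep generating records, and never lose their hotspot label — exactly the reinforcement cycle described above.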

What makes it worse is that, when police departments start using those algorithms, they feed them with the data they already have. And we all know that there are race-related issues with policing across the Western world. So the first hotspots will be marked in the neighborhoods that already have the most recorded crimes, which is the same as saying the poorest neighborhoods and those with the most minorities. Without proper knowledge of the environment in which the AI is being deployed, we will not optimize policing, but racism and prejudice.

All those problems make it very clear that human rights activists, privacy experts, sociologists and political scientists need to get closer to this discussion and actively participate in the development and deployment of such algorithms. This need is already understood by major companies and organizations — such as Black in AI and Palantir — and it will only grow in the next few years, as we observe the conflicts created when we try to optimize a human environment.

It was questioned in the past

Nature or nurture?

Echoing the discussions we had about our own minds during the last century, there is a growing debate about how much of an algorithm’s knowledge should be learned over its lifetime and how much should be innate. Yes, the old debate of nature versus nurture.

This might seem like a far-fetched question, but it has both technical and political implications for AI. The debate was first brought up by Yann LeCun, a computer scientist at NYU and director of Facebook Artificial Intelligence Research (who came to machine learning through philosophy and language), and Gary Marcus, a psychologist at NYU and founder of Geometric Intelligence (a startup now owned by Uber’s AI group). Both researchers base their arguments on knowledge generated in areas such as cognitive psychology, child development and learning, language acquisition, and philosophy.

Yann LeCun argues that much of the success of modern AI techniques can be traced back to the fact that they build in almost no assumptions about their data. That is when they can learn the most, and most efficiently, creating their own representations of the world. Gary Marcus paints another picture. For him, giving AI a “richer set of primitives and representations than just pixels” is key for us to achieve real general intelligence. Concepts such as objects, sets, places and spatio-temporal continuity could be innate to our machines and help them build up knowledge faster and more reliably.

Creating — or not — a nature for machines is an interesting question from a political standpoint. Going a bit further into the future, will broad concepts such as “objects, places and temporal continuity” be our only concerns? Or are we going to venture into something more delicate, including values and moralities? If so, would they be categorical imperatives, as Kant suggested? How much would we learn about our own values while trying to emulate them? And, apart from that, what happens if a robot has no nature at all and learns every single concept from its environment? Who is responsible for the robot? Its owner? The “family” with whom it “grew up”? Society as a whole?

All those questions are both technical — “will it be more efficient?” — and political — “what is the nature we must create?”. And all of them show how valuable it is for the field to understand the intricacies of a simulated agent acting in a human environment.

It is a matter of the future

What is a human-aligned AI?

And now, for the final example, let’s venture into the crazy world of Artificial Super Intelligence (ASI). Although this is a very futuristic problem, there are already serious research institutes tackling it. The Machine Intelligence Research Institute (MIRI) is one of them. In 2017 they released a technical agenda, from which I’ll highlight two of the problems they pinpoint.

The first is a problem called “Vingean Reflection”. Just a tiny bit of scientific history before we start: in 1965, Irving John Good wrote a paper in which he described the achievement of smarter-than-human AI through an “intelligence explosion”. That would be the scenario in which a simple machine creates, by itself, a more complex one. Enhancing complexity and intelligence with each iteration, these machines would finally arrive at ASI. If that is a possibility, we must pay a lot of attention to that first simple machine. Because even if we are sure that it is aligned with our own values, we cannot be sure that the machine it creates — which is more intelligent than itself — will still maintain the initial alignment. The mere fact that the father is “dumber” than the child makes it impossible for the father to pre-compute all of his child’s actions, which forces the father to reason about his child only abstractly. That is the Vingean principle.

With that in mind, we have the problem of “Vingean Reflection”: “How can agents reliably reason about agents which are smarter than themselves, without violating the Vingean principle?” How can we predict what smarter-than-human machines will do, if the mere fact that they are smarter than us prevents us from predicting them?

The difficulty in arriving at a satisfying answer to Vingean Reflection makes the next problem a bit more interesting. Imagine that we get something wrong and end up creating a machine that is not aligned with our interests. We should, then, be able to fix it. But why would something smarter than us let us fix it? It is like imagining a rat trying to fix humans because we experiment on them.

Even though they are more intelligent than us, such agents must reason as if they were incomplete, flawed and potentially dangerous: this is the corrigibility problem. In other words, how can we create an agent that feels insecure about its own actions? And this begs the question: what have decades of mass media and advertising taught us about creating insecurity, the feeling of incompleteness and the need for self-development? How could we emulate that in AI? Which symbols, language, concepts or stories could we use to create such an environment?

Humanities, rejoice!

These are just three brief examples of how the field of AI actually draws on the humanities to grow. But there is a lot beyond that. The pool of opportunities and issues that AI is already influencing is huge, and there are nowhere near enough people working on them and thinking about them.

What we propose here is a refreshed way to look at the development of artificial intelligence. Neither objective nor subjective. Neither past nor future. The people, the words, the problems and the history of AI make the importance of intersections clear. Its problems will only be properly solved by taking a more inclusive approach to knowledge.

Biology is already tackling this issue, but the humanities are lagging behind in this debate. So grab your master’s in economics, your PhD in linguistics, your interest in political science or your years as a psychologist and help the world solve some of its future challenges. Even if you do not care about machines, remember that they operate in a human world. This is an exploration that might not only help the field of artificial intelligence, but also reveal a lot more about humanity itself.

Disclaimer: This text was written two years ago. I am publishing it exactly as it was back then, with the same links, which is why some of them will be outdated. Nevertheless, the need for more humanities in AI still exists, and it is now even more prominent than it was in 2018.

(In Portuguese): If you speak Portuguese, rest assured that this text will soon come out in our language as well. In the meantime, take a look at Hey, AI, which we at zero42 made together with Nama. And I cannot fail to thank Beatrys, Eduardo and Lucas, who helped me bring this text into the world.
