Artificial Intelligence — An Oxymoron?

There’s an uncritical belief in what algorithms are capable of. We need to critically study the algorithmically processed data streams that make up the foundation of AI.

— Jakob Svensson, professor in media and communication studies, Malmö University

With everyone trying to get on the AI train, it is important that we critically study the algorithmically processed data streams that form the foundation of AI.

Today there seems to be a belief that, through digital technologies, we can objectively quantify and track all types of human behaviour and social interaction. The separation between human and machine is believed to collapse if only we have enough data and computing power to upgrade our old algorithmic processor (i.e., the body). Homo sapiens will transform into Homo Deus (see Yuval Noah Harari’s book of the same name), or should we rather say Homo Datus?

Algorithms that scan large amounts of data (big data) are said to reveal patterns we did not even know existed. We are thus provided with a full understanding of the world, one free from human distortion and contextual circumstances.

Algorithms are believed to be able to create insights from all of our data traces, and from these create an image of the world in full resolution. Consequently, it also becomes possible to make accurate predictions about people’s behaviour. If algorithms understand us better than we understand ourselves, wouldn’t it be logical to let them make decisions for us?

Predictions are no longer a matter of probabilities but of flawless forecasts. Individuals may be arrested as a preventive measure because their processed data exhaust indicates that they will commit crimes in the future.

All this can, of course, be challenged. The assumption that algorithms objectively compile data recalls positivism: the idea that the “messy” social world can be studied with natural-science methods and that empirical observations can yield predictions.

For positivist researchers, our data in combination with the data-processing algorithms offers the opportunity to calculate and compute society and culture. But is it really possible to reduce people to mathematical calculations? (See Jonna Bornemark’s Det omätbaras renässans.)

Yet another issue is that the term data is rarely defined. This results in an uncritical belief in what algorithmically processed data can achieve. Etymologically, data means “what is given”. But our data is not given; it is rather taken, or extracted. It would be etymologically more accurate to talk about information that has been transformed into ones and zeroes so that algorithms can process it. Data is also coloured by societal values and norms, and thus not neutral. If we know that data carries these inherent biases, how can we consider the algorithms’ predictions reliable?

To be human is to be incomplete and disorganized. Can this messiness be captured through AI? Maybe there are things that cannot be represented by algorithmically processed data. Perhaps this is why we feel alienated from our “datafied” alter egos when they show up, for example, in targeted advertising. We recognize ourselves, but not quite (the so-called uncanny valley). We become an army of Saga Noréns (Sonya Cross in the US version; Elise Wassermann in the UK) from the popular TV series “The Bridge” (“The Tunnel” in the UK): direct, unambiguous and compliant with all kinds of laws and regulations, but also slightly inflexible.

Algorithms need data, but not everything about the world is given. Considering the trajectory AI is currently on, there is no reason to believe that it can fully replicate people in the near future. Today, AI is vertical (executive), while people also think horizontally (creatively). Human–computer competitions show that teams of people and computers solve problems better than teams of computers alone.

The notion of intelligence should include random, creative and unruly elements — elements that vertically oriented data-processing algorithms seem to find troublesome. It is thus unlikely that algorithms will be able to acquire the horizontal qualities that are needed to solve the problems humanity is facing. And if they can’t, isn’t then artificial intelligence an oxymoron?

This article was originally published as “Artificiell intelligens — en oxymoron?” in Swedish daily Svenska Dagbladet.

Jakob Svensson is professor in media and communication studies at Malmö University where he is affiliated with the research environments Medea and The Data Society Research Programme: Advancing Digitalisation Studies. Svensson also leads the project Behind the Algorithm, funded by the Swedish Research Council. He is currently a guest researcher at the Weizenbaum Institute for the Networked Society, Berlin, Germany.
