Beyond the Social Dilemma

Lauren Waardenburg
KIN Research
3 min read · Sep 27, 2020
Image courtesy of Christine Brauckmann

On a lazy Sunday afternoon, I watched the new Netflix documentary The Social Dilemma. I have to admit, I was nudged to watch it by one of my social media platforms, no further comment. It goes without saying that I'm extremely impressed by all the collaborators (Tristan Harris, Shoshana Zuboff, Cathy O'Neil, and so many others) who had the courage to start this debate a couple of years ago and who continue it tirelessly. However, after finishing the documentary (and having the immediate urge to remove every social media app from my phone, then realizing it would be easier to just throw away my phone altogether, then thinking better of it), I could not help but wonder:

Is this really all we have to say?

One of the main themes of the documentary was the extremely advanced nature of the machine learning algorithms used by, for example, Facebook and Google. As a researcher studying how artificial intelligence (AI) is used in practice, I am continuously struck by how we, as a society, talk about and accept that AI is difficult or impossible to understand. The documentary referred, for example, to "very advanced algorithms" and to algorithms that have a "life of their own", and showed us the famous quote by Arthur C. Clarke:

Any sufficiently advanced technology is indistinguishable from magic.

But what does it mean if we talk about AI as going "beyond human understanding", and accept that it does? And what does it mean if we proclaim that companies such as Google or Facebook use these technologies to manipulate their users?

It means that whatever is related to the technology is not the responsibility of its users.

It means that we legitimize our own inability to act.

By referring to the technology as unintelligible, policymakers can limit their attention and regulation to what is needed to develop these technologies: the extremely vast amounts of data and how those data are collected. In addition, we can have debates about how to develop technical solutions that make AI more explainable or interpretable. In other words, policymakers and users can outsource the responsibility for how these technologies and platforms turn out in practice to those who create and maintain them.

What we do is create an artificial divide between the all-knowing technology and the unknowing user.

This allows us to blame "them" without taking any responsibility ourselves, as users of AI-driven technologies in practice. It also means that we overlook that, many times, these technologies aren't as advanced as we initially believe they are. We overlook that, very often, many human actions are required before we even reach the user stage. Think, for example, of Facebook's "data curators", who have to clean the data before it can even be used to train Facebook's algorithms. Or of "data translators", who make AI outputs usable in practice and are now increasingly employed in many organizations.

Have any policymakers considered including all the human activities necessary to make AI-driven technologies work?

As long as we, as users, stick to the belief that we cannot understand these increasingly advanced technologies, we will contribute to their obscurity and to the increasing power they might have over us. We won't be able to see the human actions behind these technologies; we will believe that what they produce goes "beyond" us; and the rules and regulations we create won't sufficiently protect us.

We, as users, as policymakers, as anyone in any way involved with the use of AI, cannot legitimately remain inactive just because we do not understand the technology.

We need to educate ourselves about what AI really means.

Of course, we don't all have to become computer scientists, but we do have to give up our "easy way out". We need to make sure we understand what AI can and cannot do, so that we can understand what it takes to use it in practice. We need to stop hiding and start trying to understand, as best we can. Only then will we see that we are all responsible. Only then will we be able to act sufficiently. And only then will we be able to change the path that we have chosen collectively.


Assistant Professor at IESEG School of Management | Research on AI and the future of work | Occasionally writing about fieldwork