Let’s Stop Anthropomorphizing AI And Start Engaging More Broadly

The views in this article reflect the author’s personal opinions.

Cartoon © Arend van Dam.

The fear of apocalypse often crowds out reason and, more sadly, distracts attention from the actual problems technology poses today. I had hoped for a more thoughtful and deliberative discussion of the societal implications of artificial intelligence, but the media outcry orchestrated around OpenAI’s GPT-2 research update has brought back a headache I have felt before, among other times in the fall of 2017 after DeepMind released its impressive AlphaGo Zero results.

Back then, I responded in a Dutch blog post to a naive media discussion about non-human super-intelligence taking over, and about whether we should consider endowing this technology with human rights. Since similarly extraterrestrial narratives are being thrown around this week, I decided to more or less translate my Dutch argument into English, in an effort to propose a more effective path forward for assessing the impact of technology and for engaging with the research and media communities.

Unfortunately, in my work as a researcher in control engineering, machine learning and their legal implications, I too often come across AI scientists and leaders who have embraced an anthropomorphic view of their life’s work: in simple words, the tendency to project human characteristics onto machines or algorithms, and then to grant them the dark human qualities of greed and destructiveness as well.

Hollywood, and the media more broadly, have diligently contributed to a widespread delusion that machines may eventually develop feelings and intuition of their own and, out of self-preservation, a desire to outwit the human tyrant (think of recent films such as Her (2013), Transcendence (2014) or Ex Machina (2015)).

But while science fiction deserves its place in society, I would argue that researchers should be more mindful in engaging with the media. A classic example of an irresponsible “expert view” that I have seen multiple times (I won’t point fingers) is extrapolating the decreasing cost and increasing scale of computation in modern computers, and comparing these numbers to their human counterparts: the cost of human labor and the “number of computations done in the human brain” (a highly questionable reduction of all human activity). From such a back-of-the-envelope calculation, one can then jump to the conclusion that all human activity may be taken over within the next few decades: a conveniently alarming narrative that AI leaders can then use to plug their work on safe, reliable and human-compatible AI.
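To make the caricature concrete, here is a minimal sketch of the kind of extrapolation I mean. Every number in it (the cost per operation, the oft-cited ~10^16 operations per second for the brain, the cost-halving time, the hourly wage) is an illustrative assumption, not a measurement.

```python
# A caricature of the back-of-the-envelope extrapolation described above.
# Every constant below is an illustrative assumption, not a measured fact.

BRAIN_OPS_PER_SECOND = 1e16     # popular (and questionable) estimate of "brain compute"
COST_PER_GFLOP_USD = 0.03       # assumed current hardware cost per 10^9 operations
COST_HALVING_YEARS = 2.0        # assumed pace of cost decline, Moore's-law style
HUMAN_HOURLY_WAGE_USD = 25.0    # assumed cost of an hour of human labor

def hourly_compute_cost(years_from_now: float) -> float:
    """Assumed cost (USD) of one hour of 'brain-scale' computation."""
    ops_per_hour = BRAIN_OPS_PER_SECOND * 3600
    cost_today = ops_per_hour / 1e9 * COST_PER_GFLOP_USD
    return cost_today * 0.5 ** (years_from_now / COST_HALVING_YEARS)

# The "convenient" conclusion: find when the curve crosses human wages.
for year in range(100):
    if hourly_compute_cost(year) < HUMAN_HOURLY_WAGE_USD:
        print(f"Compute undercuts human labor after ~{year} years (given these inputs).")
        break
```

The point is not the number this prints; it is that the “few decades” conclusion is already baked into the assumed inputs, which is exactly why reducing human activity to operations per second deserves scrutiny rather than headlines.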

In my opinion, this framing (by acclaimed experts and trustworthy media outlets) only encourages the public to see people and technology as ‘us’ versus ‘them’ rather than as intimately entangled, feeding fear and unfounded distrust. The emphasis on apocalyptic scenarios draws attention away from the immense impact the technology is already having.

This is not to criticize the OpenAI team directly, as they do seem to focus on some important issues of misuse by potentially malicious actors. However, the fact that the research release was orchestrated with sensation-seeking media rather than in conversation with a broader community of scholars and with affected communities did cause considerable upset. And with so much public attention on such an important topic, it feels inappropriate to end the blog post with a recruitment ad, the last line reading: “And if you’re excited about working on cutting-edge language models (and thinking through their policy implications), we’re hiring.”

But what keeps me most dissatisfied is not knowing how, and by whom, the withheld models are interrogated and their “policy implications” determined. In all fairness, the OpenAI team consists mostly of technical people, and it has been hiring some policy staff; the website, however, shows none of their names or faces. Instead, it lists the billionaires who initiated the institute. Ironically, some of those backers have a track record of spreading dystopian visions of AI while holding stakes in large companies that benefit from deploying the current technology in questionable ways, often without the safety and dignity of users and affected communities in mind.

Let us first remind ourselves that behind the technology we create and ‘release’ into the world are people and organizations that ‘bake’ their values into an algorithm, making numerous choices and assumptions along the way. Artificial intelligence is not alchemy but a technology like any other, designed through numerous trade-offs, and in sensitive settings it should be developed just as carefully as bridges, power stations or healthcare processes.

The potentially disruptive power of natural language processing over our media, and indirectly over our democratic processes, infrastructures and institutions, demands that we revisit how responsibility for safety and justice is safeguarded. It also raises fundamental questions about whether to develop such a technology at all.

This inevitably means integrating social perspectives and policy-relevant domain knowledge throughout the development and integration of new algorithmic efforts, instead of treating questions about implications as an afterthought. Most importantly, it means involving and empowering the democratically legitimate authorities that can meaningfully develop the rules and regulations needed to counter the inevitable misuse of technology.

I know this last point may sound naive, especially in the United States. The recently established Partnership on AI counts more than 80 member organizations, over half of them non-profits. Yet the words “legal” and “regulation” do not appear a single time, nor does any public agency show up on the list of members. Best practices and policy positions will not suffice. Let’s see what the Ethics Guidelines and Policy Recommendations currently being drafted by a high-level expert group for the European Commission yield. Hopefully, they succeed in providing direction on how the General Data Protection Regulation (GDPR) should be complemented to control malicious uses of AI technologies.

In the meantime, for OpenAI to be truly open, and for the broader research community to have a meaningful impact on the actual societal consequences, let’s actively engage with the institutions that are, or should be, in charge of safeguarding and regulating the use of AI. Let’s motivate research not as a general-purpose technology but as critical infrastructure. Let’s not be naive about technology being a neutral means; instead, let’s ground our scientific questions in values that reflect the society all of us want to live in. But first, let’s question the structures we operate in and how these might prevent us from making any meaningful contribution.