Algorithms

berenice gagne
Published in Anthropocene 2050
Apr 13, 2021


Evanescent and yet ubiquitous, artificial intelligence intrudes on urban spaces, promising ecological cities where security competes with well-being; it also intrudes on political processes, on professional lives and even on our personal lives.

Find all the references at the end of the post.

Artificial intelligence! For someone like me, born in the 1970s, AI awakens a whole literary and cinematic imagination: an imagination colonized by the Star Wars droids R2-D2 and C-3PO, inhabited by robots with more or less human forms, more or less sexualized, benevolent or in rebellion against their human creators, and often in love with freedom. Over time the representation of AI has become dematerialized, evanescent and yet ubiquitous. Not a week goes by without a headline about its achievements: DeepMind, Google's AI lab, can now predict how proteins fold into 3D shapes, which will help with the discovery of new drugs. Not a week goes by, either, without a headline about its risks, real or feared: for instance, the growing military use of autonomous killing machines, be they drones, robots or biomimetic weapons.

The core of the smart city is not technology: it’s people!

What about the city? The connected city, the digital city, the smart city: all promise a better quality of life and guaranteed security in a sustainable urban environment, thanks to artificial intelligence and a wide range of mobile applications and sensors scattered through a fully interconnected urban space that regulates the different urban functions in real time. Far from the promised dematerialization, these integrated services require a great deal of technology and infrastructure. Antoine Courmont, scientific director of the "Cities and Digital" chair at Sciences Po, notes that the relevance of urban digital projects is now being questioned in terms of their environmental footprint. Citizen mobilization is thus challenging the purpose of digital development projects and, more broadly, of major land use planning projects.

Indeed, the core of the smart city is not technology: it's people! To be truly smart, the smart city must involve its citizens. Teresa Scassa, Canada Research Chair in Information Law and Policy, believes that the digital city needs public data governance that involves the population. Hervé Rivano, head of the Agora team on wireless networks for digital cities at INSA Lyon, stresses the need to train citizens to understand the societal challenges of urban digital projects and to take part in the debate democratically. He refers to the concept of the "learning city": the general idea is to identify and develop learning practices that municipalities can implement to foster residents' skills throughout their lives. Youth clubs, community education, fab labs: all forms of informal learning help reduce "illectronism", or digital illiteracy. Without widespread digital literacy, the debate risks being monopolized by a handful of experts. In Technology Everywhere, Democracy Nowhere, Yaël Benayoun and Irénée Régnauld suggest putting democratic debate and citizen demands at the heart of technological choices. They remind us that the digitization of the world was never shaped by collective discussion; it is still possible to sort out the technologies we want from those we don't, without being dismissed as wanting to live like the Amish.

The age of surveillance capitalism or deliberative democracy

The stakes are high: while Antoine Courmont notes a decline in the smart city market, he observes the rise of the "safe city" market, where the balance between freedom and security is in tension. The use of technology to control territory and people is growing all over the world. When it comes to digital technology and freedom, China is regularly held up as a foil, especially for its complex "social credit" system, which assesses whether a person or a company is a good citizen by relying on big data analytics and mass surveillance to collect personal data. Yet it is in our own Western societies that sociologist Shoshana Zuboff analyzes the digital world's challenges to freedom. In The Age of Surveillance Capitalism, she presents the results of a 20-year investigation into the power of intelligent machines and the threat that this power represents for human freedom. She reveals the very human, not technological, choices that led to what she calls "surveillance capitalism": the model that monetizes our personal and behavioral data and produces algorithms that are supposed to predict and influence our choices. The GAFAM, the famous global tech companies (Google, Apple, Facebook, Amazon and Microsoft), have thus acquired a quasi-monopoly position and a power that worries the political world: they control the circulation of information, that is, the machinery of political mobilization. In Technofeudalism: A Critique of the Digital Economy, the economist Cédric Durand considers the digitalization of the world "a great regression". The return of monopolies, our dependence on platforms and the blurring of the boundary between the economic and the political, he argues, reactivate a certain idea of feudalism.

In this context, various voices are calling for a decentralization of the Web: they emphasize interoperability, the ability to exchange information between different platforms, as the key to opening up fairer competition between the tech giants and potential alternatives. Taiwan has chosen a different path: its Minister of Digital Affairs, Audrey Tang, herself a programmer and former "civic hacker", has launched an experiment with platforms for citizen deliberation and collaborative lawmaking, aiming at a deliberative democracy. Since 2014, the "vTaiwan" (virtual Taiwan) platform, managed by a community of hackers, has allowed anyone to submit proposals to the government on issues involving digital and new technologies. A weekly hackathon then selects the proposals that will be the subject of an online citizen forum. Given the limited number of participants, the Minister of Digital Affairs has set up a second citizen participation mechanism, "Join", which covers all subjects of national importance, to involve citizens more closely in political decision-making.

Who’s to decide?!

Artificial intelligence may indeed deprive us of our ability to decide. The research organization AlgorithmWatch studies algorithmic decision-making in the social sphere, whether it is used to predict or prescribe human action, or to make decisions automatically. Its very rich report "Automating Society Report 2020" highlights the growing influence of automated decision-making tools in Europe, in every field of activity (from social assistance to education, from health to finance and justice), and particularly in access to services and citizens' rights. It shows that the Covid-19 pandemic has accelerated the deployment of automated, data-based decision systems to protect public health. In Australia, the Commonwealth Scientific and Industrial Research Organisation is currently testing a "chatbot", i.e. a conversational application, intended to assist with a difficult medical decision: Edna (short for Electronic-DNA) interacts with patients wondering whether or not to undergo genetic tests that could identify health problems affecting them and their families.

Bias, the original sin of AI

The use of AI-based technologies to operate in society faces a well-identified hazard: biases, those systematic distortions, often revealing prejudices, that skew the results. One remembers, for instance, the unfortunate adventure of the chatbot Microsoft launched in 2016 to study language comprehension by chatting with teenagers on social media. It took less than 24 hours for the chatbot to post sexist, racist, Holocaust-denying and conspiracy-minded tweets, pushed by Internet users testing its limits and fed by the content of social media and by the expressions of its coders. In 2018, computer scientists Timnit Gebru, then with Microsoft Research, and Joy Buolamwini of the Massachusetts Institute of Technology found that leading facial recognition software was far less effective at identifying women and people of color (with African, Asian or Native American features) than at classifying white male faces. In the documentary Coded Bias, currently showing on a popular streaming platform starting with an N, Joy Buolamwini, who is Black, recounts how she had to wear a white mask simply to be detected by an interactive robot she worked with at the MIT Media Lab. "Bias is the original sin of artificial intelligence," says Dr. Ayanna Howard, a roboticist at Georgia Tech. In response to these repeated findings, during the summer of 2020, hundreds of AI and robotics researchers signed op-eds pledging to change the way their disciplines work. Tech companies have also created ethics research groups: this is how Google recruited computer scientist and activist Timnit Gebru... before firing her without explanation in early December 2020, at a time when Google stood accused of surveilling its activist employees.

Drudges of the click: the hidden bodies

In the magazine Le Grand Continent, the jurists Antonio Aloisi and Valerio De Stefano published a long reflection on work in the age of algorithms and on predictions of an imminent "end of human work", replaced by machines and software. According to them, technology can be regulated, and progress can go hand in hand with respect for, and the consolidation of, workers' rights. Like the sociologist Antonio Casilli in Waiting for Robots: An Inquiry into Click Work, they lift the curtain on the backstage of digital labor, far from the myth of an artificial intelligence liberating the human race from drudgery. They reveal the exploitation of cohorts of click drudges by platforms that reshape human work and make it more precarious. The "clickers" have to trigger processes, correct errors, code, repair the bicycles, scooters and cars of shared-mobility services, or even cook hamburgers, store and deliver packages, and stock the aisles of department stores. This hidden materiality of the digital world puts the human body back at the center: a body that seems weak, vulnerable and tired in a universe where the threat of the robot looms.

It is perhaps this physical threat that offers a clue to the first widely publicized roboticide in history: that of the hitchhiking robot hitchBOT, re-enacted by director Linda Blanchet in her play Killing Robots. hitchBOT, a vaguely anthropomorphic robot the size of a six-year-old, was created in Canada to study human-machine interactions. In 2014 it hitchhiked across Canada; in 2015, across Germany and then the United States, where, 15 days into its journey, it was found decapitated, dismembered and vandalized in Philadelphia. Was it ritual sabotage by Luddites (those anti-tech "machine-breaking" activists), the revenge of click workers, or simply the theft of its parts? The mystery remains, but the violence of this founding act says a lot about the complex relationship we have with robots, these sublimated doubles we have created. The question arises: should robots trust humans?

Music: Hot Butter, "Popcorn" (1972, a cover of Gershon Kingsley, 1968), the first international hit of electronic music.

Just for fun, and as a gift to those who have scrolled this far: the "Tears in Rain" monologue delivered by the replicant Roy Batty in Ridley Scott's Blade Runner (1982).

Sources

ALGORITHMWATCH, Automating Society Report 2020.

Antonio ALOISI, Valerio DE STEFANO, « Les algorithmes pour contremaîtres » (Le Grand Continent, 08/12/2020).

Yaël BENAYOUN, Irénée REGNAULD, Technologies partout, démocratie nulle part. Plaidoyer pour que les choix technologiques deviennent l’affaire de tous (FYP, 2020).

Fabien BENOIT, « Et si les choix technologiques devenaient l’affaire de tous ? » (Usbek & Rica, 09/10/2020).

David BERREBY, “Can We Make Our Robots Less Biased Than We Are?” (The New York Times, 22/11/2020).

Antonio A. CASILLI, En attendant les robots. Enquête sur le travail du clic (Seuil, 2019).

Davide CASTELVECCHI, "Is facial recognition too biased to be let loose?" (Nature, 18/11/2020).

Antoine COURMONT, Patrick LE GALES (dir.), Gouverner la ville numérique (PUF, 2019).

Brice COUTURIER, « Les GAFAM, une menace pour la démocratie ? » (France culture, 25/11/2020).

Cédric DURAND, Technoféodalisme. Critique de l’économie numérique (La Découverte, 2020).

Elle HUNT, “Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter” (The Guardian, 24/03/2016).

David IRELAND, Clara GAFF, Dana KAI BRADFORD, “Introducing Edna: the chatbot trained to help patients make a difficult medical decision” (The Conversation, 08/12/2020).

Shalini KANTAYYA, Coded Bias (2020).

Joanna KAVENNA, “Shoshana Zuboff: ‘Surveillance capitalism is an assault on human autonomy’” (The Guardian, 04/10/2019).

Claire LEGROS, « La ville numérique a besoin d’une gouvernance publique des données, qui implique la population » (Le Monde, 13/10/2020).

Claire LEGROS, « Les mouvements citoyens contribuent à politiser le débat sur la ville numérique » (Le Monde, 09/10/2020).

Le Monde avec AFP, « Le licenciement polémique de Timnit Gebru, qui travaillait chez Google sur les questions d’éthique liées à l’IA » (Le Monde, 05/12/2020).

LIEU UNIQUE, « J’ai enquêté deux ans sur la mort du robot auto-stoppeur » (Usbek & Rica, 05/11/2019).

Frank PASQUALE, “‘Machines set loose to slaughter’: the dangerous rise of military AI” (The Guardian, 15/10/2020).

Cécile PELTIER, « Civic tech : Taïwan, laboratoire de délibération citoyenne » (Le Monde, 09/10/2020).

Hervé RIVANO, « Une ville intelligente et démocratique donne forcément une place centrale à la formation » (Le Monde, 10/10/2020).

Ian SAMPLE, “DeepMind AI cracks 50-year-old problem of protein folding” (The Guardian, 30/11/2020).

Pierre SEL, « L’utilisation par la Chine du système de crédit social pour gérer l’épidémie de Covid-19 » (Fondation pour la Recherche Stratégique, 28/04/2020).

Samuele VINANZI, Massimiliano PATACCHIOLA, Antonio CHELLA, Angelo CANGELOSI, “Would a robot trust you? Developmental robotics model of trust and theory of mind” (Philosophical Transactions of the Royal Society B, 11/03/2019).

Shoshana ZUBOFF, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (Profile, 2019).

Jingle music: Muthoni Drummer Queen — Suzie Noma

hitchBOT (2015) © Sven Hoppe, picture-alliance/dpa/AP

