If you don’t build it, they won’t come

CRIEM CIRM
L’Urbanologue | The Urbanologist

Written by Ana Brandusescu

Regardless of its utility, responsible artificial intelligence (Responsible AI) is a term that is here to stay: a response to technology that is driven by private sector interests and opaque to many of its users. Declarations, guidelines, and frameworks are being co-created, launched, and promoted across academia, industry, governments, and civil society. Responsible AI can be interpreted as accountable and fair algorithms that reduce bias across a range of factors including gender, race, and sexual orientation, and it resonates with the "don't be evil" mantra. Being responsible, in this case, could simply mean building better code.

Industry reasoning is often "we're doing nothing evil; we're just building an algorithm." Yet building algorithms also makes a lot of money and provides a competitive advantage (Purdy and Daugherty 2017), which seems the more likely driver of their creation. This concern came into sharp focus during a conference on AI in Montreal. What I witnessed was a philosophical divide between computer scientists (e.g. those working on AI/deep learning) on one side and social scientists, ethicists, bankers, and lawyers on the other, expressed through differences in mentality. The first group held an optimistic predisposition toward building technical solutions: "You can build AI (you just need the technology)." The second took a skeptical view of AI replicating humans: "You cannot build AI (you will not achieve sentience)." The discussion, which focused on how AI could be built (technically, even 'ethically'), missed the question of whether AI should be built in the first place.

Colorful spinning wheel (Credit: Slendah)

The ethics spin

A large technology consulting firm's presentation stood out with an ethics framework that thoughtfully describes the AI production process and includes questions to ask along the way. Yet the framework glosses over decision-making power: more specifically, what the decision-making process looks like for each part of the framework. How many times do tech companies decide not to build AI, and why?

The "should we build it" debate seems to focus exclusively on banning potentially weaponizable AI, meaning technology that supports lethal autonomous weapons systems (Kaspersen 2016). What is considered weaponizable should itself be questioned. Yet facial recognition technology is not problematic enough for this large tech industry player, which chose to develop and test emotion recognition software. This type of AI is called affective computing (Picard 2003); it is also known as artificial emotional intelligence, emotion AI, and emotion recognition. Affective computing benefits largely from scanning millions and millions of faces (that is, from facial recognition technology). Presented and tested in the e-gaming sector, the software collects and tracks hundreds of data points from your face and body movements, all in the name of predicting your emotions. The software was further marketed as a cure for mental illness ("Wouldn't it be great if AI could eradicate depression off the planet?"), next-level tech solutionism. The presentation ended with a clip from Fantasia. While a Disney clip may appear irrelevant in this context, even the entertainment industry has vested interests in using emotion recognition technology to detect and classify our emotions, also for financial gain (Pringle 2017; Varghese 2019).

Datasets used to train emotion recognition technology are based on stereotypes, which fail to portray the reality of what constitutes someone being sad or happy (Chen 2019). We know that facial recognition technology can inaccurately detect gender and race, and can result in systematic bias against specific groups (i.e. discrimination), especially against people of colour (Benjamin 2019; Eubanks 2018; Keyes 2018; Noble 2018; Raji and Buolamwini 2019; White 2019). Evidence shows that predictive policing software is discriminatory: it significantly overpredicts the likelihood that black people will commit future crimes and underpredicts it for white people (Larson, Mattu, Kirchner, and Angwin 2016). The risks of the technology being misused are enormous and can lead to state- and industry-wide surveillance (Wiewiórowski 2019). Technology firms can be culpable. Murgia and Yang (2019) report that US tech companies like Microsoft are collaborating on facial recognition technology research with China's National University of Defense Technology, which operates under China's Central Military Commission. As aptly identified by Lee (2019), Taddonio (2019), and Raval (2019), these research collaborations matter because the AI built can support the detention of a million Uighur Muslims in internment camps, with millions more under surveillance. It is clear that algorithmic discrimination is not just a technical problem.

Ask hard questions, join collective action

Will anyone ever be sanctioned? This means serious sanctions, not Facebook's 5 billion USD fine, which resulted in a rise in its stock price (Patel 2019). Will anyone ever go to jail? And who will pay? These are the questions that keep me up at night.

So what can we do? We need to ask questions of those who get paid to build AI infrastructure and systems. We need to question the people who finance and manage all of this, where knowledge gets shared and partnerships are brokered. Responsible AI needs to include regulations and sanctions that are enforced, so that AI systems and the humans involved in designing, building, and implementing them are held accountable and take responsibility for the consequences, good and bad.

Those who have access to knowledge, decision-makers, and developers have unparalleled advantages and influence over the future of these systems. Gatherings that discuss responsible AI are neither affordable nor accessible (e.g. no livestream), allowing industry players to control who participates in and shapes the AI discourse. In the meantime, to counter those hoarding power, people need to create their own rooms (McNealy 2019). Power can shift through collective action.

Power structures in AI (and I don't mean power grids) and their impact on product design and development require interrogation by outsiders, meaning almost everyone who isn't affiliated with the technology industry. Those who need to question these structures include regulators, social workers, ethnographers, grassroots organisations, and local activists working on digital and non-digital projects, to name a few. Collective action also includes unpacking power dynamics in user interactions (Chowdhury 2019). More effort is needed to bridge digital divides in skills and knowledge. Let's focus on discussing, recognizing, and understanding "AI for bad." Let's redirect our efforts and draw some hard lines on what not to build. This includes working towards erasing the footprint of facial recognition technology (Keyes 2019), which means dismantling the software as well as the hardware that make up facial recognition systems. Local governments across the US have started to do so; San Francisco, for example, has banned facial recognition software (Buglewicz 2019). This ban should extend to production and deployment outside of government services.

The "don't build" aspect of AI and the need for sanctions come down to institutional power. Multiple views need to be heard, listened to, and examined in order to understand why others see (potential) problems in the AI solutions being developed and in the decisions made about them. In addition, we need to address how to upskill those voices and surmount these digital divides. Here, interdisciplinarity and multidisciplinarity are key to collective action. "Collective" cannot just mean "we," humanity making right or wrong decisions; it's never that simple. So let's move beyond band-aid solutions that uphold problematic systems. To change existing systems, institutional power needs to be unpacked. The direction of AI development is not inevitable. Social justice is more about responsible people and less about responsible AI. However, some people will never be responsible, so policies, laws, and an enforced regulatory system are needed to prevent irresponsible behaviour. Legislation that imposes sanctions needs to be enacted. Given that not all things in AI should be built, how can we have an open and healthy discussion between industry and everyone else impacted by AI?

Thanks to Dr Renée Sieber (@re_sieber) for the insightful comments and edits. Ana Brandusescu (@anabmap) is the Professor of Practice for 2019–2020 at McGill University’s CIRM, where she is researching the political economy of AI.

This post is the sole responsibility of its author. It was originally published on CIRM’s website on November 28, 2019.



Centre de recherches interdisciplinaires en études montréalaises | Centre for interdisciplinary research on Montreal