
The Dark Side of Urban Artificial Intelligence

Addressing the environmental and social impacts of algorithms

Scytl
Published in EDGE Elections
5 min read · Feb 14, 2024

In June 2023, the Barcelona-based think tank CIDOB organized an international seminar on “The dark side of urban artificial intelligence: addressing the environmental and social impact of algorithms.” In a follow-up to this seminar, one of our researchers had the opportunity to contribute to a briefing on these topics.

The briefing examines the opportunities and challenges that Artificial Intelligence (AI) brings when used by (urban) governments, both for the environment and, more broadly, for society. It relies on a distinction between “AI for sustainability” and “sustainable AI”: the former refers to the use of algorithms to contribute to ecologically desirable developments, while the latter focuses on the environmental costs of AI systems themselves. The briefing then uses these two interconnected perspectives to also illustrate the relationship between AI and democracy.

In this post we focus on the relationship between AI and democracy (a previous post has already explored the link between digital technologies and the environment).

The Social Impact of Algorithms: “AI for democracy” and “democratic AI”

When thinking about AI and democracy in the current context, the first things that come to mind are likely its challenges and risks. Indeed, many algorithms have shortcomings: they report inaccurate results, unfairly discriminate against vulnerabilized groups, cannot be inspected, and/or are untrustworthy. Likewise, algorithms can be weaponized by bad actors, regardless of their original purpose. In the case of elections, good examples are algorithms used to spread disinformation, to target and demobilize specific voter groups, and even to launch cyberattacks.

However, algorithms can also be a force for good. For example, they can make the workings of public administrations more effective and efficient. This is what we label “AI for democracy”, and the Atlas of Urban AI by the Global Observatory of Urban Artificial Intelligence provides successful examples from more than 71 cities all over the world: from citizen communication channels in Barcelona to air quality measurement in Los Angeles, and even a project to map and assess the accessibility of Mexico City’s sidewalks.

But how can the above-mentioned problems be avoided when using AI for the delivery of public services? This is where “democratic AI” comes into play. Unless the design, development, and inspection of publicly used algorithms (or those with social impacts) are carried out in line with democratic standards and in compliance with human rights principles, the risks of inaccuracy, discrimination, lack of transparency, and untrustworthiness cannot be ruled out. In fact, the current scenario seems closer to one of autocratic centralization, in which powerful corporations and authoritarian countries control the algorithms.

At the same time, AI for democracy can become a tool for a more “democratic AI”. When used to power existing or novel democratic processes, such as citizen participation initiatives or democratic deliberation, AI can help overcome some of its main problems.

Thanks to the use of technology, it is possible to design and implement initiatives that allow citizens to have a say in public decision-making beyond periodic elections. One example is participation platforms that give citizens a space to suggest initiatives, discuss them, and even assign them a budget. Another example is citizen deliberation, which helps create understanding among different views and perspectives on a complex issue. AI could streamline these initiatives in several ways: through translation and real-time interpretation, by organizing and summarizing information, by taking over some of the roles of human facilitators, or even by generating new points of potential consensus within groups.
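To make the “organizing and summarizing information” idea slightly more concrete, the sketch below is a minimal, hypothetical illustration and is not taken from the briefing or from any specific platform: it groups citizen proposals by topic so that similar suggestions could be reviewed and summarized together. It assumes Python with scikit-learn, and the proposal texts are invented placeholders.

```python
# Minimal sketch: cluster citizen proposals by topic (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented example proposals standing in for real platform submissions.
proposals = [
    "Add protected bike lanes on the main avenue",
    "More trees and shade in the central square",
    "Extend night bus service to the outer districts",
    "Plant community gardens in unused lots",
    "Improve bus frequency during weekends",
    "Safer cycling routes to schools",
]

# Represent each proposal as a TF-IDF vector and group similar ones.
vectors = TfidfVectorizer(stop_words="english").fit_transform(proposals)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Print the proposals grouped by cluster, ready for human summarization.
for cluster in sorted(set(labels)):
    print(f"Topic {cluster}:")
    for text, label in zip(proposals, labels):
        if label == cluster:
            print("  -", text)
```

In a real participation platform, such grouping would only be a first pass to support human facilitators, not a replacement for their judgment.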

Lessons Learned and Challenges Ahead

Based on the conclusions of the seminar and the briefing, four main findings are identified:

a) Regulation and governance come with their own challenges

It seems obvious that the first step towards a democratic and sustainable use of AI is regulation. But how can AI be regulated? There are important difficulties in creating a legal framework for emerging technologies, and in the case of AI the European Union’s AI Act is a good example of this. Before settling on a normative framework, important questions need to be addressed: what should we worry about? What should any rules target? And how should they be enforced?

b) Human resources: capacity-building and talent attraction

It is no secret that there is a shortage of AI experts. In the absence of sufficient human capabilities, it is unlikely that an AI initiative will deliver on any of its promises. It is therefore of utmost importance that any public administration or organization wanting to use AI ensures it has teams with the right skills. Until the shortage of expertise is addressed, alliances between different actors to share resources and learn from past experiences may also help bridge this gap.

c) Public procurement is key

Public administrations should always have the final say when deciding which technologies are used for the delivery of public services. Through procurement processes, they should be able to evaluate the algorithms they are offered. Three types of information are key in the case of AI: technical transparency (the code), procedural transparency (the algorithm’s purpose and how it reaches its outcomes), and “explainability” (the rules that apply when an algorithm impacts someone personally).

d) Citizen participation and co-creation to enhance diversity

If public administrations decide which algorithms they use, citizens and the general public should be able to understand how those decisions are made, and even why algorithms are considered necessary in the first place. They should also be able to voice any concerns or unsatisfactory experiences with the use of AI in public services. In this regard, allowing anyone to access and inspect the algorithms used by public administrations is an existing practice in elections that could be adopted for any use of AI.

If you want to know more, check out the briefing available at CIDOB’s website.

This article was written by Adrià Rodríguez-Pérez, PhD and Public Policy Researcher at Scytl.

