How do we make sure applications of artificial intelligence (AI) are not only socially responsible, but also help us become better stewards of our living planet?
The intense fires in the Amazon, the rapid melting of glaciers and ice sheets, and the continued loss of biodiversity all illustrate that our planet is changing at a dangerous pace. At the same time, artificial intelligence is likely to change the way we perceive and respond to social and environmental changes. Machine learning and deep learning methods are already being applied within a number of research fields related to climate change and environmental monitoring, and investments in applications of these technologies in agriculture, forestry and the extraction of marine resources seem to be increasing rapidly.
But how do we make sure AI is not only responsible in terms of transparency and accountability, but also to our living planet?
In 2015, a number of colleagues and I tried to summarize a set of principles to help guide the development of machine intelligence in the “Biosphere Code”. Yet it is clear that the rapid development and diffusion of data mining and machine intelligence, combined with rapid progress in sensor technology and robotics, create both new opportunities and challenges.
A number of intriguing on-the-ground applications of AI showcased in 2018 by USAID, advances in deep learning for Earth system modeling, applications of machine learning to help build urban resilience, the potential to use AI to boost mobile technologies that help communities reduce their vulnerability to extreme weather events, and automated ecological monitoring all show the massive potential for AI to promote sustainability.
While nascent in terms of both scale and impact, early applications of what I would like to call “planetary responsible AI” should be viewed as examples of technological “niche-innovations” (Geels and colleagues, 2017) with the potential to scale rapidly and affect ecological systems and institutions across multiple geographies.
We should stay alert to potential social and environmental risks as these technologies diffuse into sectors of critical importance for both people and the planet: agriculture, forestry and the extraction of marine resources, to mention just a few.
There is a rapidly growing literature and public debate about the socio-economic, political and ethical challenges created by early applications of AI and automation, including issues like potential biases, lack of interpretability in algorithmic decision-making, privacy concerns, and the potential loss of job opportunities. If unrestrained, the AI revolution may very well amplify social vulnerability, create new systemic risks, and help accelerate climate and environmental breakdown.
Yet the potential for AI to become planetary responsible is there, provided it is given the proper framing. Planetary responsible AI:
- Addresses a clear challenge: that is, planetary responsible AI is possible when its intended use is to address specific human-environmental challenges. Examples include disaster early warning and response, tackling the illegal extraction of natural resources such as fisheries and timber, combating hate speech online (as exemplified in this project by UN Global Pulse), or helping ‘amplify the voices of the forgotten’, as Meena Palaniappan, CEO of Atma Connect, so eloquently put it.
- Is embedded in local knowledge: that is, combines the strengths of data-driven analysis with the ‘co-production of knowledge’ (see Tengö and colleagues, 2014) to secure a deep social and ecological understanding of the system of interest, such as a local farming area, as explored by Daniel Jimenez and colleagues at CGIAR Big Data in Agriculture.
- Builds unexpected alliances: that is, brings together communities that normally don’t work together, such as experts on wildlife crime and computer scientists automating the detection of illegal online trade in wildlife, as explored by Jennifer Jacquet at New York University.
- Accelerates experimentation and learning: that is, acknowledges that innovation is bound to sometimes fail, and that these failures are opportunities for improvement and learning (as explored by Jon Simonsson and colleagues at the Swedish Committee for Technological Development and Ethics, KOMET).
- Is based on principles of responsible use: that is, creates engagement, acknowledges diversity, aims to be open source, builds on human-centric design, and is aware of distributional risks and biases, to mention just a few.
Rapid advances in artificial intelligence create new opportunities for the sustainability sciences, and for on-the-ground applications that help build resilience. Some might even argue that the attainment of the Sustainable Development Goals requires civil society, the public and private sectors, and academia to fully explore the innovation power embedded in these technologies. Let’s make sure these powers are used in a planetary responsible way.
This blogpost builds on the discussions hosted by the Consulate General of Sweden in New York on October 15th, 2019, including representatives from U.S. and Swedish academia, the Swedish government, Google, Ericsson Research, USAID and the UN agencies UNDP and UN Global Pulse. For more information, see aipeopleplanet.earth