What is Sustainable Artificial Intelligence?
Exploring different ways to approach sustainability in the field of artificial intelligence
Both ‘sustainability’ and ‘artificial intelligence’ can be hard concepts to grapple with. I do not believe I can pin down two incredibly complex terms in one article. Rather, I think of this as a short exploration of different ways to define sustainable artificial intelligence (AI). If you have comments or thoughts, they would be very much appreciated.
These thoughts come after a discussion on Sustainable AI I moderated on the 21st of May as part of my role at the Norwegian Artificial Intelligence Research Consortium. I also wanted to do some thinking before the Sustainable AI conference the 15th-17th of June that will be hosted at the University of Bonn.
Futures, goals and indicators
Pertaining to sustainable development, the report Our Common Future, also known as the Brundtland Report, published in October 1987, states:
“Humanity has the ability to make development sustainable to ensure that it meets the needs of the present without compromising the ability of future generations to meet their own needs. The concept of sustainable development does imply limits — not absolute limits but limitations imposed by the present state of technology and social organization on environmental resources and by the ability of the biosphere to absorb the effects of human activities.”
This is a broad, ever-changing definition of sustainability due to its focus on ‘present’, ‘future’ and ‘needs’. In this framework, sustainability is constantly being redefined and challenged.
These notions were to some extent based on the economic resource-based forecasting in the Limits to Growth report:
“The Limits to Growth (LTG) is a 1972 report on the exponential economic and population growth with a finite supply of resources, studied by computer simulation.”
There had been thinking before this including, but of course not limited to:
- 1662 essay Sylva by John Evelyn (1620–1706) on the management of natural resources (in particular forestry in this case).
- 1713 Sylvicultura oeconomica by Hans Carl von Carlowitz (1645–1714), developing the concept of managing forests for sustained yield.
- 1949 A Sand County Almanac by Aldo Leopold (1887–1948) with his land ethic (an ecologically based land ethic that rejects strictly human-centered views of the environment and focuses on the preservation of healthy, self-renewing ecosystems).
- 1962 Silent Spring by Rachel Carson (1907–1964), with the relationship between economic growth and environmental degradation.
- 1966 essay The Economics of the Coming Spaceship Earth by Kenneth E. Boulding (1910–1993), drawing lines between economic and ecological systems with limited pools of resources.
- 1968 article The Tragedy of the Commons by Garrett Hardin (1915–2003), which popularized the term “tragedy of the commons” (open-access resource systems may collapse due to overuse).
As such, although Limits to Growth (1972) and Our Common Future (1987) popularised sustainability, there were earlier threads of thought along these lines.
Later convening work in UN-led conferences has played a part in developing a framework to operationalise commitment from nations.
- 1992 Conference on Environment and Development (Earth Summit), which produced the Rio Declaration on Environment and Development, consisting of 27 principles intended to guide countries in future sustainable development. It was signed by over 175 countries.
- 1995 World Summit on Social Development produced a Copenhagen Declaration on Social Development. A resulting 1996 report, “Shaping the 21st Century”, turned some of these commitments into six “International Development Goals” that could be monitored.
These had similar content and form to the eventual Millennium Development Goals (MDGs). The MDGs were established in 2000 with targets for 2015, following the adoption of the United Nations Millennium Declaration. The Millennium Declaration has eight chapters and key objectives, and was adopted by 189 world leaders during the Millennium Summit, 6th to 8th of September 2000.
In 2016 these MDGs were succeeded by the UN Sustainable Development Goals (SDGs).
You have likely seen the colours and numbers around as they are visual and often seen in presentations by various businesses and governments:
THE 17 GOALS | Sustainable Development
The 2030 Agenda for Sustainable Development, adopted by all United Nations Member States in 2015, provides a shared…
It is important to note that these 17 goals also have indicators detailing progress towards each target.
“The global indicator framework includes 231 unique indicators. Please note that the total number of indicators listed in the global indicator framework of SDG indicators is 247.”
An attempt at displaying the available data can be seen in an online SDG tracker (made by Global Change Data Lab, a registered charity in England and Wales) and it is listed on the official website of the United Nations:
Global indicator framework for the Sustainable Development Goals and targets of the 2030 Agenda for Sustainable…
Within these indicators, the Internet is, for example, mentioned four times.
Machine learning, artificial intelligence, automation, and robotics receive no mention.
- Should these concepts be included?
- If so, why should they (or AI alone) be included?
I do not claim AI is as important as the Internet, although I do believe that to some extent AI can have a horizontal influence across various sectors and areas of society. Especially with recent examples such as Google’s LaMDA, announced in May 2021, a conversational AI system for language that Google intends to integrate across its search portal, voice assistant, and workplace tools.
That being said:
- Notions of resource use and social goals more broadly are relevant for the field of AI.
- Further risks or possibilities for sustainability could be considered in large or small AI systems.
There are of course many terms that more broadly do not feature in the goals or the indicators, but these goals are still relevant for the conceptual and operational aspects involved in developing and applying AI.
Sustainable AI and the sustainability of AI
One example could be by Aimee Van Wynsberghe, one of the hosts of the conference on Sustainable AI, in her article Sustainable AI: AI for sustainability and the sustainability of AI:
“I propose a definition of Sustainable AI; Sustainable AI is a movement to foster change in the entire lifecycle of AI products (i.e. idea generation, training, re-tuning, implementation, governance) towards greater ecological integrity and social justice.”
Wynsberghe also argues:
“Sustainability of AI is focused on sustainable data sources, power supplies, and infrastructures as a way of measuring and reducing the carbon footprint from training and/or tuning an algorithm. Addressing these aspects gets to the heart of ensuring the sustainability of AI for the environment.”
In her article she splits this into the sustainability of the system and the application of AI for more sustainable purposes:
“In short, the AI which is being proposed to power our society cannot, through its development and use, make our society unsustainable”
Wynsberghe argues for three actions we have to take, I have shortened these slightly, but they can be read in full within her article:
- “To do this, first, AI must be conceptualized as a social experiment conducted on society… it is then imperative that ethical safeguards are put in place to protect people and planet.”
- “…we need sustainable AI taskforces in governments who are actively engaged in seeking out expert opinions of the environmental impact of AI. From this, appropriate policy to reduce emissions and energy usage can be put into effect.”
- “…a ‘proportionality framework’ to assess whether training or tuning of an AI model for a particular task is proportional to the carbon footprint, and general environmental impact, of that training and/or tuning.”
This approach from Wynsberghe constructs a duality of sustainable AI systems and a thoughtful purpose in the application of AI. Both are important, and together they can be useful in building a way to approach sustainable AI as a concept.
As a simple two-point heuristic for a complex issue, sustainable AI is:
- The sustainability of the AI system itself throughout its lifecycle.
- The area of application where AI is being used and how it contributes to the broader agenda of sustainability.
There are other ways to approach sustainability.
Power and inequalities
It is important to consider power and inequalities, as they figure to some extent within the SDGs. These topics are often forgotten or ignored when artificial intelligence is discussed together with sustainability (although ‘bias’ is often mentioned).
Sustainable Development Goal 10 is reduced inequalities: what part do AI applications play in this regard?
I consider Weapons of Math Destruction by Cathy O’Neil to feature in this discussion, and it sparked a wide range of questions.
The recent film Coded Bias, alongside the research and advocacy by Joy Buolamwini, Timnit Gebru, Deb Raji, and Tawana Petty on the inequalities (in the form of bias) in AI systems, particularly facial recognition, is important here.
Coded Bias | Netflix
This documentary investigates the bias in algorithms after M.I.T. Media Lab researcher Joy Buolamwini uncovered flaws…
I personally believe another interesting discussion of this at length can be found in Kate Crawford’s book The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, because there are large questions both about the resource system built around artificial intelligence and about the delivery of services in various political contexts.
This is also about labour and minerals within planetary boundaries.
Power can to some extent create frameworks for the actions we take. This is not new, yet AI has become a large part of framing decision-making processes for large populations/citizens/users, depending on who you ask.
Another aspect of language models, and of large models trained on enormous data more generally, is their challenging computational needs and potential impacts on society. Companies, NGOs and governments attempt to handle this by employing various AI ethics teams. Yet as demonstrated by the firing of Timnit Gebru and Margaret Mitchell, the two co-leads of Google’s AI ethics team, before the launch of a new large language model, this is by no means an easy relationship.
Google made AI language the centerpiece of I/O while ignoring its troubled past at the company
At its I/O developer conference, Google outlined an ambitious future for its tools based on AI language models.
AI ethics teams can often have a narrow remit, and sustainability is not necessarily discussed within these contexts. Activities can vary from broad philosophical discussions of morality to contesting benchmarks in machine learning datasets. I believe part of what AI ethics is can be seen as a way to address difficult ethical issues in the application of services or products. At times it seems that codes of conduct or principles are made as a way to argue for moral supervision in a company.
AI ethics can be either a technical exercise performed with developers on the current delivery of applied AI, or a proactive, scenario-based thinking exercise that can help map issues in the application of AI.
It can also be important to challenge inferences in AI (decisions formed based on data or frameworks). Decisions are often extrapolated, so that the application to an unknown situation is made by assuming that existing trends or data will continue, or that similar methods will be applicable to a given situation.
Extrapolating may be difficult for social interactions, although not impossible, and therein lies a challenge more broadly for society (political influence or propaganda + AI being one prominent example).
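As a toy illustration of why naive extrapolation can mislead, consider fitting a linear trend to the early phase of a process that later saturates. The numbers below are synthetic and purely illustrative, not taken from any real dataset:

```python
# Toy illustration: a linear trend fitted to early observations
# extrapolates poorly once the underlying process saturates.
# All data here are synthetic, chosen only for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return a, my - a * mx

# An "adoption" curve that in reality saturates around 100,
# but we only observe its early, near-linear phase.
observed_x = [0, 1, 2, 3, 4]
observed_y = [5, 10, 19, 34, 50]

a, b = fit_line(observed_x, observed_y)
prediction_at_10 = a * 10 + b  # naive linear extrapolation
print(round(prediction_at_10, 1))  # far above the true ceiling of 100
```

The fitted line describes the observed points well, yet the extrapolated value overshoots the ceiling the process can actually reach; the same failure mode applies when AI systems extrapolate social trends.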
Data can still be important to see trends, and we can conclude that action needs to be taken for increased sustainability. One area often discussed that is needed to sustain life on planet earth is to address the urgent climate crisis.
Climate crisis and computational efficiency
What can often be heard is the discussion of carbon emissions and the trade-offs described by Strubell, Ganesh and McCallum. Their paper posed a pervasive question that is repeated in the AI community whenever discussions of climate arise: how much carbon does training a model emit?
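The kind of back-of-envelope accounting used in that line of work can be sketched as average power draw × training time × datacentre overhead × grid carbon intensity. The sketch below is mine, not the authors’ code, and the default values for PUE and carbon intensity are illustrative assumptions in the spirit of that paper, not measurements:

```python
# Back-of-envelope estimate of CO2 emitted by a training run,
# in the spirit of Strubell et al.'s accounting. The defaults
# (datacentre PUE and grid carbon intensity) are illustrative
# assumptions, not measured values for any specific provider.

def training_co2_kg(avg_power_kw, hours, pue=1.58, kg_co2_per_kwh=0.429):
    """Estimate kilograms of CO2 emitted by a training run.

    avg_power_kw   -- average combined draw of the hardware, in kW
    hours          -- wall-clock training time
    pue            -- power usage effectiveness (datacentre overhead)
    kg_co2_per_kwh -- carbon intensity of the electricity grid
    """
    energy_kwh = avg_power_kw * hours * pue
    return energy_kwh * kg_co2_per_kwh

# Example: eight GPUs drawing roughly 0.3 kW each for one week.
print(round(training_co2_kg(8 * 0.3, 24 * 7), 1))
```

Even this crude estimate makes the trade-off concrete: the result scales linearly with training time and hardware draw, and just as strongly with where the electricity comes from.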
There are arguments that AI can help in tackling the climate crisis. A community has appeared over the last few years in the field of AI focused on this question in particular.
In this sense it is a question of the trade-offs in application within the field of AI as mentioned by Wynsberghe, both the lifecycle system considerations and the applications in the field of AI.
If we think back to sustainable forest management I have previously thought about some examples and how AI could be useful:
Artificial Intelligence and forest management
How can we make the world more green and lush with the help of AI?
One attempt to address this is by building models differently, especially with more biologically-inspired computational systems. One example in Norway is the research group NordSTAR:
Artificial Intelligence (AI) is a field in computer science that attempts to reproduce traits of human intelligence in…
A more prominent example could be the startup AnotherBrain, focused on what they call ‘organic AI’, founded by Bruno Maisonnier, who previously founded Aldebaran Robotics, acquired by SoftBank Robotics in 2012.
As mentioned on their website:
“AnotherBrain has created a new kind of artificial intelligence, called Organic AI, very close to the functioning of the human brain and much more powerful than existing AI technologies. A new generation of AI to widen limits of possible and applications. Organic AI is self-learning, does not require big data for training, is very frugal in energy and therefore truly human-friendly.”
In this sense both the ‘frugality’ of the system and the application to address the climate crisis are necessary considerations. Additionally, it must be stressed that human-friendly does not necessarily mean planet-friendly.
Interdisciplinary collaboration and education
Complex systems require rethinking how education is delivered and how we collaborate in society. This is also the case for artificial intelligence.
Rethinking systems of AI and AI applications can mean broadly thinking about humanities and society. An example of funding related to this is the WASP-HS programme in Sweden:
WASP-HS - The Wallenberg AI, Autonomous Systems and Software Program - Humanities and Society
Healthcare and trustworthy AI, ethical issues, cyber security, legislation, empowering people, human-centered AI…
It is doubtful that AI engineers have the time or resources to dive into the historical frameworks of a given context where their systems are applied, its cultural peculiarities, or persisting systemic inequalities. That being said, AI engineers can have an interest or engagement in these topics, but approaching sustainability in society and nature will require both different educational backgrounds and diverse participation from different groups of people.
If you quantify actions in a society does it mean you can change it for the better?
This is about information and what we do with it as humans. However, it is also about social and ecological change.
We can amass an almost unlimited wealth of information (if measured in numbers), to attain what we desire, so to speak. Yet these large quantities of information may not automatically lead to the decisions we desire for a sustainable future.
The purpose(s) for which systems in the field of AI are built relate to the context of different communities. Since that is the case, it also relates to citizens and governance for populations in various areas.
Governance of AI for sustainability
Even though private companies are mentioned very often when AI is discussed, states play an increasingly prominent role in this. Then again, one can indeed say they have since the early development of AI (with military spending and funding of research). The interplay between various parts of society (also mentioned in SDG 16) is worth considering, and peace should not be forgotten when we discuss AI. Existential risk is one area being explored in discussions of AI. This does not have to be a Terminator or Skynet-like situation; it could simply be an advanced AI project that has unintended consequences on a large scale.
Governing within the field of AI is a matter that pertains to the state, be it through nongovernmental organisations, authoritarian regimes, citizens, informality, democracy, and so on:
- How does a state invest in AI?
- How does a region invest in AI?
- Who manages AI in the state?
- What application surfaces are invested in?
- How do states participate in international forums for AI?
- How does AI affect citizens in different countries?
These questions are not easily answered, yet I believe they are highly relevant to the sustainability of artificial intelligence.
What is sustained?
Sustainability is often viewed as an equal balancing act with set goals, but it involves negotiating a large web of relationships in our shared ecosystem. I do not believe in a perfect equilibrium of opportunities; however, we should strive for sustainability regardless.
These are some of my notes and thoughts on the topic of sustainable AI.
What do you think? How does sustainability and artificial intelligence relate to each other, and what actions can be taken for increased sustainability in the field of AI?
If you have comments or thoughts they would be very much appreciated. Feel free to comment on this article or tweet me to tell me what you think.
This is #1000daysofAI and you are reading article 504. I am writing one new article about or related to artificial intelligence for 1000 days. The first 500 days I wrote an article every day, and now from 500 to 1000 I write at a different pace.