Algorithmically Embodied Emissions

By reinforcing high-carbon practices, algorithmic information systems contribute to climate change.

Jutta Haider
Data & Society: Points
Feb 28, 2023


By Jutta Haider and Malte Rödl

An open laptop standing on a moss-covered tree stump in the forest. The display shows a Google search results page with images of forests.
Photo: Shutterstock, edited by Anni Hoffrén.

In its 2022 Environmental Report, Google (as in all its previous environmental reports) describes the company’s positive impact on the environment and all the good it does for the climate. Many platform companies — including Amazon, Facebook/Meta, Microsoft, and Twitter — publish similar reports, all declaring that their operations are green, sustainable, and carbon neutral or even carbon negative. These reports focus strongly on what are known as operational emissions, which for these businesses largely means the emissions from running servers and from heating or cooling office buildings. Those operational emissions can be mitigated through various offset programs or, as Google does, reduced by generating energy through wind and solar plants. But these emissions are only part of those a company or a product generates. Emissions also occur when equipment is produced and when it is no longer used, in processes like mining, manufacturing, transportation, repair, or recycling. In carbon accounting, those emissions are called embodied emissions, and they are much more difficult to measure (and, of course, to reduce). Yet it is essential to include them in sustainability accounting, as Google and Microsoft, for example, are beginning to do.

In some sustainability reports, such as Google’s, platform businesses give themselves credit for their ability to “empower users” by enabling them to make more sustainable choices. Yet these reports generally leave out the emissions impact that their algorithms’ ordinary decisions have, or could have: namely, the products and decisions that their intended functioning steers users toward. And there is no term to describe how information that is selected and presented through algorithmic curation can lead to environmental harm.

Therefore, we propose the concept of algorithmically embodied emissions to describe how the various everyday decisions facilitated by algorithmic information systems — including commercial search engines, social media or recommender systems — potentially contribute to climate change and environmental degradation. In this framework, the harm done to the environment and future generations can be viewed as a second-order algorithmic harm — one that occurs when facilitated algorithmic decisions are acted upon.

Algorithmic Assumptions

Say you want to look up information about “summer clothes.” You enter that query into a search engine like Google Search. In return you get a list of stores that sell summer clothes, online or in your neighbourhood, and pictures of fashion models showing off the clothes for sale. It’s what we expect: If you search for a thing, you will usually get results that show you where to buy it and what it costs. The autosuggest feature reinforces this, as does the “people also ask” feature. This is no accident: Rules, encoded in algorithms, are executed in order to make results appear in this way. The search engine interprets the search terms semantically; in our example, Google seems to be instructed to interpret the query as meaning that the searcher wants to buy new summer clothes!

Algorithmically embodied emissions are emissions potentially contained in the content that algorithmic information systems suggest as their default option.

But there are lots of other possible interpretations of the query. Maybe the person wants to find out what were considered to be summer clothes in a particular historical period; maybe they want to find out which colors in their already full wardrobe are best to wear this year; maybe they are interested in fabrics that have cooling effects in warmer temperatures. Or maybe they are actually interested in obtaining summer clothes, but only from organic or Fair Trade certified fabrics, or from a second-hand shop. They may also want to swap or sew their own summer clothes. A reader familiar with Google Search knows that to get to those more targeted search results, a different query is needed. That query would need to explicitly establish that the person searching has an interest that is different from buying new clothes.
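To make that default tangible, here is a minimal, purely hypothetical sketch of how such a fallback interpretation could be encoded. It is not Google’s actual logic; the intent labels, keyword lists, and the shopping fallback are our own illustrative assumptions.

```python
# Hypothetical sketch of a query interpreter that defaults to a
# commercial reading unless the searcher explicitly signals otherwise.
# Intent labels and keyword lists are illustrative assumptions only.

NON_COMMERCIAL_SIGNALS = {
    "historical": ["history", "1920s", "victorian"],
    "diy": ["sew", "sewing pattern", "make your own"],
    "second_hand": ["second-hand", "used", "swap", "thrift"],
    "materials": ["fabric", "linen", "cooling", "breathable"],
}

def interpret_query(query: str) -> str:
    """Return an intent label for a query; fall back to shopping."""
    q = query.lower()
    for intent, keywords in NON_COMMERCIAL_SIGNALS.items():
        if any(keyword in q for keyword in keywords):
            return intent
    # No explicit qualifier: the consumerist default wins.
    return "buy_new"

print(interpret_query("summer clothes"))              # buy_new
print(interpret_query("second-hand summer clothes"))  # second_hand
```

The point lies in the final line of the function: unless the searcher explicitly signals a different interest, the interpretation falls back to buying something new.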

A screenshot showing the results page for the Google Search “summer clothes” juxtaposed with the response by ChatGPT for the same prompt.
This screenshot of a Google Search results page juxtaposed with the response by ChatGPT illustrates how differently these two algorithmic information systems interpret the same query/prompt.

This is precisely the point we are trying to make in introducing the concept of algorithmically embodied emissions: Algorithms make semantic assumptions, and these assumptions are based on values and ideologies. They normalize what people think of when they type “summer clothes” or any other query into a search engine, because users have internalised and come to anticipate the algorithms’ interpretation. More importantly, algorithms do not (and probably cannot) make these assumptions explicit. Thus, over time, a societally homogeneous understanding of seemingly innocuous things like “summer clothes” is consolidated, irrespective of advertisements or the actual rules that rank the search results. In the case of activities or things, this understanding is often driven by a consumerist ethos that reinforces high-carbon practices (and presumably also by other factors that tend to push commercial operations up the ranking, such as update frequency or extensive search engine optimization).

Some other examples: If you submit the names of two big cities to Google or any other search engine, in many cases you will get flight options. If you enter “dinner,” you will be presented with a list of restaurants and recipes that take into account all kinds of food-related concerns — from diets to allergies, price, speed, or health — but the way they are selected and ranked clearly takes no account of the emissions that each recipe, restaurant visit, or take-out might produce.
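As a rough illustration of that last point, consider a toy ranking function, written under our own assumptions: the fields, the weights, and the very idea of a per-result emissions estimate are invented for the sake of argument and are not taken from any real search engine. The sketch simply shows where, structurally, the omission sits, and what an emissions-aware variant might look like.

```python
from dataclasses import dataclass

# Illustrative only: field names, weights, and the emissions estimate
# are assumptions made for the argument, not any real ranking formula.

@dataclass
class Result:
    title: str
    relevance: float         # query-document match, 0..1
    freshness: float         # update frequency, 0..1
    seo_strength: float      # search engine optimization signals, 0..1
    est_emissions_kg: float  # hypothetical life-cycle estimate per option

def rank_score(r: Result) -> float:
    """A typical score rewards relevance, freshness, and optimization.
    Note what is absent: est_emissions_kg plays no role at all."""
    return 0.6 * r.relevance + 0.2 * r.freshness + 0.2 * r.seo_strength

def emissions_aware_score(r: Result, weight: float = 0.5) -> float:
    """One conceivable alternative: penalize high estimated emissions.
    The weight and the normalization constant are arbitrary choices."""
    return rank_score(r) - weight * min(r.est_emissions_kg / 500.0, 1.0)

flight = Result("Cheap flights", 0.9, 0.9, 0.9, est_emissions_kg=300.0)
train = Result("Night train", 0.8, 0.5, 0.4, est_emissions_kg=30.0)

print(rank_score(flight) > rank_score(train))                        # True
print(emissions_aware_score(flight) > emissions_aware_score(train))  # False
```

Whether reliable per-result emissions estimates could be produced at scale is a separate and difficult question; the sketch only makes visible that the default formula has no place for them.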

Whose emissions?

But whose emissions are we talking about: those of the person using an algorithmic system, or those of the business making a profit from the system being used? In a video titled “Helping every day be more sustainable with Google,” posted on YouTube in October 2021, senior company officials proudly present a number of features that are being built into their products, including Google Search and Google Maps, to help people make “sustainable choices.” Yet what research has shown to be a more sustainable choice, like buying second-hand, not eating meat, or using land-based public transport, is rarely the default option in general-purpose recommender systems. You might be offered a choice between buying a refrigerator that uses more or less energy, but the same query will not help you learn about other ways to store food safely or encourage you to buy a smaller one. You can see which seat on an airplane supposedly produces more or less CO2, but to find out how to travel by train you would need to enter additional search terms. In other words, to come across alternatives to high-carbon normalcy, any searcher must already have the intention or curiosity to learn about sustainable practices, and any user of social media must already be categorized as an “environmentally conscious consumer.”

De-normalizing high-carbon practices

What is missing from recommender systems is the ability (whether by algorithmic or business design) to point out the assumptions that underlie their decisions, and an awareness of their involvement in creating and maintaining normalcy. In order to reduce emissions, normalcy must change. High-carbon practices must be de-normalized. Given the central role of algorithmic information systems in how people obtain information and go about their lives, these systems have a significant role to play in accomplishing this. Highlighting the environmental harm and the emissions embodied in the normative assumptions that many of society’s most-used algorithms propel is an important part of such a transformation.

Jutta Haider is a professor in Information Studies at the Swedish School of Library and Information Science, University of Borås. Malte Rödl is a researcher in Environmental Communication at the Swedish University of Agricultural Sciences. Together, they explore the impacts and implications of algorithms for societal meaning-making on environmental issues.

This blog post is based on a paper (co-authored by Sofie Joosse) presented at Data & Society’s Social Life of Algorithmic Harms workshop, and at the Conceptions of Library and Information Science (CoLIS11) conference as part of MISTRA Environmental Communication. In a paper in Big Data & Society, we discuss this and other cases in the context of search engine-facilitated ignorance (and knowledge) logics related to the climate crisis.
