AI Safety and The Climate Crisis

Forgetting the Greatest Risk

Alex Moltzau
The Startup

--

Which large or small company working with artificial intelligence, that you know of, actively talks about the climate crisis and its role in it?

After more than fifty days of writing about AI every day, it seems a rare sight to find companies that talk about the climate crisis or have a plan for how they are going to contribute to addressing the issue at hand.

Why do I talk of AI safety in this context? Well, safety is about risk, and if we extrapolate, the largest risk currently facing humanity is the ever-present prospect of a sixth extinction event: the possible dreary demise of humanity.

This is of course a bit of a dampener amongst the Neuralink hype or the race towards artificial general intelligence, at least in my experience. Recently I attended a panel debate about autonomous weapons, where I raised my hand and posed a question about exacerbated carbon emissions due to the aggressive expansion of AI applications and blockchain technologies in defence.

My question was not even deemed worthy of an answer. This could perhaps be due to the manner in which it was asked, yet it still perplexed me as I walked out of the room. How can it be so forgotten? Engineering teams talk of unintended consequences, but the discussion usually stays at the scale of a Roomba vacuum cleaner driven by AI techniques. What if it drives the wrong way? Not to say that this is unimportant, of course; it is simply lower in the order of priority.

Google and the Climate Crisis

Then again, the issue does of course have to be brought down to scale. Working towards a fairer society is not a new pursuit; indeed, people have pursued ‘good’ as opposed to ‘evil’ for some time. “Don’t be evil” was, and is, a slogan of Google, the main deity in the church of technology. They have pushed onwards to make information available and useful.

If we think about AI and the way these techniques are used, Google is one of the absolute largest entities and a frontrunner. I wrote an article previously on why; in short, the reason is their pioneering use of large-scale analysis of text and images. Their research team is also doing some interesting work on semi-supervised learning and federated learning. Google could be said to be one of the companies operating with machine learning techniques on an incredibly large scale, and they are investing heavily in AI research as well as acquiring companies working in this field.
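Federated learning, mentioned above, is worth a brief aside: it trains a shared model without centralising user data, since clients compute updates locally and only model parameters travel back to the server. Below is a minimal sketch of federated averaging with a toy one-parameter model; this is an illustration of the general idea, not Google's actual implementation, and all names and numbers are my own.

```python
# Minimal sketch of federated averaging (FedAvg): each client trains
# on its own private data locally; only model parameters are averaged
# centrally. Illustrative only -- not Google's production system.

def local_update(weights, data, lr=0.1):
    """One pass of local training on a client's private data.
    The 'model' here is a single weight fitting y ~ w * x."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x   # gradient of squared error
        w -= lr * grad
    return w

def federated_average(global_w, client_datasets, rounds=10):
    """Each round: clients update locally, the server averages results."""
    w = global_w
    for _ in range(rounds):
        local_ws = [local_update(w, data) for data in client_datasets]
        w = sum(local_ws) / len(local_ws)  # only parameters leave clients
    return w

# Two clients whose data both follow y = 2x; the averaged model
# converges toward w = 2 without either dataset leaving its client.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (0.5, 1.0)]]
w = federated_average(0.0, clients, rounds=20)
print(round(w, 2))
```

The point of the pattern is that raw data stays on the device; only the averaged weights are shared, which is why the technique is often discussed alongside privacy.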

Google has a page on sustainability and their focus lies heavily on data centres.

For more than a decade, we’ve worked to make Google data centers some of the most efficient in the world by designing, building, and operating each one to maximize efficient use of energy, water, and materials, improving their environmental performance even as demand for our products has risen.

AI Safety for Good

In recent months, research has been released on the energy requirements of training machine learning models, and those requirements are considerable.
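To make the scale of the question concrete, here is a back-of-the-envelope sketch of how such an energy estimate is typically put together: GPU count times power draw times hours, with a data-centre overhead factor (PUE) and a grid carbon intensity. All numbers below are illustrative assumptions of mine, not figures from any specific study.

```python
# Back-of-the-envelope estimate of the energy and carbon footprint of a
# training run. All constants are illustrative assumptions, not
# measurements from any specific paper or data centre.

def training_footprint(num_gpus, watts_per_gpu, hours, pue=1.1,
                       kg_co2_per_kwh=0.475):
    """Estimate energy (kWh) and emissions (kg CO2) for a training run.

    pue: Power Usage Effectiveness, the data-centre overhead multiplier.
    kg_co2_per_kwh: grid carbon intensity, which varies widely by region.
    """
    kwh = num_gpus * watts_per_gpu * hours * pue / 1000.0
    return kwh, kwh * kg_co2_per_kwh

# Example: 8 GPUs at 300 W each, training for 72 hours.
kwh, co2 = training_footprint(num_gpus=8, watts_per_gpu=300, hours=72)
print(f"{kwh:.0f} kWh, ~{co2:.0f} kg CO2")
```

Even this toy calculation shows why the choice of region (carbon intensity) and data-centre efficiency (PUE) matter as much as the raw compute budget.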

More recently, in June 2019, an initiative that speaks to this train of thought, Climatechange.ai, was launched with some famous voices in the field of AI. They released a paper called Tackling Climate Change with Machine Learning on arXiv.

Karen Hao, a writer at MIT Technology Review, additionally published a piece on 10 ways AI could help fight climate change. Another exception was the recent AI for Good conference.

It is not at all as if the AI community is avoiding action or discussion of how to address these issues. Yet it seems to me this concern has to be built into more of the processes, mission statements, and strategies of companies, perhaps even to the extent of making it part of everyday operations.

I need to examine more of the companies and actors in the field of AI, and AI safety in particular, before I say anything definitive. Yet I can allow myself to speak of the impression that I am under.

The impression I currently hold, and which may hopefully change, is that companies working within the field of AI do not put the climate crisis front and centre. The concerns of these companies are projected into a future that seems brighter with the promise of the applications being built.

I would argue the greatest risk in AI safety is companies making products while ignoring, or partially forgetting, the climate crisis unfolding around them.

Thank you for reading.

This is day 55 of #500daysofAI.

I write one new article every day on the topic of artificial intelligence.


Policy Officer at the European AI Office in the European Commission. This is a personal Blog and not the views of the European Commission.