Barnas Monteith
Apr 2, 2021 · 10 min read

Artificial Intelligence & Climate Change: What you need to know!

AI is officially the best thing since sliced bread. We all know that. The past few years have seen industry after industry revolutionized by some new or improved application of machine learning, neural networks and GANs: pattern recognition, prediction and data generation across visual, numerical, audio and other sensor data. It’s unavoidable, even in the mainstream news, and new discoveries seem to pop up almost every day. AI has the potential to solve some of humanity’s biggest problems, particularly medical ones; just look at AlphaFold, the recent creation of DeepMind, the lab behind AlphaGo! So, that’s a great thing, right?

Well, not if you like having a planet to live on.

Maybe I’m exaggerating a bit here. But perhaps Elon Musk really is on to something with his plans to make humanity a multiplanetary species (or maybe to abandon Earth for Mars?). Because at the rate we’re going, we may need another planet soon.

The problem with AI today is that, despite massive improvements in algorithms, models and the efficiency of modern AI systems, making decent AI still requires a lot of training on data. And training requires information, time, lots of computer chips (mostly GPUs and TPUs these days), and electricity. And chips + electricity = lots of heat.

We are heading into a new, connected smart world where AI and AIoT are omnipresent, embedded in our phones, our cars, our voice assistants like Cortana and Alexa, and even our washing machines. It has been estimated that there are well over a quarter of a trillion microcontrollers (the chips that run IoT things) in the world today.

On the one hand, most forms of AIoT at the end of the network don’t actually do any significant training; they merely process new data using an existing model, what we in the AI world would call an inference graph. An inference graph is the usable output of AI training: the transferable data file that contains the distilled information of a trained AI model. This is what most “AI” systems use to make predictions, classify inputs, process natural language, and so on. Systems that merely use an inference graph to do functional AI, especially the sort of AI you might find in a typical IoT device, typically don’t have to keep training their models after they’re deployed. However, there is evidence that the world of AI is starting to converge. Some end-user systems are now gaining the ability to “learn” new things locally, using data obtained at the end of a network, or while completely disconnected from one.
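To make the training-versus-inference distinction concrete, here is a minimal sketch (assuming TensorFlow, and a hypothetical already-trained model file called classifier.h5) of what a deployed device actually does: it loads someone else’s finished training and runs predictions, with no training loop of its own.

```python
import numpy as np
import tensorflow as tf

# Load an already-trained model (the "inference graph") from disk.
# "classifier.h5" is a hypothetical file produced by someone else's training run.
model = tf.keras.models.load_model("classifier.h5")

# One fake 224x224 RGB frame, standing in for a camera or sensor reading.
frame = np.random.rand(1, 224, 224, 3).astype("float32")

# Inference only: a single forward pass through frozen weights. No gradients,
# no training, and therefore only a tiny fraction of the energy cost.
probabilities = model.predict(frame)
print("Predicted class:", int(np.argmax(probabilities)))
```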

Not to mention, the number of users buying GPUs (graphics processing units, especially from Nvidia and AMD) and using CUDA (which lets GPUs accelerate AI workloads dramatically) has risen sharply in recent years. At the end of last year, Nvidia celebrated 2 million registered developers (CUDA, cuDNN and other Nvidia software are at the heart of many locally installed AI platforms).

Data From Nvidia Developers Conference

And a lot of this is being used with Google’s TensorFlow, which was released in 2015. TensorFlow’s popularity over time is quite clear from the following graph, showing the number of ratings (stars) on GitHub for TensorFlow-related projects, which are in turn downloaded and used by hundreds, perhaps thousands, of people. And the popularity of TensorFlow continues to rise:

What all this means is that there are millions of users around the world, using their home computers (perhaps even several) as model trainers.

This is in addition to all the people using some form of centralized processing network, such as Google Colab, Microsoft Azure, or Amazon Web Services, to name a few. These are essentially giant server farms with processing power for rent, used to run model training “in the cloud.” A single “modest” data center with around 5,000 GPUs could easily burn through over 50 megawatt-hours of electricity in just. One. Single. Day.
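The arithmetic behind a figure like that is simple back-of-the-envelope math. Here is a sketch assuming roughly 400 watts per GPU (a typical data-center accelerator draw) and ignoring cooling and networking overhead, which would push the number even higher:

```python
# Back-of-the-envelope energy estimate for a "modest" 5,000-GPU data center.
GPUS = 5_000
WATTS_PER_GPU = 400      # assumed average draw per accelerator under load
HOURS_PER_DAY = 24

kwh_per_day = GPUS * WATTS_PER_GPU * HOURS_PER_DAY / 1_000   # watt-hours -> kWh
mwh_per_day = kwh_per_day / 1_000                            # kWh -> MWh

print(f"{mwh_per_day:.0f} MWh per day")  # ~48 MWh, before cooling overhead
```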

The bottom line is that the more training occurs, the more heat is being generated, whether it’s at home or in the cloud.

Regardless of where the heat is coming from, the fact that this highly computation-intensive industry is growing so rapidly is a matter of great concern.

An informal study a few years back looked at the correlation between GPUs and room temperature, and the results were significant: it’s clearly possible for a few gaming rigs, AI systems or cryptocurrency miners to generate enough heat to warm a room for the winter.

Here’s a big statistic. It has been calculated that training a single AI model can produce as much carbon output as five cars over their entire lifetimes.

A related hot topic in computing these days is blockchain/cryptocurrency, and it’s also eating into the GPU market in a big way. In fact, the crypto market has become such a concern to GPU companies that they’ve even manufactured new GPU lines that throttle down performance specifically for certain crypto-mining algorithms (though most miners have found a way around this). And there’s a good reason for it: Bitcoin has grown so rapidly that, according to Deutsche Bank, it is now the third largest circulating currency in the world.

By some estimates, Bitcoin alone could raise global temperatures by several degrees in the coming decades.

There is even a growing sentiment that artificial intelligence solutions will soon run on top of the blockchain, much like the way NFTs (non-fungible tokens), which are mostly just digital image and music files made by amateurs, run on top of the Ethereum network today. As of 3/18/21, there were more than 20 million NFTs on the OpenSea platform, and that is just one of many. With each transaction on the network potentially consuming 45-60 kWh (kilowatt-hours) of electricity, the corresponding heat output adds up fast. NFTs, it turns out, may consume far more energy than a typical financial transaction, averaging 76 kWh in a recent study (what an average US house uses over several days). And there are now over 1.2 million transactions per day on the Ethereum network. Now, imagine the heat output when you add a layer of AI on top of that!
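Using the article’s own figures, a quick back-of-the-envelope calculation shows how fast that adds up (this is a rough sketch that simply multiplies the quoted per-transaction range by the daily transaction count):

```python
# Rough daily energy estimate for the Ethereum network, using the numbers above.
TRANSACTIONS_PER_DAY = 1_200_000
KWH_PER_TX_LOW, KWH_PER_TX_HIGH = 45, 60   # quoted per-transaction range

low_gwh = TRANSACTIONS_PER_DAY * KWH_PER_TX_LOW / 1_000_000    # kWh -> GWh
high_gwh = TRANSACTIONS_PER_DAY * KWH_PER_TX_HIGH / 1_000_000

print(f"Roughly {low_gwh:.0f} to {high_gwh:.0f} GWh per day")  # ~54-72 GWh/day
```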

With all that modern computing and AI have to offer in terms of potentially transforming the economy, industry, energy, agriculture and medicine, isn’t this worth it? If you ask me, is solving cancer, or creating faster, better vaccines, or improving agricultural yields, or solving the other major problems that face humanity worth it? Absolutely. But is there possibly a smarter way of doing what we’re all doing in AI? Probably.

Here are just a few things that can be done…

Fine-tuning

This is an old trick in AI model development. Well, it’s not a trick so much as a sensible common practice. Rather than develop an entire model from scratch, many AI programmers simply modify an existing, published model with a small amount of additional training, to add something specific that wasn’t already in that model.

Common AI model accuracy vs time graph, by author

For instance, the models known as ResNet and MobileNet are both commonly used by AI programmers. You might use ResNet if you wanted a good balance between speed and accuracy; you might use MobileNet if you wanted faster object recognition on a mobile device but were willing to sacrifice some accuracy. Once you have selected your favored ‘net, you add your own data to it, taking advantage of the existing data that was already used to train it. ResNet-50, for example, is trained on over a million images from ImageNet and is 50 layers deep. I have used ResNet-50 with science fair students to train models that recognize trilobite and ammonite fossils, which were not already in the ResNet model (video here of one great example).
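As a rough sketch of what that looks like in practice (assuming TensorFlow/Keras, and a hypothetical folder of labeled fossil photos with one subfolder per class), you freeze the pretrained ResNet-50 backbone and train only a small new classification head on top of it:

```python
import tensorflow as tf

# Hypothetical dataset layout: fossils/trilobite/*.jpg, fossils/ammonite/*.jpg, ...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "fossils/", image_size=(224, 224), batch_size=32)
num_classes = len(train_ds.class_names)

# Pretrained ResNet-50 backbone with ImageNet weights; drop its classification head.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze it: reuse the existing training instead of repeating it

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Only the new head's weights are updated, so this needs far less compute and energy
# than training a 50-layer network from scratch.
model.fit(train_ds, epochs=5)
```

Unfreezing a few of the top layers afterwards for a short, low-learning-rate pass is a common optional second step, but the heavy lifting has already been paid for by whoever trained the base model.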

Given how many base models are already out there, AI should be fine-tuned on top of an existing model whenever possible, not only to make training more energy efficient, but for performance gains as well.

TinyML

A relatively new realization in the AI world is that a lot of machine learning is built on older techniques that are inefficient and wasteful of computational resources. In the past few years, researchers have begun looking at new forms of AI that deliver the same or better performance as before, but with a much smaller energy profile and thus less heat output. The idea of running TinyML at the end of the network is quite appealing as well. Imagine if those hundreds of billions of microcontrollers around the world could process and train on data, even in a small way, and contribute their training back to a centralized, larger model. Or better yet, “fine tune” a local model that is best adapted to its own environment.

Distributing some of the less computationally intensive, yet power-hungry training tasks to more energy-efficient systems can certainly be an effective way to reduce output heat.
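One concrete way to push a trained model toward that kind of end-of-network hardware is post-training quantization. The sketch below assumes TensorFlow Lite and reuses the `model` and `train_ds` names from the fine-tuning sketch above; it converts weights and activations to 8-bit integers so the result can run on much smaller, lower-power chips (a true microcontroller target would also start from a far smaller architecture than ResNet-50):

```python
import tensorflow as tf

def representative_data():
    # A handful of real samples (one at a time) to calibrate the int8 value ranges.
    for images, _ in train_ds.unbatch().batch(1).take(100):
        yield [tf.cast(images, tf.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]           # enable quantization
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8                       # integer-only I/O
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)   # a much smaller, integer-only model for edge devices
```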

Creating more efficient architectures

The ImageNet competition ran for many years and is now hosted by Kaggle.com. During its peak popularity just a few years back, every big AI company and computer vision expert in the world entered, to see who could take a standard dataset of images and accurately identify/classify them in the shortest amount of time.

It quickly became apparent that these efforts would soon allow AI to surpass human performance, and it did so by 2015:

ImageNet improvement graph, by author

According to the Kaggle website, some entrants in the latest competition were able to achieve 100% accuracy.

So, if it was possible to achieve this level of success in under a decade, what happens next? Now the name of the game is speed and efficiency.

We’re seeing new models and architectures pop up not just every year or every few months, but every few days (based on anecdotal evidence from GitHub and Kaggle).

Take, for example, the convolutional neural network (CNN) family of models known as EfficientNet. It first appeared from Google’s AI lab in 2019 and was embedded within TensorFlow 2; you could even find it in the rubber ducky tutorial for TF2’s ODAPI (Object Detection API) in 2020. EfficientNet was able to demonstrate a 10x improvement in both speed and efficiency simply through smarter “scaling” (which you can read more about here, and which is sketched a little further below). This, combined with eager few-shot training, has produced some incredible results within just the past year. I myself was able to fine-tune a rubber-ducky detector using only five (5!) input images for training:

Here’s an animated GIF of the blog author (me, in my sloppy pandemic isolation clothes) testing out a rubber ducky detector in the bathroom, using TF2’s ODAPI with eager few-shot training. The system was trained with only 5 images and tested on totally different/new rubber duckies. Please wait a moment for it to load; it’s big.

Just a couple of short years ago, I would have needed dozens, if not hundreds, of images to do this same exact thing. The world of AI definitely needs way more efficiency improvements of this kind.
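EfficientNet’s “scaling” trick mentioned above is easy to illustrate. This sketch uses the compound-scaling coefficients reported in the EfficientNet paper (alpha = 1.2 for depth, beta = 1.1 for width, gamma = 1.15 for input resolution) to show how a single exponent, phi, grows all three dimensions of the network in a balanced way instead of blowing up just one of them:

```python
# Compound scaling, as described in the EfficientNet paper: depth, width and
# input resolution all grow from a single knob, phi. The paper constrains
# alpha * beta**2 * gamma**2 to be roughly 2, so each +1 in phi roughly doubles FLOPs.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi):
    depth = ALPHA ** phi        # multiplier on the number of layers
    width = BETA ** phi         # multiplier on channels per layer
    resolution = GAMMA ** phi   # multiplier on input image size
    return depth, width, resolution

for phi in range(5):            # B0 (phi=0) up through larger variants
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```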

Strawberry farms, chicken coops and tilapia tanks?: Recovering waste AI heat in a practical way

I recently read an article about a family in Canada, using cryptocurrency miners to heat raised beds of strawberries. They’re in an area where electricity is cheap, and they’re putting all that waste GPU heat to good use. What a clever idea!

I’m the sort of oddball hippie techie person who also likes the idea of small-scale farming and tiny homes, complete with chickens, tilapia and all sorts of produce. If you can get to net-zero (or close to it) simply by using the waste heat from your AI and crypto rigs to provide warmth in the regions, and during the seasons, where it is most needed, then that is a step in the right direction toward addressing the current climate crisis. Or at the very least, not making it worse!

Here’s more interactive data on global temperature rise, from NASA.

At a time when global warming is accelerating, funding for AI is increasing around the world, and crypto is rapidly becoming a quasi-viable global currency, this conversation certainly needs to happen now.

I don’t know about you, but I find the idea of GPU waste heat recovery to be the coolest way of alleviating AI’s inherent temperature problem; I suspect we’ll see some of the most creative solutions here in the future.

Author, building a chicken coop, with a few technology enhancements!
Energy-efficient chicken coop, built entirely with recycled materials, like old pallets. See a video I made for children about dinosaurs, with the finished coop, here.

Is wide-scale adoption of this approach realistic, or scalable? I just don’t know. At the moment it might be doable in the suburbs or in places with cheap energy, but it is admittedly less likely in urban areas (especially in the summertime). But imagine a future where more and more people are training AI at home for science fairs (check out my blog on AI in science fairs here) or simply for local AIoT/TinyML training. The world will surely need more semiconductor investment, more computing resources and, of course, more energy. And we will need to find better ways of harnessing the natural power of the sun, the wind and ocean waves to make that happen.

In the meantime, we will need to do our best to improve AI, reduce its net carbon footprint, and achieve more advancements responsibly, while keeping an eye on the planet’s temperature.


Barnas Monteith is a science fair advocate and part-time paleontologist / AI ninja.