Artificial GENERAL Intelligence: Dousing the Flames

Sunil Manghani
Published in Electronic Life
Apr 22, 2023 · 8 min read


There are signs of both excitement and anxiety over the prospect of artificial general intelligence, described as AI that can perform any intellectual task a human can. But equally there are ethical and environmental concerns that weigh heavily, not least our over-consumption of water. Where there are sparks of promise there are also deep waters…

Artificial general intelligence (AGI) refers to an artificial intelligence system that can perform any intellectual task a human can. While AGI remains a theoretical concept, recent advances in machine learning, deep learning, and natural language processing have sparked intense debate about the possibility of AGI becoming a reality in the near future. The word ‘sparked’ here is pertinent. A recent Microsoft research paper, ‘Sparks of Artificial General Intelligence: Early experiments with GPT-4’, controversially sets out a strong case for an ‘early (yet still incomplete) version of an artificial general intelligence (AGI) system’. The paper provides numerous examples of tests and comparisons to assess the advances made by the new GPT-4 Large Language Model (LLM).

As the authors of the paper explain, the latest model ‘developed by OpenAI, GPT-4 … was trained using an unprecedented scale of compute and data’. The article (which runs to more than 100 pages) provides a ‘report’ on an investigation of an early version of GPT-4 (so prior to its public release). The authors argue ‘GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google’s PaLM for example) that exhibit more general intelligence than previous AI models’, and claim to demonstrate that, ‘beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting’. The case is made that ‘GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT’. Hence the claim that GPT-4 can ‘reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence’. What researchers are finding, for example, are ‘emergent properties’, whereby an AI model is able to learn something it was not trained to learn. One of Google’s models is a case in point: it demonstrated the ability to translate into Bengali, despite not being trained to do so.

The excitement surrounding the prospects of AGI is accompanied by anxieties about its potential impact on society. Elon Musk, a notable proponent and developer of AI, admits it ‘scares the hell’ out of him. ‘It is capable,’ he suggests, ‘of vastly more than almost anyone knows’. He was recently one of over 1,000 artificial intelligence experts, researchers and backers to sign an open letter calling for an immediate pause on the creation of LLMs. Interestingly, the letter has been found to include false signatures, but this only further reveals a period of excitement, high stakes, and angst. TikTok clips abound with ‘profound’ conversations with key figures in AI. Elon Musk speaks of his early involvement in OpenAI (with the aim of establishing a counterweight to Google). The CEO of OpenAI, Sam Altman, discusses with podcaster and AI researcher Lex Fridman how he will be one of the few people in the world to witness the first signs of artificial general intelligence: ‘…there will be a room with a few folks who are like, holy shit…’, begins Fridman, to which Altman adds, ‘that happens more often than you would think now’. The MIT physicist and president of the Future of Life Institute, Max Tegmark, when asked about the timeline for artificial general intelligence, replies: ‘Honestly, for the past decade I’ve deliberately given very long timelines because I didn’t want to fuel some kind of stupid Moloch race. But I think that cat has really left the bag now. I think we might be very very close. I don’t think the Microsoft paper is totally off’.

Regardless of where one stands on these issues, the fact that such debates are taking place at all indicates the research community is beginning to believe that there has been genuine progress. But, and it is a big ‘but’: for all the bravado, scare stories and wild imaginings, there is a need to pour a healthy cup of cold water upon any first sparks of general intelligence.

Firstly, it is imperative to note the serious ethical implications at stake. A recent Time magazine article reveals how OpenAI employed Kenyan workers on less than $2 an hour to make ChatGPT ‘less toxic’ (in the process, the workers had to be exposed to highly distressing materials). The image accompanying the article was generated by the magazine, using OpenAI’s image-generation software, DALL-E 2. As the article explains, the prompt used was: ‘A seemingly endless view of African workers at desks in front of computer screens in a printmaking style.’ As the authors note, Time ‘does not typically use AI-generated art to illustrate its stories, but chose to in this instance in order to draw attention to the power of OpenAI’s technology and shed light on the labor that makes it possible.’

A more extensive account of the social and environmental impact of artificial intelligence is provided in Kate Crawford’s excellent book Atlas of AI. Crawford argues clearly that AI systems are not neutral, objective tools but are shaped by their creators’ values, assumptions, and biases. She examines how AI systems are transforming various aspects of our lives, including labor, education, healthcare, criminal justice, and politics, and highlights the ethical and social implications of these changes. The book also addresses the environmental impact of AI, including its energy consumption, carbon emissions, and the depletion of natural resources required to build and maintain AI systems. Through case studies and examples from around the world, she advocates for a more transparent and accountable approach to the development and deployment of AI, one that recognizes its potential risks and promotes its benefits for all.

Recently, the mainstream press has picked up on one specific concern: water consumption. A research paper (published April 2023), ‘Making AI Less “Thirsty”’, offers a cogent account of the ‘secret water footprint’ of AI models. The paper explains, for example, how training GPT-3 in Microsoft’s U.S. data centres can consume 700,000 litres of clean freshwater, enough for building 370 BMW cars or 320 Tesla electric vehicles. This amount would triple, the authors add, if training were done in Microsoft’s Asian data centres.
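As a quick sanity check, the per-vehicle water use implied by that comparison can be worked out directly. The following back-of-envelope sketch uses only the figures quoted above; the per-vehicle numbers are simply what the paper’s comparison implies, not independent data:

```python
# Back-of-envelope check of the figures quoted above. The training
# estimate is from 'Making AI Less "Thirsty"'; the per-vehicle numbers
# are simply what the paper's comparison implies.

TRAINING_WATER_LITRES = 700_000  # est. water to train GPT-3 (U.S. data centres)

for vehicle, count in [("BMW car", 370), ("Tesla EV", 320)]:
    print(f"~{TRAINING_WATER_LITRES / count:,.0f} litres per {vehicle}")

# And the paper's note that the figure triples in Asian data centres:
print(f"Asian data centres: ~{3 * TRAINING_WATER_LITRES:,} litres")
```

That works out at roughly 1,900 to 2,200 litres per vehicle, which is consistent with the scale of the paper’s comparison.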

Freshwater scarcity is a pressing global issue, and those involved in building AI models should take social responsibility and lead by example in addressing their own water footprint. As the paper’s authors rightly note:

Severe water scarcity has already been affecting 4 billion people, or approximately two-thirds of the global population, for at least one month each year. Without integrated and inclusive approaches to addressing the global water challenge, nearly half of the world’s population will endure severe water stress by 2030, and roughly one in every four children worldwide will be living in areas subject to extremely high water stress by 2040. — Making AI Less “Thirsty”

The paper proposes a methodology for estimating the water footprint of AI models and discusses the unique spatial-temporal diversities of their runtime water efficiency. It is a complex model that tries to account for ‘Water Usage Effectiveness’ both onsite (at data centres) and offsite (which relates to further complexities of energy consumption). The authors highlight, for example, the necessity of addressing both water and carbon footprints to enable truly sustainable AI. Of course, we should also factor in the benefits from AI, not least the use of AI to help combat climate change (as used in complex weather and climate modelling, and in understanding complex systems more generally).
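To make the methodology a little more concrete, its general shape can be sketched in code. The following is a minimal illustration only: the on-site/off-site split follows the paper’s framing, but the function and every numeric value are placeholder assumptions, not the paper’s actual model or data.

```python
# A minimal sketch of the general shape of a water-footprint estimate.
# The on-site/off-site split follows the paper's framing; the function
# and all numeric values are illustrative assumptions, not the paper's
# actual model or data. The paper stresses that WUE varies by location
# and time of day, which a real estimate would have to integrate over.

def water_footprint_litres(it_energy_kwh: float,
                           wue_onsite: float,  # on-site Water Usage Effectiveness (L/kWh)
                           pue: float,         # Power Usage Effectiveness (total / IT energy)
                           ewif: float) -> float:  # water intensity of electricity (L/kWh)
    onsite = it_energy_kwh * wue_onsite    # water evaporated for cooling at the data centre
    offsite = it_energy_kwh * pue * ewif   # water embedded in generating the electricity
    return onsite + offsite

# Illustrative values only (assumed, not measured):
print(water_footprint_litres(it_energy_kwh=1_300_000, wue_onsite=0.55,
                             pue=1.2, ewif=3.1))
```

Even this toy version makes the paper’s point visible: the water bill depends as much on where and when the electricity is generated (the off-site term) as on the cooling towers at the data centre itself.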

There are undoubtedly serious considerations here, but they do tend to get reported in hyperbolic ways. The opening line of one news article misconstrues the paper’s use of statistics. It reads: ‘A new study titled “Making AI Less Thirsty” reveals that a single conversation you have with ChatGPT amounts to spilling a half-litre bottle of water on the ground’. One potential problem is the conflation of training an AI model with the individual use of a trained model. An individual query to a model such as ChatGPT does not use anything like the amount of energy required to train the model (which the article suggests consumes as much water as a nuclear reactor). While this may be true, we have to understand it is not a constant, ongoing form of consumption. And the benefits of a trained model, which can then be deployed ad infinitum, are very different from the power consumption, and dangers, of a nuclear reactor!
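To see why the conflation matters, it helps to amortise the one-off training water across subsequent use. The sketch below does exactly that; the training figure is the paper’s U.S. estimate for GPT-3, while the lifetime query volume and per-query inference water are invented assumptions, for illustration only:

```python
# Amortising one-off training water across queries. The training figure
# is the paper's U.S. estimate for GPT-3; the query volume and per-query
# inference water are invented assumptions for illustration only.

TRAINING_WATER_LITRES = 700_000    # one-off cost of training
QUERIES_SERVED = 1_000_000_000     # assumed lifetime query volume
INFERENCE_LITRES_PER_QUERY = 0.01  # assumed per-query inference water

amortised = TRAINING_WATER_LITRES / QUERIES_SERVED
print(f"Training water per query:  {amortised * 1000:.2f} ml")
print(f"Inference water per query: {INFERENCE_LITRES_PER_QUERY * 1000:.2f} ml")
```

On these (assumed) numbers, the one-off training cost all but disappears at the level of an individual query, which is precisely why per-conversation headlines need careful framing.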

The specific reference to a single use of ChatGPT amounting to spilling a half-litre of water is actually a statistic for our use of data more generally (which can include anything from accessing LOL Cat videos to connecting with AI data models):

Warehouse-scale data centers — physical “homes” where the majority of AI models, especially large ones like GPT-3 and GPT-4, are physically trained and deployed — are known to be energy-intensive, collectively accounting for 2% of the global electricity usage and large carbon footprint. Nonetheless, what is much less known is that data centers are also extremely “thirsty” and consume an enormous amount of clean freshwater. For example, even excluding water consumption in leased third-party colocation facilities, Google’s self-owned data centers in the U.S. alone consumed 12.7 billion liters of freshwater for on-site cooling in 2021, roughly 90% of which was potable water. […] The combined water footprint of U.S. data centers altogether in 2014 was estimated at 626 billion liters. — Making AI Less “Thirsty”

There is a bigger question about our penchant for digital data tout court. We might eventually need to come to more general agreements over the differing value of different kinds of data and uses; we might have to if the environment in which we make and consume that data becomes increasingly sparse and hostile. Currently, there are few signs that anyone wants to create less personal data, and little real public awareness of the problem. If anything, a scare story about the overuse of water to make AI function is arguably only a decoy!

In truth, we are likely going to need to use the intelligence of AI to work out better patterns of energy consumption, better designs for energy production and storage, and better means to ameliorate and prevent climate change. Right now, if you ask ChatGPT whether or not it consumes water in order to function, it replies: ‘As an AI language model, I am a digital entity and do not consume water or any other physical resources to function. I exist solely in computer servers and run on electricity.’ Clearly this is not the whole story, and, for now, it only proves GPT’s lack of general intelligence. One trajectory is that we will need to invest in the use of water for AI in order to reach a better state of affairs. Another is simply to switch off the data centres and go back to just drinking water.


Sunil Manghani

Professor of Theory, Practice & Critique at University of Southampton, Fellow of Alan Turing Institute for AI, and managing editor of Theory, Culture & Society.