Sustainable AI
The solution to generative AI’s high electricity and water costs
Not long ago, the idea of massively increasing global energy usage was frowned upon. Governments and industry were lining up to pledge net-zero CO2 emissions.
Enter generative AI, a technology destined to increase energy consumption worldwide through its need for data centers. The massive impact of this data center expansion hits sustainability goals hard: power usage is rising, and drinking water is being consumed to cool electronics, depleting the supplies available to people and returning heated water to the environment.
What is Unsustainable?
It is generally considered unsustainable to rapidly increase our usage of power and water. Increasing power generation, for example with more coal and natural gas plants, produces CO2. Increasing the water drawn to cool servers removes water from local inhabitants unless supply capacity is expanded first.
The current focus is on ramping up energy production to power the data centers behind the Magnificent Seven stocks:
“the group consists of Alphabet, Amazon, Apple, Meta Platforms, Microsoft, Nvidia, and Tesla.”
Let’s go into detail about what is being proposed to deploy more generative AI.
AI Demands New Fossil Fuel Powered Datacenters
There are many alarming articles (emphasis below is mine), such as this one in Nature (click):
“Parmelee has mapped 159 proposed data centres or expansions of existing ones in Virginia, where they account for more than one-quarter of the state’s electricity use”
and the World Economic Forum (click):
“Overall, the computational power needed for sustaining AI’s growth is doubling roughly every 100 days.”
and the International Energy Agency (IEA) (click):
“In the United States, power consumption by data centres is on course to account for almost half of the growth in electricity demand between now and 2030. Driven by AI use, the US economy is set to consume more electricity in 2030 for processing data than for manufacturing all energy-intensive goods combined, including aluminium, steel, cement and chemicals.”
and MIT (click):
“Rapid development and deployment of powerful generative AI models comes with environmental consequences, including increased electricity demand and water consumption.”
and
“The demand for new data centers cannot be met in a sustainable way. The pace at which companies are building new data centers means the bulk of the electricity to power them must come from fossil fuel-based power plants.”
Eric Schmidt, former CEO of Google
Perhaps the best summary of the scale of the power problems being caused by generative AI and its need for data centers comes from the ex-Google CEO, Dr. Eric Schmidt, in this YouTube video (click).
A screenshot from the YouTube video mentioned above, featuring Dr. Eric Schmidt.
“… data centers will require an additional 29 gigawatts of power by 2027 and 67 more gigawatts by 2030.”
Further, I’ll add that moving quickly enough may require the reintroduction of coal-fired power stations. As Dr. Schmidt puts it:
“In terms of energy planning, the current model is mostly natural gas, nuclear plants plus renewables… We have a bunch of regulatory issues around fixing the energy grid. It takes on average 18 years to get the power transmissions to put these things in place.”
But the energy grid wouldn’t need ‘fixing’ without the demand being created by today’s statistical AI — AI that is Unsustainable. By design.
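A quick back-of-the-envelope check puts the quoted figures in perspective. The sketch below is my own arithmetic, assuming growth compounds smoothly and that a large power plant produces roughly 1 GW; neither assumption comes from the cited sources.

```python
# Back-of-the-envelope check of the growth figures quoted above.
# Assumptions (mine, not from the cited sources): growth compounds
# smoothly, and a "large" power plant produces roughly 1 GW.

DOUBLING_PERIOD_DAYS = 100  # WEF: compute doubling roughly every 100 days

# If compute doubles every 100 days, the annual multiplier is:
annual_factor = 2 ** (365 / DOUBLING_PERIOD_DAYS)
print(f"Annual compute growth: ~{annual_factor:.1f}x")  # ~12.6x per year

# Schmidt: +29 GW by 2027, then +67 GW more by 2030.
extra_gw_2030 = 29 + 67
plants_needed = extra_gw_2030 / 1.0  # assuming ~1 GW per large plant
print(f"Extra capacity by 2030: {extra_gw_2030} GW "
      f"(~{plants_needed:.0f} large power plants)")
```

Roughly a twelvefold compute increase per year, and close to a hundred large plants’ worth of new capacity in about five years, which is why grid timelines of 18 years are alarming.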
Why Are Power Needs Increasing?
Generative AI is Unsustainable
Generative AI is wastefully inefficient. It is trained and run in real time by deploying GPUs (Graphics Processing Units): specialized electronics originally designed to accelerate the creation and rendering of images, video, and animations, and now used for the large-scale computations required by machine-learning and deep-learning algorithms.
While human linguistic models are built around small sequences of words (phrases), today’s popular AI uses brute-force computation to determine phrases statistically: it processes all the statistics needed to find the phrases rather than finding the phrases first. These computations are immense and must be repeated for every text/token entered, since the phrases can only be determined after the statistical analysis completes.
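The scale of that computation can be sketched with a common rule of thumb: a forward pass through a transformer costs roughly 2 FLOPs per model parameter per token, so every generated token touches every parameter. The model size and reply length below are hypothetical illustrations, not figures from any vendor.

```python
# A rough sketch of why generative inference is so compute-hungry.
# Rule of thumb (an approximation, not a vendor figure): a forward
# pass costs about 2 FLOPs per model parameter per generated token,
# and every token requires a full pass over all parameters.

def inference_flops(n_params: float, n_tokens: int) -> float:
    """Approximate FLOPs to generate n_tokens with an n_params model."""
    return 2.0 * n_params * n_tokens

# Hypothetical 70-billion-parameter model generating a 500-token reply:
flops = inference_flops(70e9, 500)
print(f"~{flops:.1e} FLOPs for one reply")  # ~7.0e+13
```

Tens of trillions of operations for a single reply, whether or not the phrase structure of the input could have been resolved locally first.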
Cognitive AI, such as from Patom theory, is Sustainable
By emulating the brain, we are working towards low-energy models, since the brain itself is low-energy. Linguistics, as modeled in Role and Reference Grammar (RRG), maps word sequences into Reference Phrases (RPs) and sentences into the Layered Structure of the Clause (LSC). In implementations with Patom theory, the RPs are recognized and validated first, followed by the LSCs that use them.
This is fundamentally more efficient than generative AI because it resolves words into phrases without needing statistics from outside the phrase: the phrase boundary limits the scope of validation. This is syntax working with semantics in context.
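To illustrate the phrase-first idea, here is a deliberately tiny sketch of my own. It is not the actual Patom theory or RRG implementation, and the phrase tables (`KNOWN_RPS`, `KNOWN_PREDICATES`) are hypothetical; the point is only that each match is validated entirely within its phrase boundary, with no global statistics.

```python
# A toy illustration (my sketch, not the actual Patom theory or RRG
# implementation) of phrase-first parsing: reference phrases (RPs)
# are matched and validated inside their own boundaries, then the
# clause is assembled from the validated phrases.

KNOWN_RPS = {
    ("the", "dog"): "RP(dog)",
    ("the", "ball"): "RP(ball)",
}
KNOWN_PREDICATES = {"chased": "PRED(chase)"}

def parse(tokens):
    """Greedily match RPs, then predicates; each match is purely local."""
    clause, i = [], 0
    while i < len(tokens):
        pair = tuple(tokens[i:i + 2])
        if pair in KNOWN_RPS:  # validation stops at the RP boundary
            clause.append(KNOWN_RPS[pair])
            i += 2
        elif tokens[i] in KNOWN_PREDICATES:
            clause.append(KNOWN_PREDICATES[tokens[i]])
            i += 1
        else:
            clause.append(f"UNKNOWN({tokens[i]})")
            i += 1
    return clause

print(parse("the dog chased the ball".split()))
# ['RP(dog)', 'PRED(chase)', 'RP(ball)']
```

The work done per phrase is constant and local, in contrast to a statistical model that recomputes over the whole context for every token.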
Good Old Fashioned AI (GOFAI) and Chomsky’s Parsing
Before RRG and Patom theory, the world’s AI models were rules-based. Rules tended to be built around parts of speech: syntactic concepts such as noun, verb, adjective, adverb, and preposition in English.
As Google explains in its SyntaxNet announcement (click):
“Given a sentence as input, it tags each word with a part-of-speech (POS) tag that describes the word’s syntactic function, and it determines the syntactic relationships between words in the sentence.”
These concepts are fundamentally ambiguous in parsing; Google cites the scale of the ambiguity below:
“One of the main problems that makes parsing so challenging is that human languages show remarkable levels of ambiguity. It is not uncommon for moderate length sentences — say 20 or 30 words in length — to have hundreds, thousands, or even tens of thousands of possible syntactic structures.”
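One way to see where such numbers come from: before grammar constraints prune anything, the number of possible binary bracketings of an n-word sentence is the Catalan number C(n-1), which grows exponentially. This is my illustration of the unconstrained upper bound, not Google’s calculation; real grammars cut the count down to the hundreds or thousands the quote describes.

```python
# Catalan numbers give the count of possible binary bracketings of a
# sentence, an unconstrained upper bound on candidate parse trees
# before any grammar rules prune them (my illustration).
from math import comb

def catalan(n: int) -> int:
    """Catalan number C(n): binary bracketings of a sequence of n+1 items."""
    return comb(2 * n, n) // (n + 1)

# Possible binary tree shapes over sentences of various lengths:
for words in (5, 10, 20, 30):
    print(f"{words:2d} words: {catalan(words - 1):,} bracketings")
```

Even at 10 words the raw count is in the thousands; at 20 to 30 words it reaches billions and beyond, which is why rules keyed to ambiguous parts of speech explode combinatorially.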
Decades ago, humans could not make GOFAI work because its rules expand exponentially. But parts of speech, the source of the combinatorial explosion, can be replaced.
But what would you replace parts of speech with to align with a human brain?
Human brains can certainly tell the difference between things (call them referents) and states, properties and activities (call them predicates). These are semantic or meaning-based concepts because our senses can readily categorize them.
By swapping out the ambiguity of syntactic models (parts of speech) for semantic elements (referents and predicates), the problems of GOFAI’s rules-based models vanish, and parsing can run in real time.
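The contrast can be shown with a toy example (mine, and entirely hypothetical): a word like “run” is ambiguous between noun and verb as a part of speech, but classified semantically in context it resolves cleanly to a referent (a thing) or a predicate (an activity).

```python
# A toy contrast (hypothetical, my illustration) between POS tagging
# and semantic classification. With parts of speech alone, "run" is
# ambiguous (noun or verb); classified by meaning in context, it
# resolves to a referent (a thing) or a predicate (an activity).

POS_TAGS = {"run": {"NOUN", "VERB"}}  # syntactic: ambiguous in isolation

def classify(word: str, context: str) -> str:
    """Resolve a word semantically from its immediate context."""
    if context == "after_determiner":  # "a run" -> a thing
        return "referent"
    if context == "after_subject":     # "dogs run" -> an activity
        return "predicate"
    return "ambiguous"

print(sorted(POS_TAGS["run"]))             # two POS tags: ambiguous
print(classify("run", "after_determiner")) # referent
print(classify("run", "after_subject"))    # predicate
```

The syntactic table never resolves on its own; the semantic classification resolves immediately, because the context picks out the meaning rather than multiplying alternatives.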
Conclusion
Today’s generative AI requires not only improvements in performance to achieve the goals of AGI/ASI and so on (better-than-human capabilities), but also new data centers consuming ever-larger quantities of electricity and water.
Generative AI like ChatGPT is unsustainable.
The alternative comes from cognitive AI, such as that built on the bedrock of RRG and Patom brain theory. Not only is there the potential to solve many otherwise unsolved problems with generative AI, but it can run on devices without the need for additional datacenters, power generation or water cooling plants.
We have a decision to make. Do we continue to speculate that generative AI can one day be improved into profitable products, albeit with its unsustainable need for power and cooling? Or do we consider the working, sustainable alternatives that have yet to receive the billions in investment that generative AI has?
We can bring Sustainable AI to society that benefits us all. Please join us on the journey.
Do you want to get more involved?
If you want to get involved with our upcoming project to enable a gamified language-learning system, the site to track progress is here (click). You can stay informed by adding your email to the contact list, or make a small deposit to become a VIP.
Do you want to read more?
If you want to read about the application of brain science to the problems of AI, you can read my latest book, “How to Solve AI with Our Brain: The Final Frontier in Science,” which explains the facets of brain science we can apply and why the best analogy today is of the brain as a pattern-matcher. The book link is here on Amazon in the US (and elsewhere).
In the cover design below, you can see the human brain incorporating its senses, such as the eyes. Brain science is being applied to a human-like robot, improving it towards full human emulation in looks and capability.

