Plugging in AI, Part 1: escalating demand on the grid to power the computing revolution

Josh Rapperport
Innovation Endeavors
Apr 25, 2024 · 8 min read


This is the first of a two-part series: this piece lays out the problem, and the next will cover the solutions we are excited to see.

Compute will be the currency of the future. If that’s true, then it will be pegged to energy. This reality — that scaling AI hinges on scaling our electrical infrastructure — has become starkly clear in the last year, prompting AI leaders like Sam Altman to acknowledge that a platform shift in computing will require massive amounts of power. In March, Altman said that “energy is the hardest part” of the journey to scale AI, and this week, he announced his investment in Exowatt, a startup building solar power for data centers. However, few AI leaders will explicitly acknowledge the more concerning truth: deploying massive compute resources and interconnecting them to the grid will directly displace electrical loads that are actually decarbonizing.

AI is simply more energy demand, no different from more air conditioning (granted, AI may well enable broader climate solutions over time). All of the electrical load needed for EVs, green industrial processes, and building electrification will compete for capacity on the grid — capacity that was scarce even before the advent of AI. In other words, independent of AI, we will need at least three times more electricity to decarbonize the global economy, and as data center demand explodes, our finite grid capacity will be strained by both emissions-reducing electrical loads and computing demand. The bottleneck on the grid is therefore one of the most important challenges facing the energy transition, and it’s also quickly becoming one of the most immediate risks to scaling AI. Solutions that allow AI to scale — without displacing critical decarbonization infrastructure — are essential to avoid derailing our collective emissions reduction goals.

The relationship between emerging digital technology and energy has historically been overlooked and is only now coming to the forefront of public discourse. The Venn diagram of experts who understand the esoteric nature of power markets and who understand the future electricity demand of AI is small. This lack of overlap — combined with utilities being highly regulated, poorly incentivized, and slow to adopt novel machine learning and data science approaches — is, in large part, why the multi-trillion-dollar energy sector and the grid itself have been slow to innovate and to anticipate the complexities ahead.

Electricity consumption in North America and Europe was relatively flat for the past two decades, but since 2020, as the energy transition to clean electric power for mobility and buildings has taken off, we’ve arrived at an inflection point. For some forward-thinking folks (kudos to Evan Caron, Shayle Kann, and Chase Lochmiller), it has been clear for years that the megawatt hour (MWh) would become the fundamental currency of innovation. As AI joins the party, on top of already-climbing electricity demand for the energy transition, that future is here. Some estimates indicate the amount of electricity required to power the world’s data centers could jump by 50% by 2027, and this month, nine of the top ten U.S. electric utilities indicated that data centers are a main source of customer growth.

So why exactly is this global experiment so challenging?

Part 1: The problem — the electricity gauntlet

Before the AI inflection point

The escalating crunch between supply and demand on the grid has been a slow-motion car crash for decades, a dynamic Energy Impact Partners has dubbed the electricity gauntlet. To understand how we got here, we need to dig into how the planning and expansion of electrical load on the grid is managed.

In order to plug any generation or other electrical asset into the grid, it must go into the utility or Independent System Operator (ISO) interconnection queue. The ISO is an independent nonprofit that is generally responsible for ensuring the reliable operation of the electrical grid within a specific geographic area: overseeing the transmission of electricity, managing grid reliability, coordinating power generation and distribution, and facilitating wholesale electricity markets. ISOs also often develop and implement standards and protocols for grid operation, including management of grid congestion, maintenance of frequency and voltage, and compliance with regulatory requirements.

Electrical utilities play the most important role: they own and operate the actual grid infrastructure for generating, transmitting, and distributing electricity to consumers within a specific service territory. Utilities are responsible for building and maintaining power plants, substations, transmission lines, and distribution networks to deliver electricity reliably and safely to homes, businesses, and factories. They typically provide services like billing, customer support, outage response, and energy efficiency programs to end customers.

In deregulated markets, such as Texas and parts of the Northeast, the ISO handles the queue of new generation and supply, like power plants, solar, and wind, while the utility manages the demand side, encompassing all electrical loads on the grid, such as buildings, charging infrastructure, and, increasingly, data centers. In regulated markets, such as California with PG&E, the ISO and the utility are more tightly coupled, and the utility is responsible for both sides.

The Federal Energy Regulatory Commission (FERC) regulates the power sector and strictly manages the communication between ISOs and utilities. Even within the same organization, like PG&E, planning and interconnection management are often not well coordinated across generation and demand. Siloed data and fundamental mismatches across large geographies lead to huge delays, bottlenecks, and backlogs in interconnection. The other way to slice this problem is transmission scale vs. distribution scale: smaller projects that don’t require system upgrades (distribution scale) can work directly with the utility, whereas larger projects that stand to impact the transmission network (transmission scale) involve the ISO.

The complexity of this planning and interconnection process is exploding as the grid becomes more decentralized and intermittent with increasing renewables penetration. The grid was designed in the early 20th century for large-scale, dispatchable power plants (i.e., coal and natural gas). Decades later, the accelerating penetration of smaller, less predictable assets (e.g., rooftop solar) is making scenario modeling and optimization significantly more difficult for these legacy organizations. Today, roughly 80% of power in the U.S. is dispatchable, and that share is declining at an accelerating rate.

Perhaps the simplest way to describe the grid’s current struggle is the interconnection queue backlog. Over the last two years, the backlog has grown roughly 40% a year, exploding to 2,600 GW (more than all existing U.S. generating capacity today), roughly 80% of which is currently solar and storage projects. The time to interconnect has ballooned from two years per project in 2010 to five years today. In many ways, this is a tremendously encouraging signal of progress toward electrifying our economy. Global deployment of renewable energy grew 50% year over year in 2023, and at least a 20% CAGR is expected in the U.S. for the coming decade.
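For intuition on how quickly a backlog compounding at roughly 40% per year gets out of hand, here is a minimal back-of-envelope sketch in Python. It simply projects the ~2,600 GW figure forward under a constant-growth assumption; both numbers come from the estimates cited above, and the five-year horizon is purely illustrative, not a forecast.

```python
# Back-of-envelope: how a ~2,600 GW interconnection backlog compounds
# if the recent ~40% annual growth rate (cited above) were to continue.
# Purely illustrative; not a forecast.
queue_gw = 2600          # current backlog, in gigawatts
annual_growth = 0.40     # approximate year-over-year growth

for year in range(1, 6):
    queue_gw *= 1 + annual_growth
    print(f"Year {year}: ~{queue_gw:,.0f} GW in the queue")

# Under these assumptions the backlog would grow more than fivefold in
# five years, far faster than projects can be studied and interconnected.
```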

This growth rate is both astounding and unexpected. For example, PJM Interconnection, the largest ISO in the U.S., this year tripled its forecast for electricity demand growth over the next decade in its service footprint. This rate of change has surprised even the experts, and it speaks both to accelerating electrification and to the fragmented coordination and planning process across key constituents.

Satisfying the accelerating demand for EV charging and building electrification will be a massive undertaking. California recently published a study indicating that the state will require $50B in grid, transmission, and distribution upgrades by 2035, after a decade of lower-than-historical-average infrastructure deployment. This estimate could be off as a result of regulatory decisions, grid program design choices, or market shifts driven by customer or auto OEM behavior, but it is likely directionally right. Electrifying 13M vehicles — the 2035 target for California — will become a more-than-ten-year interconnection problem.
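To see why, consider a rough, assumption-laden sketch of the incremental load from those 13M vehicles. The per-vehicle mileage and efficiency figures below are illustrative assumptions on my part, not numbers from the California study.

```python
# Rough illustration of the incremental load from California's 13M-EV target.
# Per-vehicle mileage and efficiency are assumed values for illustration only.
num_evs = 13_000_000        # California's 2035 electrification target
miles_per_year = 11_000     # assumed average annual mileage per vehicle
kwh_per_mile = 0.3          # assumed average EV efficiency

annual_twh = num_evs * miles_per_year * kwh_per_mile / 1e9   # kWh -> TWh
average_gw = annual_twh * 1e3 / 8760                         # TWh/yr -> average GW

print(f"Incremental energy: ~{annual_twh:.0f} TWh per year")
print(f"Average continuous load: ~{average_gw:.1f} GW")
# Peak charging demand lands far above this average (and is clustered on
# specific feeders), which is why distribution upgrades dominate the cost.
```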

Utilities, through no fault of their own, are historically poorly incentivized to find the lowest-cost path to grid upgrades given the ratepayer model, under which they only earn a return by building more CAPEX like transmission lines, substations, and transformers. For this reason, utilities pay less attention to optimizing distribution (the grid downstream of a substation, connecting transmission to buildings and infrastructure) and instead focus on building more stuff, even if it’s sometimes redundant or overbuilt. Utilities are also increasingly facing access-to-capital challenges and declining creditworthiness in the wake of recent wildfire disasters and mismanagement.

Despite all these challenges, there is some good news in terms of utilities’ role in addressing the escalating electricity crisis. Utilities want more load on the grid because they earn revenue from it; if grid constraints make owning an electric vehicle unaffordable, they lose that demand growth. Utilities are eager to scale load — EVs, building electrification, and data centers — but they are less excited about the challenges of intermittent generation. Grid operators have historically been slow and unoptimized in how they address this challenge, and Jigar Shah, director of the Loan Programs Office at the U.S. Department of Energy, is pushing for change here.

After the AI inflection point and the coming crisis

Everything described above was true long before the explosion of AI and compute demand, and that demand has arrived at a remarkable pace. The CEO of Arm, the $100B chip company, recently highlighted that AI models “are just insatiable in terms of their thirst” for electricity: “By the end of the decade, AI data centers could consume as much as 20% to 25% of U.S. power requirements. Today, that’s probably 4% or less.” Some states, like New Mexico, expect 5x more power usage in the coming decade than in the last 100 years combined. Texas power company Vistra has seen its stock surge 85% year to date, largely on AI hype.
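To put those percentages in absolute terms, here is a rough sketch. The ~4,000 TWh figure for total annual U.S. electricity consumption is my own round-number assumption for illustration, not a number from the sources quoted above.

```python
# Rough translation of the quoted data center shares into absolute terms.
# Total U.S. consumption of ~4,000 TWh/year is an assumed round number.
us_total_twh = 4000

today_share = 0.04            # "probably 4% or less" today
end_of_decade_share = 0.225   # midpoint of the quoted 20-25% range

print(f"Today: ~{us_total_twh * today_share:.0f} TWh/year")
print(f"End of decade: ~{us_total_twh * end_of_decade_share:.0f} TWh/year")
# Even with total consumption held flat, that is roughly a 5-6x jump in
# absolute terms, before adding new load from EVs and electrification.
```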

All the major tech companies have woken up to these challenges, and many have been planning how to strategically manage their electricity needs for over a decade. Google has been a leader in carbon-free energy for years and has made strong progress toward its 2030 goal of 100% 24/7 clean power across all operations, including data centers. But now, as its appetite for compute, and thereby electricity, has reached a fever pitch, the path forward is becoming fraught. Bill Gates and 20 tech execs gathered at a summit with utilities in March to discuss this exact issue, which is surprisingly fast for an energy industry that tends to take years to respond to new market conditions. Similarly, Texas’ ISO, ERCOT, is hosting its inaugural innovation summit this year.

So, where do we go from here? How can we scale compute and hit our decarbonization goals? The solution will require mass coordination and capital across grid operators, utilities, energy developers, tech companies, and startups. In our next piece, we will share some of the solutions we hope to see.

Thank you to Evan Caron, James McWalter, and Aram Shumavon for reviewing this.

Josh Rapperport
Early stage VC @ Innovation Endeavors. Focused on climate and industrials (energy, construction, mining, manufacturing)